Python SDK API Reference

Complete API reference for the Mentiora Python SDK.

MentioraClient

Main client class for interacting with the Mentiora platform.

Constructor

MentioraClient(config: MentioraConfig)

MentioraConfig

  • api_key: str (required) - Project API key (from the Mentiora platform)
  • environment: 'staging' | 'production' (required) - Target environment
  • project_id: str (optional) - Project UUID
  • base_url: str (optional) - Override base URL (for testing)
  • timeout: int (optional) - Request timeout in milliseconds (default: 30000)
  • retries: int (optional) - Maximum retry attempts (default: 3)

Properties

tracing

Access to tracing functionality.

client.tracing: TracingClient

Methods

close() -> None

Close HTTP clients and clean up resources.

client.close()

aclose() -> None

Close async HTTP clients and clean up resources.

await client.aclose()

TracingClient

Client for sending traces to the Mentiora platform.

Methods

send_trace(event: TraceEvent) -> SendTraceResult

Send a trace event to the platform (synchronous).

Parameters:

  • event: TraceEvent - The trace event to send

Returns: SendTraceResult

Example:

from datetime import datetime

from mentiora import TraceEvent

result = client.tracing.send_trace(TraceEvent(
    trace_id='trace-123',  # must be a UUID v7 in practice; see the Types note
    span_id='span-456',    # must be a UUID v7 in practice
    name='llm.call',
    type='llm',
    start_time=datetime.now(),
    end_time=datetime.now(),
))
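The example above uses datetime.now() for both timestamps for brevity; in real code you would capture start_time before the traced operation and end_time after it, and can derive duration_ms from the two. A minimal stdlib-only sketch (no SDK required, the sleep stands in for the real LLM call):

```python
import time
from datetime import datetime, timezone

# Capture timestamps around the traced operation.
start_time = datetime.now(timezone.utc)
time.sleep(0.05)  # stand-in for the actual LLM or tool call
end_time = datetime.now(timezone.utc)

# duration_ms as an integer millisecond count, matching TraceEvent.duration_ms.
duration_ms = int((end_time - start_time).total_seconds() * 1000)
```

Both datetime objects and ISO 8601 strings are accepted for start_time and end_time, so `start_time.isoformat()` works equally well.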

send_trace_async(event: TraceEvent) -> SendTraceResult

Send a trace event to the platform (asynchronous).

Parameters:

  • event: TraceEvent - The trace event to send

Returns: SendTraceResult

Example:

result = await client.tracing.send_trace_async(TraceEvent(
    trace_id='trace-123',
    span_id='span-456',
    name='llm.call',
    type='llm',
    start_time=datetime.now(),
    end_time=datetime.now(),
))

flush() -> None

Flush any pending traces in the queue (synchronous).

Returns: None

Example:

client.tracing.flush()

flush_async() -> None

Flush any pending traces in the queue (asynchronous).

Returns: None

Example:

await client.tracing.flush_async()

Types

TraceEvent

class UsageInfo:
    prompt_tokens: int | None
    completion_tokens: int | None
    total_tokens: int | None

class TraceEvent:
    trace_id: str                    # Unique trace ID (UUID v7 format)
    span_id: str                     # Unique span ID (UUID v7 format)
    parent_span_id: str | None       # Parent span for nesting (UUID v7 format)
    name: str                        # Span name, e.g., 'llm.call', 'tool.execute'
    type: TraceType                  # 'llm' | 'tool' | 'chat' | 'error' | 'custom'
    input: dict | None               # Prompt, tool input, etc.
    output: dict | None              # Response, tool result
    start_time: datetime | str       # datetime or ISO 8601 string
    end_time: datetime | str | None
    duration_ms: int | None
    metadata: dict[str, Any] | None
    tags: list[str] | None
    error: TraceError | None
    usage: UsageInfo | None          # Token usage (LLM-specific)
    model: str | None                # Model name (e.g., 'gpt-4', 'claude-3')
    provider: str | None             # Provider name (e.g., 'openai', 'anthropic')

Note: trace_id and span_id must be in UUID v7 format. The plugins automatically generate UUID v7 IDs.
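If you construct TraceEvents by hand rather than through the plugins, you need to supply your own UUID v7 IDs. Python's standard-library uuid module does not ship a uuid7() function (as of 3.12), so here is a hand-rolled sketch of the UUID v7 layout (48-bit Unix-millisecond timestamp, version and variant bits, random tail) — an illustration, not the SDK's own generator:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Sketch of a UUID v7: 48-bit unix-ms timestamp + 74 random bits."""
    ts_ms = int(time.time() * 1000) & (2**48 - 1)
    rand = int.from_bytes(os.urandom(10), 'big')  # 80 random bits
    value = (ts_ms << 80) | rand
    # Force the version field (bits 76-79) to 7.
    value &= ~(0xF << 76)
    value |= 0x7 << 76
    # Force the variant field (bits 62-63) to 0b10 (RFC 4122).
    value &= ~(0x3 << 62)
    value |= 0x2 << 62
    return uuid.UUID(int=value)

trace_id = str(uuid7())
```

Because the high bits are a timestamp, IDs generated this way sort roughly by creation time, which is what makes UUID v7 a good fit for trace IDs.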

TraceError

class TraceError:
    message: str
    type: str | None
    stack: str | None

SendTraceResult

class SendTraceResult:
    success: bool
    trace_id: str
    span_id: str
    error: str | None

TraceType

TraceType = Literal['llm', 'tool', 'chat', 'error', 'custom']

Environment

Environment = Literal['staging', 'production']
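Both Literals can be inspected at runtime with typing.get_args, which is handy for validating user-supplied strings before building a config. A small sketch — the Literal definitions are reproduced locally so the snippet is self-contained, and check_environment is an illustrative helper, not part of the SDK:

```python
from typing import Literal, get_args

TraceType = Literal['llm', 'tool', 'chat', 'error', 'custom']
Environment = Literal['staging', 'production']

def check_environment(value: str) -> str:
    """Fail early on a bad environment string instead of at request time."""
    if value not in get_args(Environment):
        raise ValueError(
            f"environment must be one of {get_args(Environment)}, got {value!r}"
        )
    return value

check_environment('staging')  # passes silently
```

The same pattern works for TraceType when accepting span types from configuration or user input.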

Errors

ConfigurationError

Raised when the client configuration is invalid.

class ConfigurationError(MentioraError):
    def __init__(self, message: str)

ValidationError

Raised when trace event data is invalid.

class ValidationError(MentioraError):
    def __init__(self, message: str)

NetworkError

Raised when a network or HTTP error occurs.

class NetworkError(MentioraError):
    def __init__(self, message: str, status_code: int | None = None)
    status_code: int | None

MentioraError

Base exception for all Mentiora SDK errors.

class MentioraError(Exception):
    def __init__(self, message: str, code: str)
    message: str
    code: str
    name: str

Plugins

track_openai

Wraps an OpenAI client to automatically trace API calls.

def track_openai(
    openai_client: OpenAI | AsyncOpenAI,
    options: TrackOpenAIOptions,
) -> OpenAI | AsyncOpenAI

Parameters:

  • openai_client: OpenAI | AsyncOpenAI - The OpenAI client instance to wrap (supports both sync and async clients)
  • options: TrackOpenAIOptions - Plugin configuration options

Returns: Wrapped OpenAI client with tracing enabled

TrackOpenAIOptions:

  • mentiora_client: MentioraClient (required) - Mentiora client instance for sending traces
  • tags: list[str] (optional) - Tags to add to all traces
  • metadata: dict[str, Any] (optional) - Metadata to add to all traces
  • project_id: str (optional) - Project ID override

Example:

from mentiora import track_openai, TrackOpenAIOptions
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()

tracked_client = track_openai(
    openai_client,
    TrackOpenAIOptions(
        mentiora_client=mentiora_client,
        tags=['production'],
    ),
)

MentioraTracingLangChain

Callback handler for automatically tracing LangChain executions.

class MentioraTracingLangChain(BaseCallbackHandler):
    def __init__(self, options: MentioraTracingLangChainOptions)

MentioraTracingLangChainOptions:

  • mentiora_client: MentioraClient (required) - Mentiora client instance for sending traces
  • tags: list[str] (optional) - Tags to add to all traces
  • metadata: dict[str, Any] (optional) - Metadata to add to all traces
  • project_id: str (optional) - Project ID override

Example:

from mentiora import MentioraTracingLangChain, MentioraTracingLangChainOptions

callback = MentioraTracingLangChain(MentioraTracingLangChainOptions(
    mentiora_client=mentiora_client,
    tags=['production'],
))

await chain.ainvoke({'input': '...'}, config={'callbacks': [callback]})

Note: This class extends LangChain's BaseCallbackHandler and implements all required callback methods for tracing LLM calls, chain executions, tool calls, and agent operations.