Prerequisites: `pip install "mellea[telemetry]"` and Ollama running locally.
Mellea provides built-in OpenTelemetry instrumentation
across three independent pillars: tracing, metrics, and logging. Each can be enabled
separately. All telemetry is opt-in: if the `[telemetry]` extra is not installed,
every telemetry call is a silent no-op.
Note: OpenTelemetry is an optional dependency. Mellea works normally without it. Install with `pip install "mellea[telemetry]"` or `uv pip install "mellea[telemetry]"`.
## Configuration
All telemetry is configured via environment variables.

### General
| Variable | Description | Default |
|---|---|---|
| `OTEL_SERVICE_NAME` | Service name for all telemetry signals | `mellea` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint for all telemetry signals | none |
### Tracing variables

| Variable | Description | Default |
|---|---|---|
| `MELLEA_TRACE_APPLICATION` | Enable application-level tracing | `false` |
| `MELLEA_TRACE_BACKEND` | Enable backend-level tracing | `false` |
| `MELLEA_TRACE_CONSOLE` | Print traces to console (debugging) | `false` |
### Metrics variables

| Variable | Description | Default |
|---|---|---|
| `MELLEA_METRICS_ENABLED` | Enable metrics collection | `false` |
| `MELLEA_METRICS_CONSOLE` | Print metrics to console (debugging) | `false` |
| `MELLEA_METRICS_OTLP` | Enable OTLP metrics exporter | `false` |
| `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` | Metrics-specific OTLP endpoint (overrides general) | none |
| `MELLEA_METRICS_PROMETHEUS` | Enable Prometheus metric reader | `false` |
| `OTEL_METRIC_EXPORT_INTERVAL` | Export interval in milliseconds | `60000` |
| `MELLEA_PRICING_FILE` | Path to a JSON file with custom model pricing overrides | none |
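The pricing-override file is plain JSON. A hedged sketch of generating one and pointing `MELLEA_PRICING_FILE` at it; the per-model key layout (`input_per_million_usd` / `output_per_million_usd`) and the model name used here are illustrative assumptions, not Mellea's documented schema:

```python
import json
import os
import tempfile

# Hypothetical pricing-override layout: per-model USD cost per million tokens.
# The actual keys Mellea expects may differ; check the pricing-file docs.
pricing = {
    "granite3.3:8b": {
        "input_per_million_usd": 0.0,
        "output_per_million_usd": 0.0,
    },
}

path = os.path.join(tempfile.mkdtemp(), "pricing.json")
with open(path, "w") as f:
    json.dump(pricing, f, indent=2)

# Point Mellea at the override file before the first LLM call.
os.environ["MELLEA_PRICING_FILE"] = path
```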
### Logging variables

| Variable | Description | Default |
|---|---|---|
| `MELLEA_LOGS_OTLP` | Enable OTLP logs exporter | `false` |
| `OTEL_EXPORTER_OTLP_LOG_ENDPOINT` | Logs-specific OTLP endpoint (overrides general) | none |
## Quick start
Enable tracing and metrics with console output to verify everything works.

### Checking telemetry status programmatically
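A minimal, stdlib-only sketch of enabling the console exporters and checking the relevant settings from your own code. The helper functions below are illustrative, not Mellea APIs; only the environment-variable names come from the tables above:

```python
import importlib.util
import os

# Enable tracing and metrics with console output (variable names from the tables above).
os.environ["MELLEA_TRACE_APPLICATION"] = "true"
os.environ["MELLEA_TRACE_CONSOLE"] = "true"
os.environ["MELLEA_METRICS_ENABLED"] = "true"
os.environ["MELLEA_METRICS_CONSOLE"] = "true"

def telemetry_available() -> bool:
    """True if the optional OpenTelemetry dependency is importable."""
    return importlib.util.find_spec("opentelemetry") is not None

def tracing_enabled() -> bool:
    """True if either trace scope is switched on via the environment."""
    return any(
        os.environ.get(var, "false").lower() == "true"
        for var in ("MELLEA_TRACE_APPLICATION", "MELLEA_TRACE_BACKEND")
    )

print(tracing_enabled())
```

Because telemetry is opt-in, these checks degrade gracefully: with the extra not installed, `telemetry_available()` is simply `False` and Mellea's telemetry calls are no-ops.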
## Tracing
Mellea has two independent trace scopes:

- `mellea.application`: user-facing operations, including session lifecycle, `@generative` calls, `instruct()` and `act()`, sampling strategies, and requirement validation.
- `mellea.backend`: LLM backend interactions following the OpenTelemetry Gen-AI Semantic Conventions. Records model calls, token usage, finish reasons, and API latency.
## Metrics
Mellea automatically records the following metrics across all backends using OpenTelemetry. No code changes are required:

- Token counters: `mellea.llm.tokens.input` and `mellea.llm.tokens.output`, recorded after each LLM call.
- Latency histograms: `mellea.llm.request.duration` (every request) and `mellea.llm.ttfb` (streaming requests only).
- Error counter: `mellea.llm.errors`, incremented on each failed backend call and classified by semantic error type.
- Cost counter: `mellea.llm.cost.usd`, the estimated request cost in USD, recorded when pricing data is available for the model.
- Sampling counters: `mellea.sampling.attempts`, `mellea.sampling.successes`, and `mellea.sampling.failures`, per strategy.
- Requirement counters: `mellea.requirement.checks` and `mellea.requirement.failures`, per requirement type.
- Tool counter: `mellea.tool.calls`, by tool name and status.
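As a sanity check when reading the cost counter, a per-request cost estimate of this kind is typically just token counts multiplied by per-million-token prices. A small sketch with made-up prices, not Mellea's internal implementation:

```python
def estimate_cost_usd(
    input_tokens: int,
    output_tokens: int,
    input_price_per_million: float,
    output_price_per_million: float,
) -> float:
    """Estimate request cost in USD from token counts and per-million-token prices."""
    return (
        input_tokens * input_price_per_million
        + output_tokens * output_price_per_million
    ) / 1_000_000

# Illustrative prices only, not real model pricing:
# 1200 input tokens at $0.15/M plus 300 output tokens at $0.60/M.
print(estimate_cost_usd(1200, 300, 0.15, 0.60))
```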
Mellea also provides `create_counter`, `create_histogram`, and
`create_up_down_counter` for instrumenting your own application code.
Mellea supports three exporters that can run simultaneously:

- Console: print to stdout for debugging
- OTLP: export to production observability platforms
- Prometheus: register with `prometheus_client` for scraping
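All three exporters can be switched on together before Mellea is imported. A minimal sketch using only the environment variables from the tables above; the endpoint URL is illustrative:

```python
import os

# Enable all three metric exporters at once (variable names from the tables above).
os.environ["MELLEA_METRICS_ENABLED"] = "true"
os.environ["MELLEA_METRICS_CONSOLE"] = "true"     # print to stdout
os.environ["MELLEA_METRICS_OTLP"] = "true"        # push to an OTLP collector
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"  # example endpoint
os.environ["MELLEA_METRICS_PROMETHEUS"] = "true"  # register with prometheus_client

enabled = sorted(
    name for name, value in os.environ.items()
    if name.startswith("MELLEA_METRICS_") and value == "true"
)
print(enabled)
```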
## Logging
Mellea uses a color-coded console logger (`MelleaLogger`) by default. When the
`[telemetry]` extra is installed and `MELLEA_LOGS_OTLP=true` is set, Mellea
also exports logs to an OTLP collector alongside the existing console output.
See Logging for console logging
configuration, OTLP log export setup, and programmatic access via
`get_otlp_log_handler()`.
Full example: `docs/examples/telemetry/telemetry_example.py`
See also: