OpenAIBackend connects Mellea to the OpenAI API and to any server that implements
the OpenAI HTTP API — including LM Studio, Ollama’s OpenAI endpoint, vLLM, and
OpenAI-compatible providers.
Prerequisites: pip install mellea, plus either a valid OpenAI API key or a
running local OpenAI-compatible server.
OpenAI API
Set your API key as an environment variable (recommended):
Note: Never commit API keys to source control. Use environment variables or a secrets manager in production.
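A minimal sketch, assuming the OpenAIBackend and MelleaSession names from the Mellea tutorial; the shell export and model name are illustrative, and running it requires a valid key:

```python
# First, in your shell:  export OPENAI_API_KEY="sk-..."
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# When api_key is not passed explicitly, the backend falls back to the
# OPENAI_API_KEY environment variable.
backend = OpenAIBackend(model_id="gpt-4o-mini")
m = MelleaSession(backend)

result = m.instruct("Say hello in one short sentence.")
print(str(result))
```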
OpenAI-compatible local servers
OpenAIBackend works with any server that implements the OpenAI HTTP API. No real
API key is needed for local servers — pass any non-empty string:
LM Studio
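A sketch for LM Studio, assuming its default local-server port (1234); the model name must match whatever model you have loaded in the LM Studio UI:

```python
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# LM Studio's local server listens on port 1234 by default.
backend = OpenAIBackend(
    model_id="qwen2.5-7b-instruct",     # must match the model loaded in LM Studio
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",                # any non-empty string works locally
)
m = MelleaSession(backend)
```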
Ollama’s OpenAI endpoint
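The same pattern for Ollama's OpenAI-compatible endpoint, which is served under /v1 on Ollama's default port (11434); the model tag is illustrative:

```python
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# Ollama exposes an OpenAI-compatible API at /v1 on its default port.
backend = OpenAIBackend(
    model_id="granite3.3:8b",            # an Ollama model tag you have pulled
    base_url="http://localhost:11434/v1",
    api_key="ollama",                    # ignored by Ollama, but must be non-empty
)
m = MelleaSession(backend)
```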
vLLM
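And for vLLM, assuming the default port of its OpenAI-compatible server (8000); the model ID must match the model the server was launched with:

```python
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# vLLM's OpenAI-compatible server defaults to port 8000; the model_id must
# match the --model argument the server was started with.
backend = OpenAIBackend(
    model_id="ibm-granite/granite-3.3-8b-instruct",
    base_url="http://localhost:8000/v1",
    api_key="vllm",
)
m = MelleaSession(backend)
```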
Using base_url from the environment
Set OPENAI_BASE_URL to avoid repeating the base URL in your code:
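For example (a sketch; the URL and model tag are illustrative, and setting the variable from Python here only stands in for a shell export):

```python
# Equivalent to:  export OPENAI_BASE_URL="http://localhost:11434/v1"
import os
os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:11434/v1")

from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# With OPENAI_BASE_URL set, no base_url argument is needed.
backend = OpenAIBackend(model_id="granite3.3:8b", api_key="local")
m = MelleaSession(backend)
```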
The base_url and api_key constructor parameters take precedence over the
corresponding environment variables when both are set.
Vision and multimodal input
OpenAIBackend supports image inputs for vision-capable models. Pass a Mellea
ImageBlock, or pass PIL Image objects directly without wrapping them:
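A minimal sketch, assuming the images parameter on instruct() described in the Mellea tutorial; the model name and file path are illustrative, and a vision-capable model is required:

```python
from PIL import Image
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

backend = OpenAIBackend(model_id="gpt-4o")
m = MelleaSession(backend)

img = Image.open("photo.png")
# PIL images can be passed directly via the images parameter.
result = m.instruct("Describe this image in one sentence.", images=[img])
print(str(result))
```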
Backend note: Vision requires a model that supports image inputs (e.g., gpt-4o, qwen2.5vl:7b). Text-only models will raise an error if images are passed.
Structured output with format
Use the format parameter to constrain generation to a Pydantic schema:
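A sketch of the pattern, assuming that instruct() accepts a Pydantic class via format and that the raw JSON string is available on the result's value attribute, as in the Mellea tutorial; the schema is illustrative:

```python
from pydantic import BaseModel
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

class Person(BaseModel):
    name: str
    age: int

backend = OpenAIBackend(model_id="gpt-4o-mini")
m = MelleaSession(backend)

# format= constrains generation to the Person schema; the raw JSON
# string can then be validated back into the model.
result = m.instruct("Generate a fictional person.", format=Person)
person = Person.model_validate_json(result.value)
print(person.name, person.age)
```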
Model options
Set generation parameters with ModelOption. Options passed to the backend
constructor apply to every call; options passed to instruct() or chat() apply
to that call only and take precedence.
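The two levels can be sketched as follows, assuming ModelOption lives in mellea.backends.types and that both the constructor and instruct() accept a model_options dict; the option values are illustrative:

```python
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend
from mellea.backends.types import ModelOption

# Backend-level defaults apply to every call made through this backend.
backend = OpenAIBackend(
    model_id="gpt-4o-mini",
    model_options={ModelOption.TEMPERATURE: 0.2},
)
m = MelleaSession(backend)

# Per-call options apply to this call only and override the defaults.
result = m.instruct(
    "Brainstorm three product names.",
    model_options={ModelOption.TEMPERATURE: 0.9, ModelOption.MAX_NEW_TOKENS: 200},
)
```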
Anthropic via OpenAI-compatible endpoint
Anthropic’s API is not OpenAI-compatible natively, but if you access it through a proxy that exposes an OpenAI-compatible interface, you can use OpenAIBackend:
Note (review needed): Direct Anthropic API compatibility via this path has not been verified against the current Mellea version. If you are using Anthropic, LiteLLM provides a verified integration — see Backends and Configuration.
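If you do go through such a proxy, the configuration is the same as for any other OpenAI-compatible server. A hypothetical sketch (the proxy URL, port, key, and model name are all placeholders for whatever your proxy is configured with):

```python
from mellea import MelleaSession
from mellea.backends.openai import OpenAIBackend

# Hypothetical proxy exposing an OpenAI-compatible interface in front of
# Anthropic; all values below depend entirely on your proxy's configuration.
backend = OpenAIBackend(
    model_id="claude-sonnet-4-20250514",  # model name as your proxy expects it
    base_url="http://localhost:4000/v1",  # your proxy's OpenAI-compatible endpoint
    api_key="proxy-key",
)
m = MelleaSession(backend)
```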
Troubleshooting
OPENAI_API_KEY not set error
Either export the environment variable or pass api_key directly to OpenAIBackend.
For local servers, pass any non-empty string (e.g., api_key="local").
Connection refused at custom base_url
Confirm the local server is running and listening on the expected port. For Ollama,
run ollama serve; for LM Studio, start the local server from the LM Studio UI.
Model not found
The model string must exactly match the name your server recognises. For OpenAI, refer to the OpenAI models page. For local servers, list available models from the server’s API or UI.
See also: Backends and Configuration | Enforce Structured Output