## Why does `start_session()` fail with a connection error?
Mellea’s default backend is Ollama. If Ollama is not running, any call that
reaches the backend raises a connection error.
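The quickest fix is to start Ollama with `ollama serve`. A minimal pre-flight check, using only the standard library and assuming Ollama’s default local endpoint (`http://localhost:11434`):

```python
import urllib.error
import urllib.request

def ollama_is_running(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if a server answers at Ollama's default endpoint."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

if not ollama_is_running():
    print("Ollama is not reachable; start it with `ollama serve` and retry.")
```

Running this before `start_session()` turns an opaque connection error into an actionable message.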
## How do I use a model other than granite4:micro?
Pass the `model_id` parameter to `start_session()`, or construct a
`MelleaSession` directly for full control over backend options.
## Can I use Mellea without Ollama?
Yes. Ollama is the default backend but not the only one. Mellea ships with backends for OpenAI-compatible APIs, HuggingFace local inference, IBM WatsonX, and LiteLLM (which itself proxies dozens of providers). Install the backend you need, then select it via `start_session()` or `MelleaSession`.
## Why does my `@generative` function return the wrong type?
The `@generative` decorator uses the function’s docstring as the prompt. If the
docstring is vague, the model may return output that cannot be parsed into the
declared return type.
Compare these two definitions:
If parsing fails with a `ComponentParseError`, add an example to the docstring;
the model needs a concrete illustration of the expected format.
## What is the difference between `instruct()` and `@generative`?
Both call the LLM, but they differ in when you write the prompt and how you
pass variables.
`instruct()` takes a prompt string with `{{variable}}` placeholders at call
time. It is best for one-off instructions where the prompt text varies:
`@generative` defines the prompt once in the function’s docstring. It is best
when you want a reusable, typed, unit-testable function:
`@generative` functions also participate in Mellea’s lazy evaluation graph,
which means you can feed a thunk from one generative call into another before
either has been evaluated.
## Why do requirements keep failing?
When the model keeps retrying but the output looks correct, one of the following is usually the cause:

- **The requirement is too strict.** A requirement like “Must be exactly 17 syllables” is difficult for a model to satisfy reliably. Relax the constraint or provide the model with more context.
- **The default budget is too low.** `instruct()` defaults to `loop_budget=2`. Increase it.
- **The validation function is wrong.** If you are using a custom verifier, check that it returns `True` for valid output. Use `return_sampling_results=True` to inspect each attempt.
## How do I see what the model is actually receiving?
Use `GenerateLog` to capture the rendered prompt. Enable application tracing or
backend tracing and check the `response` and `gen_ai.usage.input_tokens`
attributes on the spans.
For a quick local inspection without a trace backend, enable console tracing,
or examine the `GenerateLog` objects returned with sampling results.
## Does Mellea support async?
Yes. Every synchronous method has an async counterpart:

| Sync | Async |
|---|---|
| `m.chat()` | `await m.achat()` |
| `m.instruct()` | `await m.ainstruct()` |
| `m.act()` | `await m.aact()` |
| `mfuncs.act()` | `await mfuncs.aact()` |
`@generative` functions also work in an async context when you await them:
Note: If you are inside a Jupyter notebook, the event loop is already running. Use `await` directly or install `nest_asyncio` to allow nested loops.
## How do I contribute?
Read the contributing guide first:

- Fork the repository and clone it.
- Install dependencies: `uv sync --all-extras --all-groups`
- Install pre-commit hooks: `pre-commit install`
- Create a branch: `git checkout -b feat/your-feature`
- Run tests: `uv run pytest -m "not qualitative"`
- Open a pull request.

Use conventional commit prefixes (`feat:`, `fix:`, `docs:`, `refactor:`). Pre-commit
runs ruff, mypy, and codespell automatically.
## Where can I get help?
- GitHub Issues: Report bugs and request features at the project’s GitHub Issues page.
- GitHub Discussions: Ask questions and share ideas in the Discussions tab.
- Examples: The `docs/examples/` directory contains runnable examples covering every major feature.
- Common Errors: See Common Errors for a reference table of known error messages and fixes.
## See also
- Common Errors — a reference table of error messages, diagnostic steps, and fixes.
- Quick Start — install Mellea and run your first generative function.