Hello world
By default, start_session() connects to Ollama and uses IBM Granite 4 Micro
(granite4:micro). Make sure Ollama is running before you run this:
An instruct() call returns a
ModelOutputThunk; call str() on it (or access .value) to get the string.
Full example: docs/examples/tutorial/simple_email.py
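A minimal sketch of such a session, assuming the import path from mellea import start_session and a local Ollama server with granite4:micro pulled (the generated text will vary by model):

```python
from mellea import start_session

# Connects to the local Ollama server and granite4:micro by default.
m = start_session()

# instruct() returns a ModelOutputThunk, not a plain string.
result = m.instruct("Write a one-sentence greeting for a project README.")

# str() (or .value) yields the generated text.
print(str(result))
```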
User variables
Embed dynamic values in instructions using {{double_braces}}. The description is
treated as a Jinja2 template:
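Because the description is a Jinja2 template, you can preview the substitution with the jinja2 library directly. This is a sketch of the mechanics, not Mellea's internal code; the instruct() call shown in the comment assumes a user_variables keyword argument:

```python
from jinja2 import Template

# Mellea renders the instruction description as a Jinja2 template,
# filling {{placeholders}} from a user_variables mapping.
description = "Write a short email to {{name}} about {{topic}}."
user_variables = {"name": "Olivia", "topic": "the office party"}

rendered = Template(description).render(**user_variables)
print(rendered)

# In a Mellea session this would be (hypothetical call shape):
# m.instruct(description, user_variables=user_variables)
```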
Requirements
Pass a list of plain-English requirements to constrain the output. Mellea runs an instruct–validate–repair loop: if any requirement fails, it asks the model to fix its output.
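A hedged sketch of that loop (requirements is a keyword argument to instruct(); the RejectionSamplingStrategy import path and its loop_budget parameter are assumptions and may differ in your Mellea version; requires a running Ollama server):

```python
from mellea import start_session
from mellea.stdlib.sampling import RejectionSamplingStrategy

m = start_session()

# If a requirement fails validation, Mellea re-prompts the model,
# retrying up to loop_budget times.
email = m.instruct(
    "Write an email inviting interns to an office party at 3:30pm.",
    requirements=[
        "Be formal.",
        "Use 'Dear interns' as the greeting.",
    ],
    strategy=RejectionSamplingStrategy(loop_budget=3),
)
print(str(email))
```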
Core concepts
Sessions — MelleaSession is the main entry point. start_session() creates one
with defaults: Ollama backend, Granite 4 Micro, SimpleContext (single-turn).
Instructions — instruct() builds a structured Instruction component, not a
raw chat message. It supports a description, requirements, user variables, grounding
context, and few-shot examples.
Contexts — SimpleContext holds a single turn. ChatContext accumulates turns for
multi-turn conversations. Pass ctx=ChatContext() to start_session() for stateful
chat.
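A multi-turn sketch, assuming ChatContext is importable from mellea.stdlib.base (the import path may differ in your Mellea version; requires a running Ollama server):

```python
from mellea import start_session
from mellea.stdlib.base import ChatContext

# ChatContext accumulates turns, so the second instruct()
# can refer back to the first.
m = start_session(ctx=ChatContext())

m.instruct("Suggest a name for a pet goldfish.")
followup = m.instruct("Now write a two-line poem about that name.")
print(str(followup))
```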
Backends — Pluggable model providers. Ollama is the default. OpenAI, LiteLLM,
HuggingFace, and WatsonX are also supported. See
Backends and Configuration.
Troubleshooting
granite4:micro not found — run ollama pull granite4:micro before starting.
Python 3.13 outlines install failure — outlines requires a Rust compiler.
Either install Rust or pin Python to 3.12.
Intel Mac torch errors — create a conda environment and run
conda install 'torchvision>=0.22.0', then uv pip install mellea inside it.