Prerequisites: pip install mellea, with Ollama running locally.
instruct() is the primary API in Mellea. It builds a structured Instruction
component — not a raw chat message — with a description, requirements, user variables,
grounding context, few-shot examples, and images. The instruction is rendered through
Jinja2 templates and run through an instruct–validate–repair (IVR) loop by default.
Basic instruct()
instruct() returns a ModelOutputThunk. Access the result as a string with
str(email) or via email.value.
User variables
Embed dynamic values in your description using {{double_braces}}. The description
is a Jinja2 template; values are injected at generation time via user_variables.
You can use {{var}} syntax anywhere in the instruction description or requirement
strings.
Requirements
Requirements are declarative constraints. They serve two purposes:
- They are embedded in the prompt so the model knows what to aim for.
- They are checked after generation; if any fail, the IVR loop asks the model to repair its output.
Custom validation functions
For deterministic checks, attach a validation_fn to a Requirement:
simple_validate wraps a callable that returns a bool (or a (bool, str) tuple
with a failure reason) into a validation function.
Shorthand helpers
req() and check() are concise constructors for Requirement:
- req(description) — creates a Requirement with an optional validation_fn
- check(description) — alias for req(), reads naturally for boolean constraints
Sampling strategies and the IVR loop
By default, instruct() uses RejectionSamplingStrategy(loop_budget=2): it
generates once, validates all requirements, and retries up to two times if any fail.
Configure the loop explicitly with strategy:
With return_sampling_results=True, instruct() returns a SamplingResult instead
of a ModelOutputThunk. This lets you inspect whether validation passed and access
all intermediate generations.
Advanced: SOFAI (SOFAISamplingStrategy) is a dual-model strategy that routes
between a fast and a slow model based on confidence. See
Inference-Time Scaling.
Grounding context
Attach reference documents to an instruction for retrieval-augmented generation.
grounding_context maps string keys to document text. The keys are arbitrary
labels — they appear in the prompt as [key] = value so the model can reference
them by name, but there is no required naming convention (e.g. "doc0", "annual_report",
"spec" all work). See Working with Data for richer
document handling using MObjects and RichDocument.
ICL examples
In-context learning (ICL) examples provide few-shot demonstrations. They are rendered as input–output pairs inside the Instruction component’s Jinja2 template, giving the
model concrete examples to follow.
Note (review needed): The instruct() icl_examples parameter API needs verification against the current source before documenting the full signature here.
Images
Pass images to instruct() with the images parameter. Accepts both Mellea
ImageBlock and PIL images:
Backend note: Vision requires a model that supports image inputs (e.g.,
qwen2.5vl:7b via the OpenAI backend). The default Ollama/Granite setup does not
support images.
Multi-turn with ChatContext
instruct() works with ChatContext for stateful multi-turn conversations:
ChatContext accumulates turns. SimpleContext (the default) discards the previous
turn on each call.
chat() vs instruct()
chat() is a lighter-weight alternative that sends a plain message with no
requirements and no sampling strategy:
Use chat() for conversational back-and-forth where you don’t need the IVR machinery.
Use instruct() when you want requirements, validation, or structured output.