Looking to use this in code? See Write Custom Verifiers for practical examples and API details.

Requirements are Mellea’s mechanism for enforcing constraints on generative output. They serve two roles simultaneously: they appear in the prompt so the model knows what to aim for, and they are evaluated after generation so Mellea can detect and repair failures automatically. This page explains the requirements system in depth. For a quick introduction, see The Instruction Model.
What a requirement is
A Requirement is a Component that wraps a natural-language description and an
optional validation function. During the instruct–validate–repair (IVR) loop:
- Mellea renders the requirement descriptions into the prompt alongside the instruction.
- After the model generates output, each requirement is validated against that output.
- If any requirement fails, Mellea sends the model a repair request, listing which requirements failed and why.
- The loop retries up to loop_budget times (default: 2).
Passing plain strings to instruct() is equivalent; they are
converted to Requirement objects internally:
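As a sketch of that equivalence (the instruct() call itself is commented out because it needs a configured model backend; the session names follow Mellea's documented API):

```python
# Plain strings passed as requirements; Mellea wraps each one in a
# Requirement internally. Hedged sketch: the commented lines assume
# Mellea's session API and require a configured backend to run.
requirements = [
    "Mention the price.",
    "Keep the answer under 100 words.",
]

# import mellea
# m = mellea.start_session()
# answer = m.instruct(
#     "Write a short product description.",
#     requirements=requirements,  # strings become Requirement objects
# )
print(len(requirements))  # 2
```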
req() and check() shorthands
req() and check() are concise constructors from mellea.stdlib.requirements:
check() sets check_only=True: the requirement description is
evaluated after generation but not embedded in the prompt. This avoids the
purple elephant effect, where
mentioning something in a negative instruction (e.g., “do not mention purple
elephants”) paradoxically increases the chance the model produces it.
Use req() for positive constraints you want the model to aim for. Use check() for
negative or hard-to-explain constraints that are better left out of the prompt.
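The prompt-embedding distinction can be illustrated with a toy (this is not Mellea's implementation, just the behavior described above):

```python
# Toy illustration: check-style requirements are validated but excluded
# from the prompt, which avoids the purple elephant effect.
reqs = [
    ("Answer in a formal tone.", False),           # req(): in prompt
    ("Does not mention purple elephants.", True),  # check(): hidden
]
prompt_lines = [text for text, check_only in reqs if not check_only]
validated = [text for text, _ in reqs]
print(prompt_lines)    # only the req()-style description appears
print(len(validated))  # 2 — both are still validated after generation
```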
Custom validation functions
For deterministic checks, attach a validation_fn. Mellea skips LLM-as-a-judge and
runs your function directly:
simple_validate is a convenience wrapper. It accepts a function that receives the
most recent model output as a string and returns either:
- bool — pass or fail; no reason is captured
- tuple[bool, str] — pass/fail plus a reason string that Mellea includes in the repair request
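A deterministic validator of the shape simple_validate() accepts can be written as plain Python; the attachment via req() is sketched in comments because it assumes the names from mellea.stdlib.requirements and needs the package installed:

```python
# A validator matching the shape simple_validate() accepts: it receives
# the most recent model output as a string, returns tuple[bool, str].
def under_100_words(output: str) -> tuple[bool, str]:
    n = len(output.split())
    return (n < 100, f"output has {n} words (limit: 100)")

# Hedged sketch of attaching it (exact signatures are assumptions):
# from mellea.stdlib.requirements import req, simple_validate
# length_req = req("Keep it under 100 words.",
#                  validation_fn=simple_validate(under_100_words))

ok, reason = under_100_words("Short and sweet.")
print(ok, reason)  # True output has 3 words (limit: 100)
```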
ValidationResult in depth
simple_validate produces ValidationResult objects automatically. When you write
a full validation function directly, you construct ValidationResult yourself:
The validation_fn signature is Callable[[Context], ValidationResult]. The
Context object gives you access to the full session state if needed — not just the
last output.
ValidationResult fields:
| Field | Type | Description |
|---|---|---|
| result | bool | Whether the requirement passed. |
| reason | str \| None | Human-readable explanation, included in repair requests. |
| score | float \| None | Optional numeric score from your validator. |
| thunk | ModelOutputThunk \| None | The model output used, if your validator ran a backend call. |
| context | Context \| None | The context snapshot at validation time. |
The reason field is the most useful in practice — a clear reason string helps the
model make a targeted repair rather than regenerating blindly.
Preconditions in generative functions
The @generative decorator supports precondition_requirements alongside the
standard requirements. Preconditions are validated against the inputs to the
function before generation starts. If they fail, Mellea raises PreconditionException
immediately — no generation attempt is made and no IVR loop runs.
PreconditionException.validation is a list of ValidationResult objects for every
requirement that failed, giving you a complete picture of what went wrong.
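The flow can be illustrated with a stand-in; this mirrors the behavior described above, not Mellea's implementation, and the @generative usage is sketched only in comments because its exact signature is an assumption here:

```python
# Stand-in mirroring the described behavior: preconditions run against
# the inputs, and failure raises before any generation is attempted.
class PreconditionException(Exception):
    def __init__(self, validation):
        super().__init__("precondition failed")
        self.validation = validation  # every failed result, per the page

def check_preconditions(inputs: dict, checks) -> None:
    failures = []
    for description, fn in checks:
        ok, reason = fn(inputs)
        if not ok:
            failures.append((description, reason))
    if failures:
        raise PreconditionException(failures)

# With Mellea itself, roughly (names from this page; signature assumed):
# @generative(precondition_requirements=[...])
# def summarize(text: str) -> str:
#     """Summarize the text in one sentence."""

checks = [("text must be non-empty",
           lambda i: (bool(i["text"].strip()), "text is empty"))]
try:
    check_preconditions({"text": "   "}, checks)
except PreconditionException as e:
    print(e.validation)  # [('text must be non-empty', 'text is empty')]
```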
Note:precondition_requirementsrequire a strategy to be specified (e.g.,RejectionSamplingStrategy()). Without a strategy the precondition check is skipped with a warning.
Inspecting validation results
When you use return_sampling_results=True, instruct() returns a SamplingResult
instead of a ModelOutputThunk. This exposes per-attempt validation results:
SamplingResult.sample_validations is a list of attempts, each containing a list
of (Requirement, ValidationResult) tuples. SamplingResult.result_validations
gives you the same for the final selected output only.
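The shape of that data can be shown with stand-in tuples; with Mellea you would iterate result.sample_validations from instruct(..., return_sampling_results=True) instead:

```python
# Stand-in data in the shape described above: each attempt is a list of
# (requirement, validation-result) pairs.
sample_validations = [
    [("Mention the price.", (False, "no price found"))],  # attempt 0
    [("Mention the price.", (True, None))],               # attempt 1
]
for i, attempt in enumerate(sample_validations):
    for description, (passed, reason) in attempt:
        status = "pass" if passed else f"fail ({reason})"
        print(f"attempt {i}: {description!r} -> {status}")
```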
LLM-as-a-judge vs custom validators
| Approach | When to use |
|---|---|
| Plain string requirement | Subjective or hard-to-code constraints (“be polite”, “stay on topic”). |
| simple_validate(lambda ...) | Simple deterministic checks (length, regex, JSON parse). |
| Full validation_fn | Multi-step logic, external API calls, or access to session context. |
| ALoraRequirement | Fine-tuned constraint LoRA — fastest at scale, requires adapter. |
Prefer simple_validate for deterministic checks and
reserve LLM-based requirements for subjective criteria that cannot be coded directly.
Advanced: ALoraRequirement (from mellea.stdlib.requirements) uses a fine-tuned LoRA adapter for validation instead of LLM-as-a-judge. It falls back to LLM-as-a-judge if the adapter is unavailable. See LoRA and aLoRA Adapters.
Composing requirements
Requirements are composable: mix strings, req(), check(), and Requirement
objects freely in the same list:
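For example, a mixed list might look like this. The Mellea constructors are commented out because their exact signatures are assumptions based on this page; the deterministic helper runs standalone:

```python
# Hedged sketch of a mixed requirements list (constructor names per
# mellea.stdlib.requirements; exact signatures are assumptions):
#
# from mellea.stdlib.requirements import req, check, simple_validate
#
# requirements = [
#     "Stay on topic.",                            # plain string
#     req("Use a formal tone."),                   # prompted + validated
#     check("Does not reveal internal details."),  # validated only
#     req("Keep it under 100 words.",
#         validation_fn=simple_validate(word_limit)),
# ]

# The deterministic piece referenced above runs standalone:
def word_limit(output: str) -> bool:
    return len(output.split()) < 100

print(word_limit("a concise answer"))  # True
```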