
Mellea helps you manage the unreliable part of every AI-powered pipeline: the LLM call itself. It replaces ad-hoc prompt chains and brittle agents with structured generative programs — Python code where LLM calls are first-class operations governed by type annotations, requirement verifiers, and principled repair loops.
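To make the idea concrete, here is a minimal sketch of the pattern described above: an LLM call governed by requirement verifiers and a bounded repair loop. This is an illustration only, not Mellea's actual API; `fake_llm`, `require`, and `generate_with_repair` are hypothetical names, and `fake_llm` stands in for a real model call.

```python
def fake_llm(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a real backend call. For illustration,
    # pretend the model only complies after one round of feedback.
    if attempt == 0:
        return "maybe"
    return "yes"

def require(pred, feedback):
    """A requirement pairs a verifier predicate with repair feedback."""
    return {"check": pred, "feedback": feedback}

def generate_with_repair(prompt, requirements, max_repairs=3):
    """Call the LLM, run every verifier on the output, and re-prompt
    with feedback for each failed requirement until all pass or the
    repair budget is exhausted."""
    for attempt in range(max_repairs + 1):
        output = fake_llm(prompt, attempt)
        failed = [r for r in requirements if not r["check"](output)]
        if not failed:
            return output
        # Repair: fold the verifiers' feedback back into the prompt.
        prompt += "\n" + "\n".join(r["feedback"] for r in failed)
    raise RuntimeError("requirements not satisfied within repair budget")

reqs = [require(lambda s: s in {"yes", "no"}, "Answer strictly 'yes' or 'no'.")]
print(generate_with_repair("Is water wet?", reqs))  # passes on the repaired attempt
```

The point of the structure is that success criteria live in code (the verifiers), not buried in prompt text, so failures are detected and repaired programmatically rather than by hand.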

```shell
uv pip install mellea
```

How Mellea works

Mellea’s design rests on three interlocking ideas.

Key patterns

Backends

Mellea is backend-agnostic: the same generative program runs unchanged across supported inference engines. See Backends and configuration for the full list of supported backends and how to configure them.
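Backend-agnosticism generally means programs are written against an interface rather than a specific engine. The sketch below illustrates that design in plain Python; the `Backend` protocol and both engine classes are hypothetical names for illustration, not Mellea's actual classes (see the linked guide for the real configuration).

```python
from typing import Protocol

class Backend(Protocol):
    """Hypothetical interface: anything that can complete a prompt."""
    def generate(self, prompt: str) -> str: ...

class LocalBackend:
    # Stand-in for a locally hosted inference engine.
    def generate(self, prompt: str) -> str:
        return f"[local] {prompt}"

class RemoteBackend:
    # Stand-in for a hosted inference API.
    def generate(self, prompt: str) -> str:
        return f"[remote] {prompt}"

def summarize(backend: Backend, text: str) -> str:
    """The program is written once against the interface;
    swapping engines changes no program logic."""
    return backend.generate(f"Summarize: {text}")

print(summarize(LocalBackend(), "hello"))   # same program ...
print(summarize(RemoteBackend(), "hello"))  # ... different engine
```

Because the program depends only on the interface, switching engines is a configuration change rather than a code change.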

How-to guides


GitHub · PyPI · Discussions