Each example in this section is a complete, runnable Python program. The pages walk through the code section by section so you can see how the pieces fit together. Copy any example as a starting point for your own project.

## Examples in this section

| Example | What it shows |
| --- | --- |
| Data extraction pipeline | Use `@generative` with a typed return to pull structured data from unstructured text |
| Legacy code integration | Apply `@mify` to existing Python classes so the model can act on them |
| Resilient RAG with fallback | Build a FAISS retrieval pipeline with an LLM relevance filter before generation |
| Traced generation loop | Enable OpenTelemetry application and backend traces with two environment variables |

## All example categories

The repository contains many more runnable examples than the four documented above. Every category has its own `README.md` and one or more `.py` files ready to run.

### Core concepts

| Category | What it shows |
| --- | --- |
| `instruct_validate_repair/` | The IVR loop end-to-end: basic generation, adding requirements, automatic repair on failure, custom validators |
| `generative_slots/` | `@generative` functions with typed returns, pipeline composition, `ChatContext` persona injection, pre/postcondition checks |
| `context/` | Context inspection, sampling with context trees, parallel context branches |
| `sessions/` | Custom session types and backend selection |

### Data and documents

| Category | What it shows |
| --- | --- |
| `information_extraction/` | Named entity recognition and type-safe structured extraction with Pydantic |
| `mobject/` | Table queries and transformations using `MObject` structured data types |
| `mify/` | `@mify` on existing classes: custom string representations, field filtering, `funcs_include` |
| `rag/` | FAISS vector search, a `@generative` bool relevance filter, `grounding_context` for grounded generation |

### Agents and tools

| Category | What it shows |
| --- | --- |
| `agents/` | ReACT reasoning-and-acting loop, multi-turn tool workflows |
| `tools/` | `@tool` definition, code interpreter integration, tool argument validation, safe `eval` patterns |
| `mini_researcher/` | Complete research assistant: multi-model architecture, document retrieval, safety checks, custom validation pipeline |

### Safety and validation

| Category | What it shows |
| --- | --- |
| `safety/` | `GuardianCheck` for harm, jailbreak, profanity, social bias, violence, and groundedness; shared backend pattern |

### Integration and deployment

| Category | What it shows |
| --- | --- |
| `m_serve/` | Deploying Mellea programs as REST APIs with production deployment patterns |
| `library_interop/` | LangChain message conversion, OpenAI format compatibility, cross-library workflows |
| `mcp/` | MCP tool creation, Claude Desktop integration, Langflow integration |
| `bedrock/` | Amazon Bedrock backend configuration and usage |

### Performance and advanced sampling

| Category | What it shows |
| --- | --- |
| `aLora/` | Training aLoRA adapters for fast constraint checking; performance optimisation |
| `intrinsics/` | Answer relevance, hallucination detection, citation validation, context relevance: specialised adapter-backed checks |
| `sofai/` | Two-tier sampling: fast-model iteration with escalation to a slow model; cost optimisation |

### Multimodal

| Category | What it shows |
| --- | --- |
| `image_text_models/` | Vision-language models, `ImageBlock`, multimodal prompting, backend support matrix |

### Observability

| Category | What it shows |
| --- | --- |
| `telemetry/` | OpenTelemetry application and backend traces; span export configuration |

### Experimental

| Category | What it shows |
| --- | --- |
| `melp/` | ⚠️ Experimental lazy evaluation: thunks, deferred execution, advanced control flow |

## Running the examples

All examples are in the `docs/examples/` directory of the repository. Unless otherwise noted, run them with:

```shell
python docs/examples/<folder>/<file>.py
```
Some examples declare inline script dependencies in a PEP 723 `# /// script` metadata block and can be run with `uv run` instead:

```shell
uv run docs/examples/<folder>/<file>.py
```
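As an illustration, a PEP 723 metadata block sits in a comment header at the top of the script. The dependency list below is hypothetical; check each example's own header for its real requirements:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "mellea",
# ]
# ///

# Everything below the closing `# ///` line is ordinary Python.
# `uv run` reads the comment block above, resolves the listed
# dependencies into an isolated environment, then executes the script.
message = "dependencies are resolved before the script body runs"
print(message)
```

Plain `python` ignores the metadata (it is just comments), so such scripts still run the usual way if the dependencies are already installed.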
Default backend: `start_session()` with no arguments connects to a local Ollama instance running IBM Granite 4 Micro (`granite4:micro`). Make sure Ollama is running before you execute any example.