Prerequisites: pip install mellea, and Ollama running locally. LangChain interop additionally requires pip install langchain-community.
Note: An agent is a generative program in which an LLM determines the control flow. The patterns on this page range from simple one-shot tool use to goal-driven agentic loops.
Defining tools with @tool
The @tool decorator turns a regular Python function into a tool the LLM can call.
Mellea uses the function’s docstring and type hints to build the tool schema:
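As a sketch, a tool definition might look like the following (the import path for @tool is an assumption; check the Mellea API reference for your version):

```python
from mellea.stdlib.tools import tool  # import path is an assumption

@tool
def get_weather(city: str, unit: str = "celsius") -> str:
    """Return the current weather for a city.

    Args:
        city: Name of the city to look up.
        unit: Temperature unit, "celsius" or "fahrenheit".
    """
    # Stub implementation; a real tool would call a weather API.
    return f"Sunny, 21 degrees ({unit}) in {city}"
```

Mellea reads the docstring and the type hints above to build the schema the model sees, so keep both accurate and descriptive.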
Use @tool(name="...") to override the tool name as it appears to the model:
Tools created with @tool also keep a .run() method for direct invocation without going through the LLM:
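A sketch of both features together, using a hypothetical weather tool (the import path, and whether .run() forwards keyword arguments exactly as shown, are assumptions):

```python
from mellea.stdlib.tools import tool  # import path is an assumption

@tool(name="fetch_weather")  # the model sees "fetch_weather", not "get_weather"
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

# Invoke the underlying function directly; no LLM call is made.
print(get_weather.run(city="Paris"))
```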
Passing tools to instruct()
Pass tools via ModelOption.TOOLS. The model can then choose to call them:
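A minimal sketch, with a hypothetical get_weather tool (the ModelOption and tool import paths are assumptions):

```python
import mellea
from mellea.backends.types import ModelOption  # import path is an assumption
from mellea.stdlib.tools import tool           # import path is an assumption

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

m = mellea.start_session()  # defaults to a local Ollama model
result = m.instruct(
    "What is the weather in Paris right now?",
    model_options={ModelOption.TOOLS: [get_weather]},
)
print(result)
```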
Requiring a tool call
Use the uses_tool requirement to enforce that the model actually calls a specific tool:
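A sketch, assuming uses_tool takes the tool's name and that requirements are passed to instruct() as a list (import paths are also assumptions):

```python
import mellea
from mellea.backends.types import ModelOption     # assumed path
from mellea.stdlib.requirements import uses_tool  # assumed path
from mellea.stdlib.tools import tool              # assumed path

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

m = mellea.start_session()
result = m.instruct(
    "What is the weather in Paris?",
    # Validation fails unless the model actually calls get_weather.
    requirements=[uses_tool("get_weather")],
    model_options={ModelOption.TOOLS: [get_weather]},
)
```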
With tool_calls=True, the result exposes a .tool_calls dict you can inspect and execute:
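A sketch of inspecting and executing the generated calls (the attribute and method names on the call objects are assumptions; check the Mellea API reference):

```python
import mellea
from mellea.backends.types import ModelOption  # assumed path
from mellea.stdlib.tools import tool           # assumed path

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

m = mellea.start_session()
result = m.instruct(
    "What is the weather in Paris?",
    model_options={ModelOption.TOOLS: [get_weather]},
    tool_calls=True,
)

# .tool_calls maps tool names to call objects capturing the generated arguments.
for name, call in result.tool_calls.items():
    print(name, call.args)     # attribute name is an assumption
    output = call.call_func()  # method name is an assumption
```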
Validating tool arguments
tool_arg_validator adds fine-grained validation over the arguments the model
generates for a tool call:
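A sketch, assuming tool_arg_validator takes a tool name, an argument name, and a predicate over the generated value (both the signature and the import paths are assumptions):

```python
import mellea
from mellea.backends.types import ModelOption              # assumed path
from mellea.stdlib.requirements import tool_arg_validator  # assumed path
from mellea.stdlib.tools import tool                       # assumed path

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

def known_city(city: str) -> bool:
    # Reject calls whose "city" argument is not in our whitelist.
    return city in {"Paris", "London", "Tokyo"}

m = mellea.start_session()
result = m.instruct(
    "What is the weather in Paris?",
    requirements=[tool_arg_validator("get_weather", "city", known_city)],
    model_options={ModelOption.TOOLS: [get_weather]},
)
```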
LangChain and smolagents interop
Import tools directly from LangChain or smolagents. Install the required packages first: uv pip install langchain-community ddgs.
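A sketch wrapping a LangChain DuckDuckGo search tool (DuckDuckGoSearchRun is a real langchain-community tool; the MelleaTool and ModelOption import paths are assumptions):

```python
import mellea
from langchain_community.tools import DuckDuckGoSearchRun
from mellea.backends.types import ModelOption  # assumed path
from mellea.stdlib.tools import MelleaTool     # assumed path

# Wrap the LangChain tool so Mellea can pass it to the model.
search = MelleaTool.from_langchain(DuckDuckGoSearchRun())

m = mellea.start_session()
result = m.instruct(
    "Search the web for today's weather in Paris and summarize it.",
    model_options={ModelOption.TOOLS: [search]},
)
```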
MelleaTool.from_smolagents() works the same way for smolagents tools.
ReACT agent
react() is a built-in goal-driven agentic loop. It iteratively selects and calls
tools until the goal is met or a step budget is reached:
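A sketch (the react import location follows the module mentioned in the Advanced note below; the call signature, including the step-budget parameter name, is an assumption):

```python
import mellea
from mellea.stdlib.components.react import react  # symbol location is an assumption
from mellea.stdlib.tools import tool              # assumed path

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

m = mellea.start_session()
answer = react(
    m,
    goal="Report the current weather in Paris in one sentence.",
    tools=[get_weather],
    budget=5,  # step budget; parameter name is an assumption
)
```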
react() can return a structured Pydantic object by passing a format parameter:
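A sketch of structured output, assuming format accepts a Pydantic model class (as instruct() does elsewhere in Mellea) and the same assumed react signature as above:

```python
import mellea
from pydantic import BaseModel
from mellea.stdlib.components.react import react  # symbol location is an assumption
from mellea.stdlib.tools import tool              # assumed path

class WeatherReport(BaseModel):
    city: str
    conditions: str

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

m = mellea.start_session()
report = react(
    m,
    goal="Report the current weather in Paris.",
    tools=[get_weather],
    format=WeatherReport,  # result is parsed into a WeatherReport instance
)
```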
Advanced: The core idea of ReACT is to alternate between reasoning (“Thought”) and acting (“Action”) in a loop: generate a thought, choose an action, supply arguments, observe the tool output, then check whether the goal is achieved. Mellea’s react() implements this loop using chat() with structured output at each step, backed by @generative for constrained argument selection. You can build a custom ReACT-style loop by hand using the same primitives — see mellea.stdlib.components.react for reference.
Code interpreter
Mellea includes a built-in Python code interpreter tool. Pass local_code_interpreter as a tool to instruct() to let the LLM write and execute code. Combine it with uses_tool and tool_arg_validator to constrain what gets generated (see the examples above).
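A sketch (the local_code_interpreter and ModelOption import paths are assumptions):

```python
import mellea
from mellea.backends.types import ModelOption           # assumed path
from mellea.stdlib.tools import local_code_interpreter  # assumed path

m = mellea.start_session()
result = m.instruct(
    "Compute the 20th Fibonacci number by writing and running Python code.",
    model_options={ModelOption.TOOLS: [local_code_interpreter]},
)
print(result)
```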
Warning: local_code_interpreter executes Python code in the current process.
Do not use it in production contexts without sandboxing.
See also: Tutorial 04: Making Agents Reliable | Instruct, Validate, Repair