`m decompose` takes a complex task description and uses an LLM to:
- Extract the constraints the output must satisfy
- Identify the subtasks needed to complete the goal, with dependency ordering
- Generate a prompt template for each subtask
- Output a ready-to-run Python script that executes each subtask in order
Prerequisites: Mellea installed (`uv add mellea`), and Ollama running locally (or an OpenAI-compatible endpoint).
Basic usage
Write your task description to a text file, then run `m decompose` on it.

Note: The output directory must already exist; the command will error if it does not. On first run with Ollama, the default model is downloaded automatically (~15 GB for the full model). Use `--model-id` with a smaller model (e.g. `granite4:micro`) to avoid the large download.

This produces two files in `./output/`:
- `m_decomp_result.json` — the full decomposition: subtask list, constraints, dependency graph, and prompt templates
- `m_decomp_result.py` — a runnable Python script that calls `m.instruct()` for each subtask in dependency order
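To give a feel for the JSON file, the snippet below builds an illustrative sample with the fields documented in the Python API section and round-trips it through `json`. The field values are invented for illustration; the tool's exact schema may differ.

```python
import json

# Illustrative sketch of what m_decomp_result.json might contain.
# Field names follow this document; the values are made up.
sample = {
    "subtasks": [
        {
            "subtask": "Summarize the input document",
            "tag": "summarize",
            "depends_on": [],
            "prompt_template": "Summarize the following text: {{document}}",
            "input_vars_required": ["document"],
            "constraints": ["Keep the summary under 200 words"],
        }
    ]
}

# Serialize and parse back, as a consumer of the result file would.
parsed = json.loads(json.dumps(sample, indent=2))
print(parsed["subtasks"][0]["tag"])  # → summarize
```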
Example
Given a `task.txt`:
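The original example file is not reproduced here; a hypothetical task description of the kind `m decompose` expects might look like:

```text
Build a weekly sales report generator.
Constraints: the output must be valid Markdown; all numbers must come
from the provided CSV; keep the report under one page.
The work involves loading the data, computing per-region totals, and
writing a short summary for each region.
```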
Backend options
`m decompose` defaults to Ollama with `granite4:micro`. Pass `--backend` and
`--model-id` to use a different inference engine:
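For instance, a sketch of such an invocation. Only `--backend` and `--model-id` are flags named in this document; the argument order and the backend/model values shown are assumptions, so check `m decompose --help` for the exact form:

```shell
# Hypothetical invocation; flag values are placeholders.
m decompose task.txt --backend openai --model-id gpt-4o-mini
```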
Python API
Use the decompose pipeline directly from Python. Each entry in `result["subtasks"]` has the following fields:
| Field | Description |
|---|---|
| `subtask` | Description of the subtask |
| `tag` | Short identifier used for dependency references |
| `depends_on` | List of `tag` values this subtask depends on |
| `prompt_template` | Ready-to-use prompt string for `m.instruct()` |
| `input_vars_required` | Variables that must be filled in the template |
| `constraints` | Constraints from the original prompt that apply here |
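The generated script runs subtasks in dependency order. The snippet below sketches how `depends_on` tags can be resolved into such an order with the standard library's `graphlib`; the subtask list is illustrative (in real use it would come from `result["subtasks"]`), and the actual generated script may order tasks differently.

```python
from graphlib import TopologicalSorter

# Illustrative subtasks: each task names the tags it depends on.
subtasks = [
    {"tag": "report", "depends_on": ["totals", "summary"]},
    {"tag": "totals", "depends_on": ["load"]},
    {"tag": "summary", "depends_on": ["load"]},
    {"tag": "load", "depends_on": []},
]

# Map each tag to its set of prerequisite tags, then sort topologically.
graph = {s["tag"]: set(s["depends_on"]) for s in subtasks}
order = list(TopologicalSorter(graph).static_order())
print(order)  # "load" comes first, "report" last
```

Each tag in `order` can then be looked up to retrieve its `prompt_template` and fed to `m.instruct()`, with earlier outputs filling the later templates' `input_vars_required`.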
When to use `m decompose`
`m decompose` is useful when:
- A task prompt is too large or complex for a single LLM call
- The work can be broken into sequential or parallel subtasks
- You want a first-pass structure you can then edit by hand
- You are exploring how to decompose a problem before writing code
If the task fits comfortably in a single prompt, skip decomposition and call `m.instruct()` directly.
Full example: `docs/examples/m_decompose/`
See also: Tools and Agents | Refactor Prompts with CLI