Core decomposition pipeline that breaks a task prompt into structured subtasks. Provides the decompose() function, which orchestrates a series of LLM calls (subtask listing, constraint extraction, validation strategy selection, prompt generation, and constraint assignment) to produce a DecompPipelineResult containing subtasks, per-subtask prompts, constraints, and dependency information. Supports Ollama, OpenAI-compatible, and RITS inference backends.

Functions

FUNC decompose

decompose(task_prompt: str, user_input_variable: list[str] | None = None, model_id: str = 'mistral-small3.2:latest', backend: DecompBackend = DecompBackend.ollama, backend_req_timeout: int = 300, backend_endpoint: str | None = None, backend_api_key: str | None = None) -> DecompPipelineResult
Break a task prompt into structured subtasks using a multi-step LLM pipeline. Orchestrates a series of sequential LLM calls to produce a fully structured decomposition: subtask listing, constraint extraction, validation strategy selection, prompt template generation, and per-subtask constraint assignment. The number of calls depends on the number of constraints extracted. Args:
  • task_prompt: Natural-language description of the task to decompose.
  • user_input_variable: Optional list of variable names that will be templated into generated prompts as user-provided input data. Pass None or an empty list if the task requires no input variables.
  • model_id: Model name or ID used for all pipeline steps.
  • backend: Inference backend — "ollama", "openai", or "rits".
  • backend_req_timeout: Request timeout in seconds for model inference calls.
  • backend_endpoint: Base URL of the OpenAI-compatible endpoint. Required when backend is "openai" or "rits".
  • backend_api_key: API key for the configured endpoint. Required when backend is "openai" or "rits".
Returns:
  • A DecompPipelineResult containing the original prompt, subtask list, identified constraints, and fully annotated subtask objects with prompt templates, constraint assignments, and dependency information.

Classes

CLASS ConstraintResult

A single constraint paired with its assigned validation strategy. Attributes:
  • constraint: Natural-language description of the constraint.
  • validation_strategy: Strategy assigned to validate the constraint; either "code" or "llm".

CLASS DecompSubtasksResult

The full structured result for one decomposed subtask. Attributes:
  • subtask: Natural-language description of the subtask.
  • tag: Short identifier for the subtask, used as a variable name in Jinja2 templates and dependency references.
  • constraints: List of constraints assigned to this subtask, each with a validation strategy.
  • prompt_template: Jinja2 prompt template string for this subtask, with {{ variable }} placeholders for inputs and prior subtask results.
  • input_vars_required: Ordered list of user-provided input variable names referenced in prompt_template.
  • depends_on: Ordered list of subtask tags whose results are referenced in prompt_template.
  • generated_response: Optional field holding the model response produced during execution; not present until the subtask runs.
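At execution time these fields combine: prompt_template is rendered with values for each name in input_vars_required plus the generated_response of each tag in depends_on. A minimal sketch of that substitution, using a hand-rolled `{{ name }}` renderer instead of Jinja2 and hypothetical field values (the library's own rendering may differ):

```python
import re

def render_template(template: str, context: dict[str, str]) -> str:
    """Replace {{ name }} placeholders with values from context."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: context[m.group(1)],
        template,
    )

# Hypothetical subtask fields, mirroring DecompSubtasksResult.
prompt_template = "Summarize {{ document }} following this outline: {{ outline }}"
input_vars_required = ["document"]   # user-provided input variable
depends_on = ["outline"]             # tag of a prior subtask

context = {
    "document": "the quarterly report",        # user input
    "outline": "1. Revenue 2. Costs",          # prior subtask's generated_response
}
print(render_template(prompt_template, context))
```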

CLASS DecompPipelineResult

The complete output of a decomposition pipeline run. Attributes:
  • original_task_prompt: The raw task prompt provided by the user.
  • subtask_list: Ordered list of subtask descriptions produced by the subtask-listing stage.
  • identified_constraints: Constraints extracted from the original task prompt, each with a validation strategy.
  • subtasks: Fully annotated subtask objects with prompt templates, constraint assignments, and dependency information.
  • final_response: Optional field holding the aggregated final response produced during execution; not present until the pipeline runs.
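Because each subtask's depends_on lists the tags whose results it consumes, an executor can schedule subtasks in dependency order. A minimal sketch with the standard-library graphlib module, using hypothetical tags (this illustrates the data model, not the pipeline's actual executor):

```python
from graphlib import TopologicalSorter

# Hypothetical tag -> depends_on mapping, as it might appear across
# DecompPipelineResult.subtasks.
depends_on = {
    "outline": [],
    "draft": ["outline"],
    "summary": ["draft", "outline"],
}

# static_order() yields tags so every dependency precedes its dependents.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # outline before draft, draft before summary
```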

CLASS DecompBackend

Inference backends supported by the decomposition pipeline. Attributes:
  • ollama: Local Ollama inference server backend.
  • openai: Any OpenAI-compatible HTTP endpoint backend.
  • rits: IBM RITS (Remote Inference and Training Service) backend.