Classes

CLASS LiteLLMBackend

A generic LiteLLM-compatible backend.
Methods:

FUNC generate_from_context

generate_from_context(self, action: Component[C] | CBlock, ctx: Context) -> tuple[ModelOutputThunk[C], Context]
See generate_from_chat_context.

FUNC processing

processing(self, mot: ModelOutputThunk, chunk: litellm.ModelResponse | litellm.ModelResponseStream)
Called during generation to add information from a single ModelResponse, or from a streamed ModelResponseStream chunk, to the ModelOutputThunk. For LiteLLM, tool-call parsing is handled in the post-processing step.

FUNC post_processing

post_processing(self, mot: ModelOutputThunk, conversation: list[dict], tools: dict[str, Callable], thinking, _format)
Called when generation is done.

FUNC generate_from_raw

generate_from_raw(self, actions: list[Component[C]], ctx: Context) -> list[ModelOutputThunk[C]]
generate_from_raw(self, actions: list[Component[C] | CBlock], ctx: Context) -> list[ModelOutputThunk[C | str]]
generate_from_raw(self, actions: Sequence[Component[C] | CBlock], ctx: Context) -> list[ModelOutputThunk]
Generate using the completions API. Sends the provided input to the model without templating. The first two signatures are typing overloads; the third is the implementation signature.