You need: pip install mellea (no extra needed; the OpenAI client is
already included) and a valid AWS_BEARER_TOKEN_BEDROCK value.
Getting a Bedrock API key
Generate a long-term API key from the AWS console (us-east-1, Bedrock API keys) and export it as AWS_BEARER_TOKEN_BEDROCK before running Mellea.
Connecting with create_bedrock_mantle_backend
create_bedrock_mantle_backend returns an OpenAIBackend pointed at the Bedrock
Mantle endpoint. Pass it to MelleaSession as shown above. It reads AWS_BEARER_TOKEN_BEDROCK from the environment and checks
that the requested model is available in the target region before returning.
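A minimal connection sketch follows. The import paths and the model_id parameter name are assumptions; check docs/examples/bedrock/bedrock_openai_example.py for the exact layout.

```python
import os

# Assumed import paths -- verify against the full example in the repo.
from mellea import MelleaSession
from mellea.backends.openai import create_bedrock_mantle_backend

# The backend reads the Bedrock API key from the environment; set it first:
#   export AWS_BEARER_TOKEN_BEDROCK="<your-key>"
assert "AWS_BEARER_TOKEN_BEDROCK" in os.environ

# Returns an OpenAIBackend pointed at the Bedrock Mantle endpoint, after
# checking the model is available in the target region (us-east-1 by default).
backend = create_bedrock_mantle_backend(model_id="amazon.nova-pro-v1:0")

session = MelleaSession(backend)
print(session.chat("Say hello in one sentence."))
```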
Specifying a region
The default region is us-east-1. Pass region to target a different region:
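For example (a sketch; the import path and model_id parameter name are assumptions, and eu-west-1 is just an illustrative choice):

```python
from mellea.backends.openai import create_bedrock_mantle_backend  # assumed path

# Target eu-west-1 instead of the default us-east-1.
backend = create_bedrock_mantle_backend(
    model_id="amazon.nova-pro-v1:0",
    region="eu-west-1",
)
```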
Using a model string directly
If the ModelIdentifier for a Bedrock model is not in model_ids, pass the Bedrock
model ID string directly:
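For instance (a sketch; the import path and the specific model ID are illustrative assumptions):

```python
from mellea.backends.openai import create_bedrock_mantle_backend  # assumed path

# Any Bedrock ModelIdentifier string works, even if it is not listed
# in model_ids (example ID shown for illustration).
backend = create_bedrock_mantle_backend("anthropic.claude-3-5-sonnet-20240620-v1:0")
```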
Bedrock via LiteLLM
An alternative path to Bedrock is the LiteLLMBackend,
which uses the standard AWS credentials chain (IAM roles, ~/.aws/credentials,
environment variables). Model names take the form
bedrock/converse/<bedrock-model-id>.
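A sketch of the LiteLLM path (import paths and the model_id parameter name are assumptions):

```python
from mellea import MelleaSession                      # assumed import paths
from mellea.backends.litellm import LiteLLMBackend

# LiteLLM resolves AWS credentials via the standard chain (IAM role,
# ~/.aws/credentials, or environment variables); no bearer token needed.
backend = LiteLLMBackend(model_id="bedrock/converse/amazon.nova-pro-v1:0")
session = MelleaSession(backend)
```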
See the LiteLLM documentation for
available model IDs and credential setup.
Full example: docs/examples/bedrock/bedrock_openai_example.py
Troubleshooting
AWS_BEARER_TOKEN_BEDROCK not set: generate an API key in the AWS console and export it before running Mellea.
Requested model not available in the target region: pass a different
region to create_bedrock_mantle_backend.
Vision support
Bedrock models accessed via the Mantle endpoint use the OpenAIBackend under the hood,
so vision-capable models (e.g., amazon.nova-pro-v1:0) support image input via
images=[...]. Pass a PIL image or an ImageBlock to
instruct() or chat(). See Use Images and Vision Models.
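A sketch of image input (assuming the import paths shown earlier and that instruct() accepts an images keyword as described above; the file name is illustrative):

```python
from PIL import Image
from mellea import MelleaSession                       # assumed import paths
from mellea.backends.openai import create_bedrock_mantle_backend

backend = create_bedrock_mantle_backend(model_id="amazon.nova-pro-v1:0")  # vision-capable
session = MelleaSession(backend)

# Pass a PIL image alongside the prompt via images=[...].
img = Image.open("chart.png")
answer = session.instruct("Describe this chart.", images=[img])
print(answer)
```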
See also: Backends and Configuration