LLM Providers

Agentomics-ML supports multiple LLM providers out of the box.

Supported Providers

Provider     Environment Variable   Models
OpenRouter   OPENROUTER_API_KEY     100+ models
OpenAI       OPENAI_API_KEY         Use --list-models to see available models
Ollama       Local setup            Local models

OpenRouter

Recommended for beginners: one API key gives access to 100+ models.

Setup

export OPENROUTER_API_KEY="sk-or-v1-xxxxxxxxxxxx"
./run.sh

Available Models

./run.sh --list-models

Model availability depends on your provider and API plan.

Provisioning Key

For temporary access without your own key:

./run.sh --use-provisioning-key

This requires PROVISIONING_OPENROUTER_API_KEY in your .env.
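
For reference, a minimal .env entry might look like this (assuming the conventional KEY=value dotenv format; the value shown is a placeholder, not a real credential):

```shell
# .env -- placeholder value, not a real credential
PROVISIONING_OPENROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxx
```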


OpenAI

Direct access to OpenAI models.

Setup

export OPENAI_API_KEY="sk-xxxxxxxxxxxx"
./run.sh

Available Models

Use ./run.sh --list-models to see what your API key can access.


Ollama (Local Models)

Run models locally for privacy or offline use.

Requirements

  1. Install Ollama
  2. Pull a model: ollama pull <model-name>

Run with:

./run.sh --ollama

Docker mode connects to the Ollama base URL defined in src/utils/providers/configured_providers.yaml (default: http://host.docker.internal:11434/v1). Ensure the Ollama server is running on the host and listening on port 11434 so containers can reach it.

Local Mode

For local mode, set the Ollama base URL in src/utils/providers/configured_providers.yaml to http://localhost:11434/v1, then run:

./run.sh --local

Run ollama list to see available models.


Custom Providers

Add custom providers in src/utils/providers/configured_providers.yaml:

providers:
  - name: "MyProvider"
    base_url: "https://api.myprovider.com/v1"
    apikey: "${MY_PROVIDER_API_KEY}"

Then set the API key:

export MY_PROVIDER_API_KEY="your-key"

For custom providers, use --model explicitly:

./run.sh --model my-custom-model
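
As a rough illustration of how the ${MY_PROVIDER_API_KEY} placeholder resolves, here is the shell-style expansion it mimics (the actual YAML loader in Agentomics-ML may implement this differently):

```shell
# Sketch: resolve a ${VAR}-style placeholder against the environment
export MY_PROVIDER_API_KEY="your-key"
template='${MY_PROVIDER_API_KEY}'    # literal placeholder text, as in the YAML
expanded=$(eval echo "$template")    # expands to the variable's value
echo "$expanded"
```

If the environment variable is unset, the placeholder expands to an empty string, which typically surfaces later as an authentication error.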


Provider Selection

When multiple providers are configured, models from all of them are available. Use --list-models to see every option:

./run.sh --list-models

The interactive mode groups models by provider for easy selection.


Model Recommendations

Use Case          Recommended Model
Default           Use --list-models to pick
Privacy/Offline   Ollama local models

Troubleshooting

"API key not found"

Ensure your key is set:

echo $OPENROUTER_API_KEY  # Should show your key
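
To avoid printing the key itself (for example in shared terminals or logs), a small hypothetical helper can report only whether a key is present; check_key is not part of Agentomics-ML:

```shell
# Report whether a provider key is set without echoing its value
check_key() {
  if [ -z "${1:-}" ]; then
    echo "missing"
  else
    echo "set (${#1} chars)"
  fi
}
check_key "$OPENROUTER_API_KEY"
```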

"Model not available"

Check available models:

./run.sh --list-models

"Rate limit exceeded"

  • Wait and retry
  • Use a different provider
  • Check your API plan limits

Ollama connection refused

Ensure Ollama is running:

ollama list  # Should show pulled models

For Docker mode, verify that host.docker.internal:11434 is reachable from containers (run with ./run.sh --ollama).
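
When debugging connectivity, it can help to split the configured base URL into host and port and probe them directly (a sketch using plain shell parameter expansion; the default URL below comes from configured_providers.yaml):

```shell
# Split the Ollama base URL into host and port for reachability checks,
# e.g. with: nc -z "$host" "$port"
base_url="http://host.docker.internal:11434/v1"
hostport="${base_url#*://}"   # drop the scheme -> host.docker.internal:11434/v1
hostport="${hostport%%/*}"    # drop the path   -> host.docker.internal:11434
host="${hostport%%:*}"        # -> host.docker.internal
port="${hostport##*:}"        # -> 11434
echo "$host $port"
```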