LLM Providers¶
Agentomics-ML supports multiple LLM providers out of the box.
Supported Providers¶
| Provider | Environment Variable | Models |
|---|---|---|
| OpenRouter | `OPENROUTER_API_KEY` | 100+ models |
| OpenAI | `OPENAI_API_KEY` | Use `--list-models` to see available models |
| Ollama | Local setup | Local models |
OpenRouter¶
Recommended for beginners: access to 100+ models with one API key.
Setup¶
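The setup snippet is not shown here; a minimal sketch, assuming the key is read from a `.env` file at the project root (the variable name comes from the table above; the file location and value are placeholders):

```shell
# .env (assumed location) — replace the placeholder with your real OpenRouter key
OPENROUTER_API_KEY=<your-openrouter-key>
```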
Available Models¶
Model availability depends on your provider and API plan. Use `./run.sh --list-models`
to see what is available.
Provisioning Key¶
For temporary access without your own key:
This requires `PROVISIONING_OPENROUTER_API_KEY` in your `.env`.
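A sketch of the corresponding `.env` entry (the variable name is from the line above; the value is a placeholder):

```shell
# .env — provisioning key supplied to you for temporary access
PROVISIONING_OPENROUTER_API_KEY=<provisioning-key>
```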
OpenAI¶
Direct access to OpenAI models.
Setup¶
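As with OpenRouter, the setup snippet is missing here; a minimal sketch, assuming the key is read from a `.env` file at the project root:

```shell
# .env (assumed location) — replace the placeholder with your real OpenAI key
OPENAI_API_KEY=<your-openai-key>
```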
Available Models¶
Use `./run.sh --list-models` to see what your API key can access.
Ollama (Local Models)¶
Run models locally for privacy or offline use.
Requirements¶
- Install Ollama
- Pull a model: `ollama pull <model-name>`
Docker Mode (Recommended)¶
Run with:
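A sketch of the invocation, assuming the `--ollama` flag referenced in the troubleshooting section at the end of this page:

```shell
./run.sh --ollama
```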
Docker mode connects to the Ollama base URL defined in
`src/utils/providers/configured_providers.yaml`
(default: `http://host.docker.internal:11434/v1`).
Ensure your Ollama server is listening on the host at port `11434`.
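One way to confirm this from the host is to query Ollama's OpenAI-compatible models endpoint (assuming your Ollama version exposes `/v1/models`, which recent versions do):

```shell
# Returns a JSON list of locally pulled models if the server is up
curl -s http://localhost:11434/v1/models
```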
Local Mode¶
For local mode, set the Ollama base URL in `src/utils/providers/configured_providers.yaml`
to `http://localhost:11434/v1`, then run:
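The base URL edit described above might look like the following, with the entry shape assumed from the custom-provider example later on this page:

```yaml
# src/utils/providers/configured_providers.yaml — Ollama entry (shape assumed)
providers:
  - name: "Ollama"
    base_url: "http://localhost:11434/v1"
```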
Popular Models¶
Run `ollama list` to see the models you have pulled locally.
Custom Providers¶
Add custom providers in `src/utils/providers/configured_providers.yaml`:
```yaml
providers:
  - name: "MyProvider"
    base_url: "https://api.myprovider.com/v1"
    apikey: "${MY_PROVIDER_API_KEY}"
```
Then set the API key:
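For example, in `.env` (the variable name must match the `${...}` reference in the YAML above; the value is a placeholder):

```shell
# .env — key for the custom provider defined above
MY_PROVIDER_API_KEY=<your-key>
```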
For custom providers, pass `--model` explicitly:
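A sketch, with `<model-name>` as a placeholder for whatever `--list-models` reports for your provider:

```shell
./run.sh --model <model-name>
```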
Provider Selection¶
When multiple providers are configured, they are all available. Use `--list-models` to see all options:
The interactive mode groups models by provider for easy selection.
Model Recommendations¶
| Use Case | Recommended Model |
|---|---|
| Default | Use `--list-models` to pick |
| Privacy/Offline | Ollama local models |
Troubleshooting¶
"API key not found"¶
Ensure your key is set:
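A quick check, assuming the key lives in `.env` (substitute the variable for your provider):

```shell
# Prints the matching line if the key is present; no output means it is missing
grep OPENROUTER_API_KEY .env
```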
"Model not available"¶
Check available models:
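Per the earlier sections:

```shell
./run.sh --list-models
```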
"Rate limit exceeded"¶
- Wait and retry
- Use a different provider
- Check your API plan limits
Ollama connection refused¶
Ensure Ollama is running:
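For example (`ollama list` only succeeds when the server is up; `ollama serve` starts it in the foreground if it is not already running as a background service):

```shell
ollama list
# If the above fails with a connection error:
ollama serve
```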
For Docker mode, verify that `host.docker.internal:11434` is reachable from
containers (run with `./run.sh --ollama`).