Installation¶
Agentomics-ML supports multiple deployment options. Choose the one that best fits your needs.
Prerequisites¶
All installation methods require:

- At least one LLM provider API key, set in a `.env` file or exported as an environment variable
- A local clone of the Agentomics-ML repository (the commands below are run from its root)
Docker with Pre-built Images¶
Fastest setup - Downloads pre-built images from Docker Hub.
Requirements¶
- Docker installed and running
Setup¶
```bash
# Create a .env file (required for Docker mode)
cp .env.example .env

# Edit .env and set at least one API key

# Run with pre-built images
./run.sh --pull-images
```
The images will be downloaded automatically on first run.
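For reference, a minimal `.env` might look like the following. The `OPENROUTER_API_KEY` name is taken from the local-mode example later on this page; other providers may use different variable names, so check `.env.example` for the full list.

```bash
# .env: at least one provider API key is required
OPENROUTER_API_KEY=your-key-here
```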
Docker with Local Build¶
Default mode - Builds Docker images locally.
Requirements¶
- Docker installed and running
Setup¶
```bash
# Create a .env file (required for Docker mode)
cp .env.example .env

# Edit .env and set at least one API key

# Run (will prompt to build images on first run)
./run.sh
```
On first run, you'll be prompted to build the Docker images. This takes a few minutes but only needs to be done once.
Local Mode (No Docker)¶
For development or Google Colab - Runs directly with conda.
Security Notice
Local mode executes code without containerization. Only use in secure environments like Google Colab or your own isolated container.
Requirements¶
- Conda installed
Setup¶
```bash
# Set your API key (export or .env)
export OPENROUTER_API_KEY="your-key-here"

# Run in local mode
./run.sh --local
```
Conda environments will be created automatically.
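To confirm the environments were created, you can list them with the standard conda command (the environment names themselves are project-specific and not documented here):

```bash
# List all conda environments on this machine
conda env list
```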
Google Colab¶
The easiest way to try Agentomics-ML without any local setup.
The Colab notebook uses local mode automatically.
Ollama (Local LLMs)¶
Run with local models using Ollama for privacy or offline use.
Requirements¶
- Ollama installed and running
- Docker (recommended) or conda
Docker Mode Setup¶
- Ensure Ollama listens on the host (e.g., `0.0.0.0:11434`).
- Run with the `--ollama` flag.

Docker mode connects to the URL configured in `src/utils/providers/configured_providers.yaml` (default: `http://host.docker.internal:11434/v1`).
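A typical sequence might look like this. Binding via the `OLLAMA_HOST` environment variable is standard Ollama behavior; the `--ollama` flag is the one mentioned above.

```bash
# Make Ollama reachable from inside Docker containers
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Run Agentomics-ML against the local Ollama server
./run.sh --ollama
```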
Local Mode Setup¶
For local mode, set the Ollama base URL in `src/utils/providers/configured_providers.yaml` to `http://localhost:11434/v1`, then run the agent in local mode.
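Assuming the `--local` and `--ollama` flags can be combined (each appears separately on this page; combining them is an assumption), the invocation would be:

```bash
# Local mode with a local Ollama server (flag combination assumed)
./run.sh --local --ollama
```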
CPU-Only Mode¶
GPU acceleration can be disabled when no NVIDIA GPU is available. This works with both Docker and local modes.
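The exact flag is not shown on this page; if the script follows the pattern of its other options, a hypothetical invocation might look like the following (flag name assumed, verify against the script's help output):

```bash
# Hypothetical: flag name not confirmed by this page
./run.sh --cpu
```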
Comparison Table¶
| Mode | Docker Required | Build Time | Security | Best For |
|---|---|---|---|---|
| Docker + Pull Images | Yes | None | High | Quick start |
| Docker + Local Build | Yes | ~5-10 min | High | Custom builds |
| Local Mode | No | ~2 min | Low | Development, Colab |
| Google Colab | No | None | Medium | Trying it out |
| Ollama | Depends | Varies | High | Privacy, offline |
Next Steps¶
- Running the Agent - Learn all run.sh options
- LLM Providers - Configure different LLM providers
- GPU Settings - NVIDIA GPU setup