Docker Quick Start
Introduction
📌 Version Information: This guide uses NeuronDB 2.0.0 from the main branch. For production deployments that require the stable 1.0.0 release, check out the REL1_STABLE branch before running Docker Compose.
This guide gets the complete NeuronDB ecosystem running in under 5 minutes using Docker Compose. The ecosystem includes:
- NeuronDB - PostgreSQL extension with vector search, ML inference, and GPU acceleration
- NeuronAgent - REST API and WebSocket agent runtime with long-term memory
- NeuronMCP - Model Context Protocol server with 100+ tools for MCP-compatible clients
- NeuronDesktop - Unified web interface for managing all components
Why Docker? Docker provides the easiest and most consistent setup, with automatic networking, configuration, and GPU support across platforms.
Prerequisites
Before starting, verify you have:
- Docker 20.10+ and Docker Compose 2.0+ installed
- 4GB+ RAM available (8GB for better performance)
- Ports available: 5433 (PostgreSQL), 8080 (NeuronAgent), 8081 (NeuronDesktop API), 3000 (NeuronDesktop UI)
- Optional: NVIDIA Docker runtime (CUDA), ROCm drivers (AMD), or Metal support (macOS/Apple Silicon)
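Before starting anything, you can sanity-check that the required ports are free. A minimal sketch, assuming bash (which provides the /dev/tcp pseudo-device):

```shell
# check_port PORT — reports whether something is already listening locally
# (assumption: bash with /dev/tcp support; not plain POSIX sh)
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: in use"
  else
    echo "port $1: free"
  fi
}

for port in 5433 8080 8081 3000; do
  check_port "$port"
done
```

Any port reported "in use" needs to be freed, or remapped in docker-compose.yml, before `docker compose up`.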
Verify Docker installation
docker --version
docker compose version
Quick Start (5 minutes)
Start the complete NeuronDB ecosystem with a single command:
📌 Branch Selection Guide
NeuronDB has three branches with different versions. Choose based on your needs:
| Branch | Version | Status | Use When |
|---|---|---|---|
| main | 3.0.0-devel | Latest | New projects, development, latest features (default) |
| REL2_STABLE | 2.0.0 | Stable | Production, stable v2.0 features |
| REL1_STABLE | 1.0.0 | Stable | Production, maximum stability required |
Recommendation: Most users should use main for the latest features. Choose REL2_STABLE or REL1_STABLE when you need a stable, production-ready release.
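After cloning (Step 1 below), you can confirm which branch, and therefore which version line, your checkout is on. A small sketch:

```shell
# Print the current branch of the checkout; falls back gracefully when
# run outside a git repository.
git branch --show-current 2>/dev/null || echo "not inside a git checkout"
```

The output should read main, REL2_STABLE, or REL1_STABLE, matching the table above.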
Step 1: Clone Repository with Correct Branch
Clone main branch (version 2.0.0)
# Clone main branch for version 2.0.0 (latest features, default)
git clone https://github.com/neurondb-ai/neurondb.git
cd neurondb
# Note: Default clone gets main branch with version 2.0.0
Clone REL1_STABLE branch (version 1.0.0, stable)
# Clone REL1_STABLE branch for version 1.0.0 (stable production release)
git clone -b REL1_STABLE https://github.com/neurondb-ai/neurondb.git
cd neurondb
# Note: REL1_STABLE branch provides version 1.0.0 (stable release)
Step 3: Pull Pre-built Images
Pull images from GitHub Container Registry
# Pull latest pre-built images from GHCR
docker compose pull
# Or pull specific version
# For version 2.0 (main branch): docker pull ghcr.io/neurondb/neurondb-postgres:v2.0.0-pg17-cpu
# For version 1.0 (REL1_STABLE branch): docker pull ghcr.io/neurondb/neurondb-postgres:v1.0.0-pg17-cpu
# Pull other components
docker pull ghcr.io/neurondb/neuronagent:v2.0.0
docker pull ghcr.io/neurondb/neurondb-mcp:v2.0.0
docker pull ghcr.io/neurondb/neurondesktop-api:v2.0.0
docker pull ghcr.io/neurondb/neurondesktop-frontend:v2.0.0
Pre-built Docker images are available from GitHub Container Registry (GHCR). This is faster than building from source and ensures you're using tested, production-ready images.
📦 Image Versioning: Docker images are tagged by version. Version 2.0.0 images are built from the main branch, while version 1.0.0 images are built from the REL1_STABLE branch.
Step 4: Choose Your Components
You can start all services together, or select specific components based on your needs:
| Setup | Command | Components |
|---|---|---|
| NeuronDB only | docker compose up -d neurondb | PostgreSQL extension only |
| NeuronDB + NeuronMCP | docker compose up -d neurondb neuronmcp | Extension + MCP server |
| NeuronDB + NeuronAgent | docker compose up -d neurondb neuronagent | Extension + Agent runtime |
| Full stack | docker compose up -d | All components |
Start full ecosystem (CPU profile)
# Start all services with CPU profile (default)
docker compose up -d
💡 Component Independence: All components run independently. The root docker-compose.yml starts everything together for convenience, but you can run individual services as needed.
Starting all services will:
- Use pre-built images from GHCR (or build from source if not pulled)
- Start PostgreSQL with NeuronDB extension
- Start NeuronAgent (REST API server on port 8080)
- Start NeuronMCP (MCP protocol server)
- Start NeuronDesktop (web UI on port 3000, API on 8081)
- Configure networking between all components
💡 Using Pre-built Images: Images are published to GHCR starting with v2.0.0. Available variants include PostgreSQL 16/17/18 and GPU profiles (CPU, CUDA, ROCm, Metal). See GHCR packages for all available images.
Step 5: Check Service Status
Verify all services are running
docker compose ps
You should see five services running with "healthy" status:
- neurondb-cpu - PostgreSQL with NeuronDB extension
- neuronagent - REST API server
- neurondb-mcp - MCP protocol server
- neurondesk-api - NeuronDesktop API server
- neurondesk-frontend - NeuronDesktop web interface
Wait 30-60 seconds for all services to initialize and show "healthy" status.
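Rather than waiting a fixed time, you can poll until a check passes. A generic retry helper (a sketch; the commented health URL is the NeuronAgent endpoint from this guide):

```shell
# retry MAX_TRIES DELAY_SECONDS CMD... — rerun CMD until it succeeds
# or MAX_TRIES attempts are exhausted.
retry() {
  tries="$1"; delay="$2"; shift 2
  i=1
  while ! "$@"; do
    [ "$i" -ge "$tries" ] && return 1
    i=$((i + 1))
    sleep "$delay"
  done
}

# Example: block until NeuronAgent answers (run after `docker compose up -d`)
# retry 30 2 curl -fsS http://localhost:8080/health
```

This avoids flaky follow-up commands that race against slow container startup.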
Verify Services
Run these quick verification commands to confirm everything is working:
Test 1: NeuronDB Extension
Verify NeuronDB extension
docker compose exec neurondb psql -U neurondb -d neurondb -c "SELECT neurondb.version();"
Expected output:
- 2.0 if using main branch (version 2.0.0)
- 1.0 if using REL1_STABLE branch (version 1.0.0)
Test 2: NeuronAgent REST API
Check NeuronAgent health
curl http://localhost:8080/health
Expected output: {"status":"ok"}
Test 3: NeuronDesktop API
Check NeuronDesktop API
curl http://localhost:8081/health
Expected output: JSON response with status information
Test 4: First Vector Query
Create extension and test vector search
-- Connect to database
docker compose exec -T neurondb psql -U neurondb -d neurondb <<EOF
-- Create extension
CREATE EXTENSION IF NOT EXISTS neurondb;
-- Create a test table
CREATE TABLE IF NOT EXISTS documents (
id SERIAL PRIMARY KEY,
content TEXT,
embedding vector(3)  -- dimension must match the inserted vectors
);
-- Insert sample document
INSERT INTO documents (content, embedding)
VALUES ('Hello, NeuronDB!', '[0.1, 0.2, 0.3]'::vector)
ON CONFLICT DO NOTHING;
-- Verify data
SELECT id, content FROM documents;
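-- Optional: a nearest-neighbor query over the same table (assumption:
-- a pgvector-style <-> distance operator is available in NeuronDB)
SELECT id, content, embedding <-> '[0.1, 0.2, 0.3]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 5;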
EOF
GPU Profiles
The Docker Compose setup supports multiple GPU profiles for accelerated operations. Choose the profile that matches your hardware:
CPU Profile (Default)
CPU-only setup
docker compose up -d
Uses port 5433 for PostgreSQL.
CUDA Profile (NVIDIA GPU)
CUDA GPU acceleration
docker compose --profile cuda up -d
Requires NVIDIA Docker runtime. Uses port 5434 for PostgreSQL. See CUDA GPU Support for setup details.
ROCm Profile (AMD GPU)
ROCm GPU acceleration
docker compose --profile rocm up -d
Requires ROCm drivers. Uses port 5435 for PostgreSQL. See ROCm GPU Support for setup details.
Metal Profile (Apple Silicon)
Metal GPU acceleration (macOS)
docker compose --profile metal up -d
For macOS with Apple Silicon (M1/M2/M3). Uses port 5436 for PostgreSQL. See Metal GPU Support for setup details.
Note: You can run multiple profiles simultaneously on different ports. For example, run both CPU and CUDA profiles side-by-side for testing.
Service URLs & Access
After starting services, access them at:
| Service | How to reach it | Default credentials | Notes |
|---|---|---|---|
| NeuronDB (PostgreSQL) | postgresql://neurondb:neurondb@localhost:5433/neurondb | User: neurondb, Password: neurondb ⚠️ Dev only | Container: neurondb-cpu, Service: neurondb |
| NeuronAgent | http://localhost:8080/health | Health: no auth. API: API key required | Container: neuronagent, Service: neuronagent |
| NeuronDesktop UI | http://localhost:3000 | No auth (development mode) | Container: neurondesk-frontend, Service: neurondesk-frontend |
| NeuronDesktop API | http://localhost:8081/health | Health: no auth. API: varies by config | Container: neurondesk-api, Service: neurondesk-api |
| NeuronMCP | stdio (JSON-RPC 2.0) | N/A (MCP protocol) | Container: neurondb-mcp, Service: neuronmcp. No HTTP port. |
⚠️ Production Security Warning: The default credentials shown above are for development only. Always use strong, unique passwords in production. Set POSTGRES_PASSWORD and other secrets via environment variables or a .env file.
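For example, a minimal .env file next to docker-compose.yml (POSTGRES_PASSWORD is the variable named above; any other secrets your compose file reads belong here too):

```
# .env — example only; replace with strong, unique values
# and never commit this file to version control
POSTGRES_PASSWORD=use-a-long-random-secret-here
```

Docker Compose loads .env automatically from the project directory.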
Common Commands
Service management
# Stop all services (keep data)
docker compose down
# Stop and remove all data volumes
docker compose down -v
# View logs from all services
docker compose logs -f
# View logs from specific service
docker compose logs -f neurondb
docker compose logs -f neuronagent
# Restart a specific service
docker compose restart neurondb
Next Steps
Now that your ecosystem is running, use these resources:
- Quick Start Guide - Create your first vector table, generate embeddings, and run semantic search queries
- Kubernetes Deployment - Deploy NeuronDB on Kubernetes with Helm charts for production-ready, cloud-native deployments
- Observability Stack - Set up Prometheus, Grafana, and Jaeger for complete monitoring and distributed tracing
- Operational Scripts - Use automation scripts for Docker, database, setup, health checks, and monitoring
- NeuronAgent Documentation - Build AI agents with REST API, WebSocket, and long-term memory
- NeuronMCP Documentation - Use 100+ MCP tools with Claude Desktop and other MCP clients
- NeuronDesktop Documentation - Manage your ecosystem through the unified web interface
- Vector Indexing - Configure HNSW, IVF, and quantization for production-scale search
- RAG Pipelines - Build retrieval augmented generation workflows in PostgreSQL
Troubleshooting
Having issues? Check these common problems:
Services Won't Start
Check logs
docker compose logs neurondb
docker compose logs neuronagent
docker compose logs neurondb-mcp
Port Already in Use
If ports 5433, 8080, 8081, or 3000 are in use, modify docker-compose.yml or stop conflicting services.
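One low-friction fix is a docker-compose.override.yml that remaps only the host side of the port mapping. A sketch, assuming the PostgreSQL service is named neurondb and listens on 5432 inside the container (check your docker-compose.yml for the actual service name and container port):

```yaml
# docker-compose.override.yml — use host port 15433 instead of 5433
services:
  neurondb:
    ports:
      - "15433:5432"
```

Compose merges this file automatically; remember to use the new port in your connection strings (e.g. localhost:15433).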
Out of Memory
Ensure Docker has at least 4GB RAM allocated (8GB+ for better performance). Check Docker Desktop → Settings → Resources.
GPU Not Detected
For CUDA: Verify NVIDIA Docker runtime is installed. For ROCm: Check that ROCm drivers are available. See GPU Documentation for detailed setup instructions.
For more help, see the Troubleshooting Guide or check service logs with docker compose logs.