Docker Deployment

Comprehensive guide for deploying Actyze using Docker Compose for local development and testing.

Overview

The Docker Compose setup provides a simplified deployment for:

  • Local development and testing
  • Quick evaluation and prototyping
  • Single-machine deployments
  • CI/CD testing environments

Image Source:

  • For local development: Builds images from source code (requires cloning the repository)
  • For testing/evaluation: Can pull pre-built images from Docker Hub (optional)

For production deployments with high availability and scalability, use the Helm charts, which always pull the latest images from Docker Hub.

Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│    Frontend     │    │    Nexus API     │    │   PostgreSQL    │
│  (nginx:3000)   │───▶│  (FastAPI:8000)  │───▶│ (postgres:5432) │
└─────────────────┘    └──────────────────┘    └─────────────────┘

                       ┌──────────────────┐
                       │  Schema Service  │
                       │   (FAISS:8001)   │
                       └──────────────────┘

                       ┌──────────────────┐
                       │  External Trino  │
                       │    (Optional)    │
                       └──────────────────┘

Prerequisites

  • Docker Engine 20.10+
  • Docker Compose 2.0+
  • 8GB+ RAM available for Docker
  • Internet connection for LLM API access

Quick Start

1. Clone Repository

git clone https://github.com/actyze/dashboard-docker.git
cd dashboard-docker

2. Configure Environment

# Copy environment template
cp docker/env.example docker/.env

# Edit configuration
nano docker/.env

3. Add AI Provider API Key

Edit .env and add your AI provider's API key:

# AI Provider Configuration (choose one)
ANTHROPIC_API_KEY=your-api-key-here
EXTERNAL_LLM_MODEL=claude-sonnet-4-20250514

# Or use another provider:
# OPENAI_API_KEY=your-key
# EXTERNAL_LLM_MODEL=gpt-4o

See: LLM Provider Configuration for the full list of supported providers.

4. Start Services

# Using the start script
./docker/start.sh

# Or use Docker Compose directly
cd docker
docker-compose up -d

5. Access Actyze

Open the frontend at http://localhost:3000. The Nexus API is served at http://localhost:8000 and the schema service at http://localhost:8001.

Configuration

Environment Variables

The .env file contains all configuration:

# ============================================
# AI Provider Configuration
# ============================================
# Choose your AI provider (just set the API key and model)
PERPLEXITY_API_KEY=your-api-key-here
EXTERNAL_LLM_MODEL=perplexity/sonar-reasoning-pro

# Or use another provider:
# ANTHROPIC_API_KEY=sk-ant-xxxxx
# EXTERNAL_LLM_MODEL=claude-sonnet-4-20250514

# ============================================
# Database Configuration
# ============================================
POSTGRES_PASSWORD=dashboard_password
POSTGRES_USER=dashboard_user
POSTGRES_DB=dashboard
POSTGRES_HOST=postgres
POSTGRES_PORT=5432

# ============================================
# Trino Configuration (External)
# ============================================
TRINO_HOST=your-trino-host.example.com
TRINO_PORT=443
TRINO_USER=your-username
TRINO_PASSWORD=your-password
TRINO_SSL=true
TRINO_SSL_VERIFY=false # Set to false for self-signed certificates
TRINO_CATALOG=your-catalog
TRINO_SCHEMA=your-schema

# ============================================
# Service Configuration
# ============================================
LOG_LEVEL=INFO
DEBUG=false

LLM Providers

Actyze supports multiple LLM providers. Configure by changing these variables:

Anthropic Claude (Recommended):

ANTHROPIC_API_KEY=sk-ant-xxxxx
EXTERNAL_LLM_MODEL=claude-sonnet-4-20250514

OpenAI:

OPENAI_API_KEY=sk-xxxxx
EXTERNAL_LLM_MODEL=gpt-4o

Google Gemini:

GEMINI_API_KEY=your-key
EXTERNAL_LLM_MODEL=gemini/gemini-pro

Groq (Free tier):

GROQ_API_KEY=gsk_xxxxx
EXTERNAL_LLM_MODEL=groq/llama-3.3-70b-versatile

Enterprise Gateway:

EXTERNAL_LLM_MODE=openai-compatible
EXTERNAL_LLM_BASE_URL=https://llm-gateway.company.com/v1/chat/completions
EXTERNAL_LLM_API_KEY=your-enterprise-token
EXTERNAL_LLM_MODEL=your-internal-model

See: LLM Provider Configuration for complete list.
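
Since only one provider should be configured at a time, a quick sanity check can count how many of the API key variables above are set. This is a sketch; the function name is illustrative:

```shell
# Count how many of the provider API key variables are non-empty.
count_provider_keys() {
  count=0
  for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY \
             GROQ_API_KEY PERPLEXITY_API_KEY; do
    eval "val=\${$var:-}"
    [ -n "$val" ] && count=$(( count + 1 ))
  done
  echo "$count"
}

if [ "$(count_provider_keys)" -ne 1 ]; then
  echo "warning: expected exactly one provider API key" >&2
fi
```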

External Databases

To use external PostgreSQL or Trino, update .env:

# External PostgreSQL
POSTGRES_HOST=external-postgres.example.com
POSTGRES_PORT=5432
POSTGRES_USER=your-username
POSTGRES_PASSWORD=your-password

# External Trino
TRINO_HOST=external-trino.example.com
TRINO_PORT=443
TRINO_USER=your-username
TRINO_PASSWORD=your-password
TRINO_SSL=true
TRINO_SSL_VERIFY=false # Set to false for self-signed certificates

Then start with external profile:

./docker/start.sh --profile external
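
Under the hood, profile selection relies on Docker Compose's `profiles` feature: services tagged with a profile only start when that profile is activated. A sketch of how this might look in docker-compose.yml (illustrative; the service and profile names in the actual repo may differ):

```yaml
services:
  postgres:
    image: postgres:16
    profiles: ["local"]   # started only when the local profile is active

  nexus:
    environment:
      POSTGRES_HOST: ${POSTGRES_HOST:-postgres}   # external host comes from .env
```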

Operational Configuration (Local Development)

Docker Compose uses sensible defaults optimized for local development. Only cache settings are typically configured.

Cache Configuration

Configure cache settings in docker/.env:

# Cache Configuration (Optional - defaults shown)
CACHE_ENABLED=true
CACHE_QUERY_MAX_SIZE=100 # Number of query results to cache
CACHE_QUERY_TTL=1800 # 30 minutes (in seconds)

Default Cache Behavior:

  • Query Cache: SQL query results (default: 100 entries, 30min TTL)
  • LLM Cache: LLM API responses (default: 200 entries, 2hr TTL)
  • Schema Cache: FAISS recommendations (default: 500 entries, 30min TTL)
  • Metadata Cache: Schema metadata (default: 200 entries, 10min TTL)

For local development, these defaults are sufficient. LLM caching (2hr TTL) helps reduce API costs during development.

Application Defaults

The following settings use application defaults optimized for local development; except where noted, they are not configurable in Docker:

Timeouts:

  • SQL execution: 120 seconds (configurable via EXECUTE_TIMEOUT_SECONDS)
  • LLM API calls: 120 seconds (configurable via EXTERNAL_LLM_TIMEOUT)
  • Schema service calls: 30 seconds
  • Frontend HTTP timeout: EXECUTE_TIMEOUT_SECONDS + 30 seconds (auto-derived — prevents the browser from dropping long-running Trino queries)
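
The derived frontend timeout above is simple arithmetic over the documented rule (the `FRONTEND_HTTP_TIMEOUT` name is illustrative; only `EXECUTE_TIMEOUT_SECONDS` is a real setting):

```shell
# Frontend HTTP timeout is derived, not set directly:
# EXECUTE_TIMEOUT_SECONDS plus 30 seconds of headroom.
EXECUTE_TIMEOUT_SECONDS=120
FRONTEND_HTTP_TIMEOUT=$(( EXECUTE_TIMEOUT_SECONDS + 30 ))
echo "$FRONTEND_HTTP_TIMEOUT"
```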

Connection Pooling:

  • PostgreSQL pool size: 20 connections per Nexus instance
  • Max overflow: 30 additional connections during spikes
  • Pool timeout: 30 seconds

Retry Logic:

  • Schema service: 3 retries with 1s delay, exponential backoff
  • LLM service: 2 retries
  • Trino: Standard retry with backoff

Query Limits:

  • Default max results: 100 rows per query
  • Can be overridden per API request

These defaults work well for local development and testing. For production deployments with custom operational requirements (higher concurrency, longer timeouts, larger caches), use Helm charts where all settings are fully configurable.

Service Ports

| Service        | Port | Description                  |
|----------------|------|------------------------------|
| Frontend       | 3000 | React application with nginx |
| Nexus API      | 8000 | FastAPI backend service      |
| Schema Service | 8001 | FAISS schema recommendations |
| PostgreSQL     | 5432 | Database server              |

Management Commands

Start Services

# Start all services (builds images)
./docker/start.sh

# Start without rebuilding
./docker/start.sh --no-build

# Start and follow logs
./docker/start.sh --logs

Stop Services

# Stop services (preserve data)
./docker/stop.sh

# Stop and remove all data
./docker/stop.sh --clean

View Logs

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f nexus
docker-compose logs -f frontend
docker-compose logs -f schema-service

Restart Services

# Restart all
docker-compose restart

# Restart specific service
docker-compose restart nexus

Check Status

# Service status
docker-compose ps

# Resource usage
docker stats

Development Workflow

1. Make Code Changes

Edit files in your IDE:

  • Frontend: frontend/src/
  • Backend: nexus/app/
  • Schema Service: schema-service/app/

2. Rebuild Service

# Rebuild specific service
docker-compose build nexus

# Restart the service
docker-compose up -d nexus

3. Test Changes

# View logs
docker-compose logs -f nexus

# Test API endpoint
curl http://localhost:8000/health

4. Debug Issues

# Access container shell
docker-compose exec nexus bash

# Check environment variables
docker-compose exec nexus env

# Check file system
docker-compose exec nexus ls -la /app

Docker Compose Profiles

The setup supports different deployment profiles:

Local Profile (Default)

Includes all services with local databases:

./docker/start.sh

Services:

  • Frontend
  • Nexus
  • Schema Service
  • PostgreSQL (local)
  • Trino (optional)

External Profile

Uses external databases only:

./docker/start.sh --profile external

Services:

  • Frontend
  • Nexus
  • Schema Service
  • External PostgreSQL (configured in .env)
  • External Trino (configured in .env)

Minimal Profile

Core services only (no schema service):

./docker/start.sh --profile minimal

Services:

  • Frontend
  • Nexus
  • PostgreSQL

Volume Management

Persistent Data

Data is stored in Docker volumes:

# List volumes
docker volume ls | grep dashboard

# Inspect volume
docker volume inspect dashboard_postgres_data

# Backup volume
docker run --rm -v dashboard_postgres_data:/data -v $(pwd):/backup \
  alpine tar czf /backup/postgres-backup.tar.gz /data

Clean Up Volumes

# Remove all volumes (WARNING: deletes all data)
docker-compose down -v

# Remove specific volume
docker volume rm dashboard_postgres_data

Troubleshooting

Port Conflicts

# Check what's using the ports
lsof -i :3000
lsof -i :8000
lsof -i :5432

# Stop conflicting services
sudo lsof -ti:3000 | xargs kill -9

# Or change ports in docker-compose.yml

Memory Issues

# Check Docker memory usage
docker stats

# Check available memory
free -h

# Increase Docker Desktop memory:
# Docker Desktop → Settings → Resources → Memory → 8GB+

Database Connection Issues

# Check PostgreSQL logs
docker-compose logs postgres

# Verify database is running
docker-compose ps postgres

# Test connection
docker-compose exec postgres psql -U dashboard_user -d dashboard -c "SELECT 1"

# Reset database
docker-compose down -v
docker-compose up -d

LLM API Issues

# Check Nexus logs for API errors
docker-compose logs nexus | grep -i "llm\|error"

# Verify environment variables
docker-compose exec nexus env | grep EXTERNAL_LLM

# Test API key manually
curl -X POST https://api.perplexity.ai/chat/completions \
  -H "Authorization: Bearer $PERPLEXITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "sonar-reasoning-pro", "messages": [{"role": "user", "content": "test"}]}'

Image Build Failures

# Clean Docker cache
docker system prune -a

# Rebuild without cache
docker-compose build --no-cache

# Check disk space
df -h

# Free up space
docker system df
docker system prune -a --volumes

Service Health Checks

# Check health status
docker-compose ps

# View health check logs
docker inspect dashboard-nexus | grep -A 10 Health

# Manual health check
curl http://localhost:8000/health
curl http://localhost:8001/health
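
When scripting these checks (e.g. waiting for services to come up in CI), a small poll loop around the endpoints above is handy. A sketch; it assumes the `/health` endpoints return HTTP 2xx when ready, and the function name is illustrative:

```shell
# Poll a health endpoint until it returns HTTP 2xx or the attempts run out.
wait_healthy() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy: $url"
      return 0
    fi
    i=$(( i + 1 ))
    sleep 2
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

Usage: `wait_healthy http://localhost:8000/health && wait_healthy http://localhost:8001/health`.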

Testing

Automated Tests

# Run test script
./scripts/test-docker-deployment.sh

# Test specific service
docker-compose exec nexus pytest
docker-compose exec frontend npm test

Manual Testing

# Test health endpoints
curl http://localhost:8000/health
curl http://localhost:8001/health

# Test SQL generation (requires auth token)
curl -X POST http://localhost:8000/api/generate-sql \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "nl_query": "Show all customers",
    "conversation_history": []
  }'

Performance Optimization

Resource Limits

Edit docker-compose.yml to set resource limits:

services:
  nexus:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '1'
          memory: 2G

Build Optimization

# Use build cache
docker-compose build

# Parallel builds
docker-compose build --parallel

# Specific service
docker-compose build nexus

Upgrading

Update to Latest Version

# Pull latest code
git pull origin main

# Rebuild images
docker-compose build

# Restart services
docker-compose up -d

Database Migrations

# Run migrations
docker-compose exec nexus alembic upgrade head

# Or use migration script
./docker/migrate.sh

Production Considerations

Security

  • Use strong passwords for all services
  • Enable SSL/TLS for external communications
  • Configure proper authentication
  • Apply regular security updates for base images
  • Use secrets management (not .env files)
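
For the last point, Docker Compose file-based secrets can replace plaintext .env values. A sketch (illustrative; it assumes the application can read a `*_FILE` variant of the variable, which is an assumption about Actyze, not documented behavior):

```yaml
services:
  nexus:
    secrets:
      - postgres_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password  # assumes *_FILE support

secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt
```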

Performance

  • Use production-optimized Docker images
  • Configure resource limits appropriately
  • Set up monitoring and logging
  • Use external managed databases

Scalability

For production deployments requiring:

  • High availability
  • Horizontal scaling
  • Load balancing
  • Advanced networking

Use Helm charts instead: Helm Deployment Guide

Monitoring

Basic Monitoring

# Resource usage
docker stats

# Service health
docker-compose ps

# Application logs
docker-compose logs -f

Advanced Monitoring

For production, consider:

  • Prometheus + Grafana for metrics
  • ELK Stack for log aggregation
  • Health check endpoints monitoring
  • APM tools for performance

CI/CD Integration

GitHub Actions Example

name: Test Docker Deployment

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Start services
        run: |
          cp docker/env.example docker/.env
          ./docker/start.sh

      - name: Run tests
        run: ./scripts/test-docker-deployment.sh

      - name: Stop services
        run: ./docker/stop.sh --clean

Migration to Kubernetes

The Docker Compose setup mirrors the Helm deployment:

  • Same environment variables
  • Same service architecture
  • Same external integrations
  • Same database schemas

Migrate to Kubernetes when ready:

Helm Deployment Guide

Additional Resources

Support

For issues and questions, open an issue on the project's GitHub repository.