Helm Setup
Deploy Actyze to Kubernetes using Helm charts for production environments.
Prerequisites
- Kubernetes cluster (v1.24+)
- Helm 3.x installed
- kubectl configured to access your cluster
- 4GB+ RAM per node
- Storage class for persistent volumes
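A quick sanity check before you start (a sketch using standard tooling; adjust to your environment):

```shell
# Verify the toolchain and cluster access match the prerequisites above
kubectl version            # client and server versions; server should be v1.24+
helm version               # should report v3.x
kubectl get nodes          # confirms cluster access; node capacity is visible here too
kubectl get storageclass   # at least one storage class must exist for persistent volumes
```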
Repository Structure
Helm charts are maintained in a separate repository:
```
helm-charts/
├── dashboard/
│   ├── Chart.yaml
│   ├── values.yaml                    # Main configuration
│   ├── values-secrets.yaml.template   # Secrets template
│   ├── templates/                     # Kubernetes manifests
│   ├── VALUES_README.md               # Configuration reference
│   ├── LLM_PROVIDERS.md               # LLM setup guide
│   └── MIGRATIONS_README.md           # Database migrations
└── README.md
```
Installation Steps
1. Clone Helm Charts Repository
git clone https://github.com/actyze/helm-charts.git
cd helm-charts
2. Configure Secrets
# Copy secrets template
cp dashboard/values-secrets.yaml.template dashboard/values-secrets.yaml
# Edit with your credentials
nano dashboard/values-secrets.yaml
Add your LLM API key and database credentials:
```yaml
secrets:
  # External LLM API Key
  externalLLM:
    apiKey: "your-api-key-here"  # Perplexity, OpenAI, Anthropic, etc.

  # PostgreSQL Password
  postgres:
    password: "your-secure-password"

  # Trino Credentials (if using external Trino)
  trino:
    user: "your-trino-username"
    password: "your-trino-password"
```
3. Review Configuration
Check values.yaml for service configuration:
```yaml
# Service toggles
services:
  frontend:
    enabled: true
  nexus:
    enabled: true
  schemaService:
    enabled: true
  postgres:
    enabled: true
  trino:
    enabled: true

# AI Provider Configuration
modelStrategy:
  externalLLM:
    enabled: true
    model: "claude-sonnet-4-20250514"  # or gpt-4o, gemini/gemini-pro, etc.
```
See: VALUES_README.md for all options.
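Before installing, you can render the chart locally to catch values mistakes early (standard Helm commands; file paths follow the repository layout above):

```shell
# Lint the chart and render the manifests without touching the cluster
helm lint ./dashboard -f dashboard/values.yaml
helm template dashboard ./dashboard \
  -f dashboard/values.yaml \
  -f dashboard/values-secrets.yaml > rendered.yaml
```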
Production Resource Configuration
For production deployments, configure resources based on your performance requirements and cluster size.
Available production configurations:
- Minimum: values-production-optimized.yaml
  - Development/testing only (NOT for production)
  - ~2 CPUs, ~4Gi RAM
  - Cluster: 1-2 nodes × 4 CPU × 8Gi RAM
- Recommended (Production-Grade): Default values.yaml
  - No performance bottlenecks
  - ~12 CPUs, ~21Gi RAM
  - Cluster: 4-5 nodes × 8 CPU × 16Gi RAM OR 3 nodes × 16 CPU × 32Gi RAM
  - This is the recommended production baseline
- Enterprise: Custom configuration for maximum performance
- ~22 CPUs, ~38Gi RAM
- Cluster: 3 nodes × 16 CPU × 32Gi RAM OR 2 nodes × 32 CPU × 64Gi RAM
- For very large data, hundreds of concurrent users
Key Resource Highlights:
- Trino: 12Gi RAM, 6 CPUs (Recommended) | 24Gi RAM, 12 CPUs (Enterprise)
- Schema Service: 6Gi RAM, 4 CPUs (Recommended) | 12Gi RAM, 8 CPUs (Enterprise)
- Both services are CPU- and memory-intensive; size them generously for optimal query performance
Operational Configuration: Production deployments also require proper configuration of:
- Cache: Query and LLM response caching to reduce load
- Timeouts: SQL execution, API calls, ingress timeouts
- Connection Pools: Database connection pooling for concurrency
- Rate Limiting: API protection and cost control
- Retry Logic: Transient failure handling
See Helm Deployment Guide for complete specifications, YAML examples, and cluster calculators.
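To compare a profile against what your cluster can actually schedule, you can list each node's allocatable resources (plain kubectl; no chart-specific assumptions):

```shell
# Allocatable CPU/memory per node; sum these and compare against the
# profile you chose (e.g. ~12 CPUs / ~21Gi RAM for Recommended)
kubectl get nodes -o custom-columns='NODE:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory'
```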
4. Deploy to Kubernetes
helm install dashboard ./dashboard \
--namespace actyze \
--create-namespace \
--values dashboard/values.yaml \
--values dashboard/values-secrets.yaml \
--wait
5. Verify Deployment
# Check pod status
kubectl get pods -n actyze
# Check services
kubectl get svc -n actyze
# Check ingress (if enabled)
kubectl get ingress -n actyze
Expected output:
```
NAME                           READY   STATUS    RESTARTS   AGE
dashboard-frontend-xxx         1/1     Running   0          2m
dashboard-nexus-xxx            1/1     Running   0          2m
dashboard-schema-service-xxx   1/1     Running   0          2m
dashboard-postgres-0           1/1     Running   0          2m
dashboard-trino-xxx            1/1     Running   0          2m
```
Access Actyze
Local Access (Port Forwarding)
For local testing without Ingress:
# Forward frontend port (internal port 80 → local port 3000)
kubectl port-forward -n actyze svc/dashboard-frontend 3000:80
# Forward API port (internal port 8002 → local port 8002)
kubectl port-forward -n actyze svc/dashboard-nexus 8002:8002
Open http://localhost:3000 for the UI and http://localhost:8002/docs for the API.
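With both port-forwards running, a quick smoke test from another terminal (assumes both endpoints return 200 when healthy):

```shell
# A healthy deployment should report HTTP 200 for both services
curl -s -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "api docs: %{http_code}\n" http://localhost:8002/docs
```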
Production Access (Ingress)
Configure ingress in values.yaml:
```yaml
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: analytics.yourcompany.com
      paths:
        - path: /
          pathType: Prefix
          service: frontend
        - path: /api
          pathType: Prefix
          service: nexus
  tls:
    - secretName: dashboard-tls
      hosts:
        - analytics.yourcompany.com
```
Then upgrade:
helm upgrade dashboard ./dashboard \
-f dashboard/values.yaml \
-f dashboard/values-secrets.yaml \
-n actyze
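If cert-manager issues your certificate, you can then confirm TLS end to end (the hostname is the example from above; `kubectl get certificate` assumes cert-manager is installed):

```shell
# The certificate should show READY=True once issuance completes
kubectl get certificate -n actyze
# Then confirm the host serves HTTPS
curl -I https://analytics.yourcompany.com
```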
Configuration
AI Provider Setup
Actyze supports 100+ AI providers; in most cases configuration is just a couple of lines in values.yaml:
Anthropic Claude (Recommended):
```yaml
modelStrategy:
  externalLLM:
    enabled: true
    model: "claude-sonnet-4-20250514"
```

```yaml
# values-secrets.yaml
secrets:
  externalLLM:
    apiKey: "sk-ant-xxxxx"
```
OpenAI:
```yaml
modelStrategy:
  externalLLM:
    enabled: true
    model: "gpt-4o"
```

```yaml
# values-secrets.yaml
secrets:
  externalLLM:
    apiKey: "sk-xxxxx"
```
Enterprise Gateway:
```yaml
modelStrategy:
  externalLLM:
    enabled: true
    mode: "openai-compatible"
    model: "your-internal-model"
    baseUrl: "https://llm-gateway.company.com/v1/chat/completions"
```

```yaml
# values-secrets.yaml
secrets:
  externalLLM:
    apiKey: "your-enterprise-token"
```
See:
- AI Providers - All 100+ supported providers
- LLM Provider Configuration - Detailed setup examples
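For the enterprise-gateway mode, it can help to smoke-test the gateway outside the cluster first. This sketch assumes an OpenAI-compatible /v1/chat/completions endpoint; the URL, model name, and token are the placeholders from the example above:

```shell
# A minimal OpenAI-style chat completion request against the gateway
curl -s https://llm-gateway.company.com/v1/chat/completions \
  -H "Authorization: Bearer your-enterprise-token" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-internal-model", "messages": [{"role": "user", "content": "ping"}]}'
```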
Resource Configuration
Adjust resources in values.yaml:
```yaml
services:
  nexus:
    resources:
      requests:
        memory: "2Gi"
        cpu: "1000m"
      limits:
        memory: "4Gi"
        cpu: "2000m"
  frontend:
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "1Gi"
        cpu: "500m"
```
Scaling Configuration
Enable horizontal pod autoscaling:
```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80
```
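Horizontal pod autoscaling relies on the Kubernetes metrics API, so metrics-server (or an equivalent) must be running in the cluster. A quick check:

```shell
# Fails with "Metrics API not available" if metrics-server is missing
kubectl top pods -n actyze
# Watch the autoscaler's targets and replica counts
kubectl get hpa -n actyze --watch
```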
Storage Configuration
Configure persistent volumes:
```yaml
persistence:
  postgres:
    enabled: true
    storageClass: "standard"
    size: "10Gi"
  schemaService:
    enabled: true
    storageClass: "standard"
    size: "5Gi"
```
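Volumes can usually be grown later without reinstalling, provided the storage class sets allowVolumeExpansion: true. The PVC name below is illustrative; check `kubectl get pvc -n actyze` for the real one:

```shell
# Request a larger volume; Kubernetes expands it in place if the class allows it
kubectl patch pvc data-dashboard-postgres-0 -n actyze \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```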
Management Commands
Upgrade Deployment
# Pull latest chart changes
git pull origin main
# Upgrade release
helm upgrade dashboard ./dashboard \
-f dashboard/values.yaml \
-f dashboard/values-secrets.yaml \
-n actyze
Rollback Deployment
# View release history
helm history dashboard -n actyze
# Rollback to previous version
helm rollback dashboard -n actyze
# Rollback to specific revision
helm rollback dashboard 2 -n actyze
Uninstall
# Uninstall release (keeps PVCs)
helm uninstall dashboard -n actyze
# Delete namespace and all resources
kubectl delete namespace actyze
View Configuration
# View current values
helm get values dashboard -n actyze
# View all values (including defaults)
helm get values dashboard -n actyze --all
Troubleshooting
Pods Not Starting
# Check pod status
kubectl get pods -n actyze
# Describe pod for events
kubectl describe pod <pod-name> -n actyze
# Check logs
kubectl logs <pod-name> -n actyze
Common issues:
- ImagePullBackOff: Check image names and pull secrets
- CrashLoopBackOff: Check logs for application errors
- Pending: Check resource availability and storage class
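Two shortcuts that help with all three cases (plain kubectl):

```shell
# Show only pods that are not in the Running phase
kubectl get pods -n actyze --field-selector=status.phase!=Running
# For CrashLoopBackOff, read the logs of the previous (crashed) container
kubectl logs <pod-name> -n actyze --previous
```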
Database Connection Issues
# Check PostgreSQL pod
kubectl logs -n actyze deployment/dashboard-postgres
# Verify secret
kubectl get secret dashboard-secrets -n actyze -o yaml
# Test connection from Nexus pod
kubectl exec -it -n actyze deployment/dashboard-nexus -- \
psql -h dashboard-postgres -U dashboard_user -d dashboard
LLM API Issues
# Check Nexus logs for API errors
kubectl logs -n actyze deployment/dashboard-nexus | grep -i "llm\|error"
# Verify environment variables
kubectl exec -n actyze deployment/dashboard-nexus -- env | grep EXTERNAL_LLM
# Check secret
kubectl get secret dashboard-secrets -n actyze -o jsonpath='{.data.EXTERNAL_LLM_API_KEY}' | base64 -d
Ingress Issues
# Check ingress status
kubectl get ingress -n actyze
kubectl describe ingress dashboard-ingress -n actyze
# Check ingress controller
kubectl get pods -n ingress-nginx
# Test DNS resolution
nslookup analytics.yourcompany.com
Storage Issues
# Check PVCs
kubectl get pvc -n actyze
# Check storage class
kubectl get storageclass
# Describe PVC for events
kubectl describe pvc <pvc-name> -n actyze
Monitoring
Check Service Health
# Port-forward and test the health endpoint (Nexus listens on 8002, as above)
kubectl port-forward -n actyze svc/dashboard-nexus 8002:8002
curl http://localhost:8002/health
View Logs
# All pods
kubectl logs -n actyze -l app.kubernetes.io/name=dashboard --tail=100
# Specific service
kubectl logs -n actyze deployment/dashboard-nexus --tail=100 -f
Resource Usage
# Pod resource usage
kubectl top pods -n actyze
# Node resource usage
kubectl top nodes
Production Best Practices
High Availability
```yaml
services:
  nexus:
    replicas: 3
  frontend:
    replicas: 2
  schemaService:
    replicas: 2

# Enable pod disruption budgets
podDisruptionBudget:
  enabled: true
  minAvailable: 1
```
Security
```yaml
# Use Kubernetes secrets
secrets:
  externalLLM:
    apiKey: "use-sealed-secrets-or-external-secrets-operator"

# Enable network policies
networkPolicy:
  enabled: true

# Use service accounts
serviceAccount:
  create: true
  name: "dashboard-sa"
```
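One way to keep real keys out of git is to seal them, for example with Bitnami Sealed Secrets. This sketch assumes the Sealed Secrets controller and the kubeseal CLI are installed, and reuses the dashboard-secrets name and EXTERNAL_LLM_API_KEY key seen in Troubleshooting:

```shell
# Encrypt the secret so the sealed version is safe to commit
kubectl create secret generic dashboard-secrets -n actyze \
  --from-literal=EXTERNAL_LLM_API_KEY="sk-ant-xxxxx" \
  --dry-run=client -o yaml | kubeseal --format yaml > dashboard-sealedsecret.yaml
```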
Monitoring & Logging
```yaml
# Enable Prometheus metrics
monitoring:
  enabled: true
  serviceMonitor:
    enabled: true

# Configure logging
logging:
  level: "INFO"
  format: "json"
```
Upgrading
Update Helm Charts
# Pull latest changes
cd helm-charts
git pull origin main
# Review changes
git log --oneline -10
# Upgrade deployment
helm upgrade dashboard ./dashboard \
-f dashboard/values.yaml \
-f dashboard/values-secrets.yaml \
-n actyze
Database Migrations
# Migrations run automatically via init job
# Check migration status
kubectl logs -n actyze job/dashboard-db-migration
# Manual migration (if needed)
kubectl exec -it -n actyze deployment/dashboard-nexus -- \
alembic upgrade head
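To see where the schema currently stands before or after an upgrade (standard Alembic commands, run inside the Nexus pod):

```shell
# Current revision applied to the database
kubectl exec -n actyze deployment/dashboard-nexus -- alembic current
# Full revision history to compare against
kubectl exec -n actyze deployment/dashboard-nexus -- alembic history
```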
Next Steps
- Configure LLM Provider - Set up your AI model
- Connect Database - Link to Trino
- Monitoring Setup - Set up observability
- Quick Start Guide - Learn how to use Actyze
Additional Resources
- Helm Deployment Guide - Comprehensive Helm documentation
- VALUES_README.md - All configuration options
- LLM_PROVIDERS.md - LLM setup guide
- GitHub Repository - Helm charts source