Database Connectors

Actyze connects to multiple data sources through Trino connectors, enabling you to query across relational databases, cloud warehouses, NoSQL stores, and data lakes from a single interface.

Overview

Trino connectors allow Actyze to:

  • Query data across multiple databases simultaneously
  • Join data from different sources
  • Apply consistent SQL syntax regardless of underlying database
  • Leverage database-specific optimizations
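For example, a single statement can join tables that live in two different catalogs. The sketch below is illustrative only: the catalog names assume a `postgres` and a `mysql` catalog are configured, and the schema, table, and column names are placeholders.

```sql
-- Hypothetical cross-catalog join: catalog, schema, table, and column
-- names are placeholders -- substitute the ones in your deployment.
SELECT c.name, SUM(o.total) AS lifetime_value
FROM postgres.public.customers AS c
JOIN mysql.production.orders AS o
  ON o.customer_id = c.id
GROUP BY c.name
ORDER BY lifetime_value DESC
LIMIT 10;
```

Trino executes each side of the join against its source database and combines the results, so no data needs to be copied between systems first.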

Supported Connectors

Relational Databases

| Database   | Connector    | Use Case                              |
|------------|--------------|---------------------------------------|
| PostgreSQL | `postgresql` | Operational databases, analytics      |
| MySQL      | `mysql`      | Web applications, transactional data  |
| Oracle     | `oracle`     | Enterprise applications, ERP systems  |
| SQL Server | `sqlserver`  | Microsoft stack, business applications |
| MariaDB    | `mariadb`    | MySQL alternative, cloud deployments  |

Cloud Data Warehouses

| Platform   | Connector    | Use Case                              |
|------------|--------------|---------------------------------------|
| Snowflake  | `snowflake`  | Cloud data warehouse, analytics       |
| Databricks | `delta_lake` | Lakehouse architecture, ML workloads  |
| BigQuery   | `bigquery`   | Google Cloud analytics, petabyte-scale |
| Redshift   | `redshift`   | AWS data warehouse, BI workloads      |
| Synapse    | `sqlserver`  | Azure analytics, enterprise DW        |

NoSQL Databases

| Database      | Connector       | Use Case                            |
|---------------|-----------------|-------------------------------------|
| MongoDB       | `mongodb`       | Document store, JSON data           |
| Cassandra     | `cassandra`     | Distributed NoSQL, high availability |
| Elasticsearch | `elasticsearch` | Search, log analytics, full-text    |
| Redis         | `redis`         | Caching, session store, real-time   |

Data Lakes

| Technology | Connector    | Use Case                          |
|------------|--------------|-----------------------------------|
| Iceberg    | `iceberg`    | Modern data lake format, ACID     |
| Delta Lake | `delta_lake` | Lakehouse, time travel, streaming |
| Hudi       | `hudi`       | Incremental processing, CDC       |
| Hive       | `hive`       | Legacy Hadoop, S3 access          |

Configuration

Docker Compose

Trino connectors are configured through template files in docker/trino/catalog-templates/.

Active Connectors (Default)

PostgreSQL (postgres.properties.tpl):

connector.name=postgresql
connection-url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
connection-user=${POSTGRES_USER}
connection-password=${POSTGRES_PASSWORD}

Memory (memory.properties.tpl):

connector.name=memory

TPCH (tpch.properties.tpl):

connector.name=tpch
tpch.splits-per-node=4
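The TPCH connector generates synthetic benchmark data on the fly, so it needs no external database. That makes it a convenient sanity check that Trino itself is up before debugging any real connector:

```sql
-- tpch generates data in-process; if this query returns, the Trino
-- coordinator and workers are healthy.
SELECT count(*) FROM tpch.tiny.orders;
```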

Adding New Connectors

Step 1: Create Template File

Example for MySQL (mysql.properties.tpl):

connector.name=mysql
connection-url=jdbc:mysql://${MYSQL_HOST}:${MYSQL_PORT}/${MYSQL_DB}
connection-user=${MYSQL_USER}
connection-password=${MYSQL_PASSWORD}

Step 2: Add Environment Variables

Edit docker/.env:

MYSQL_HOST=mysql-server.example.com
MYSQL_PORT=3306
MYSQL_DB=production
MYSQL_USER=actyze_user
MYSQL_PASSWORD=your-secure-password

Step 3: Rebuild Trino

cd docker
docker-compose build trino
docker-compose up -d trino

Helm/Kubernetes

Connectors are configured in values.yaml under the catalogs section.

Example: Add Snowflake

Edit values.yaml:

services:
  trino:
    enabled: true
    catalogs:
      postgres:
        connector.name: "postgresql"
        connection-url: "jdbc:postgresql://dashboard-postgres:5432/dashboard"
        connection-user: "nexus_service"
        connection-password: "${ENV:POSTGRES_PASSWORD}"

      snowflake:
        connector.name: "snowflake"
        snowflake.account: "your-account.us-east-1"
        snowflake.user: "actyze_user"
        snowflake.password: "${ENV:SNOWFLAKE_PASSWORD}"
        snowflake.database: "ANALYTICS"
        snowflake.warehouse: "COMPUTE_WH"
        snowflake.role: "ANALYST"

Edit values-secrets.yaml:

secrets:
  trino:
    snowflakePassword: "your-snowflake-password"
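After editing both files, the new catalog only takes effect once the release is upgraded. The command below is a sketch: the release name `actyze` and chart path `./chart` are assumptions not stated in this document, so substitute the values used in your installation.

```shell
# Hypothetical rollout -- release name and chart path are placeholders.
helm upgrade actyze ./chart \
  -n actyze \
  -f values.yaml \
  -f values-secrets.yaml
```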

Connector Examples

MySQL

Docker (mysql.properties.tpl):

connector.name=mysql
connection-url=jdbc:mysql://${MYSQL_HOST}:${MYSQL_PORT}/${MYSQL_DB}
connection-user=${MYSQL_USER}
connection-password=${MYSQL_PASSWORD}
jdbc-url-parameters=useSSL=true&requireSSL=true

Helm (values.yaml):

catalogs:
  mysql:
    connector.name: "mysql"
    connection-url: "jdbc:mysql://mysql.example.com:3306/production"
    connection-user: "actyze_user"
    connection-password: "${ENV:MYSQL_PASSWORD}"
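Once the `mysql` catalog is active, each MySQL database appears as a schema under it. A quick way to confirm the catalog is wired up correctly:

```sql
-- Each MySQL database maps to a Trino schema under the mysql catalog.
SHOW SCHEMAS FROM mysql;
```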

MongoDB

Docker (mongodb.properties.tpl):

connector.name=mongodb
mongodb.connection-url=mongodb://${MONGO_HOST}:${MONGO_PORT}
mongodb.credentials=${MONGO_USER}:${MONGO_PASSWORD}@admin

Helm (values.yaml):

catalogs:
  mongodb:
    connector.name: "mongodb"
    mongodb.connection-url: "mongodb+srv://cluster.mongodb.net"
    mongodb.credentials: "${ENV:MONGO_USER}:${ENV:MONGO_PASSWORD}@admin"
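With the MongoDB connector, databases map to Trino schemas and collections map to tables, with column types inferred from sampled documents. The schema and collection names below are placeholders:

```sql
-- Databases map to schemas, collections to tables; names are placeholders.
SHOW TABLES FROM mongodb.mydb;
SELECT * FROM mongodb.mydb.events LIMIT 5;
```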

Databricks

Docker (databricks.properties.tpl):

connector.name=delta_lake
delta.databricks.server-hostname=${DATABRICKS_HOST}
delta.databricks.http-path=${DATABRICKS_HTTP_PATH}
delta.databricks.catalog=main
delta.databricks.auth-type=pat
delta.databricks.pat-token=${DATABRICKS_TOKEN}

Helm (values.yaml):

catalogs:
  databricks:
    connector.name: "delta_lake"
    delta.databricks.server-hostname: "your-workspace.cloud.databricks.com"
    delta.databricks.http-path: "/sql/1.0/warehouses/your-warehouse-id"
    delta.databricks.catalog: "main"
    delta.databricks.auth-type: "pat"
    delta.databricks.pat-token: "${ENV:DATABRICKS_TOKEN}"

Snowflake

Docker (snowflake.properties.tpl):

connector.name=snowflake
snowflake.account=${SNOWFLAKE_ACCOUNT}
snowflake.user=${SNOWFLAKE_USER}
snowflake.password=${SNOWFLAKE_PASSWORD}
snowflake.database=${SNOWFLAKE_DATABASE}
snowflake.warehouse=${SNOWFLAKE_WAREHOUSE}
snowflake.role=${SNOWFLAKE_ROLE}

Helm (values.yaml):

catalogs:
  snowflake:
    connector.name: "snowflake"
    snowflake.account: "your-account.us-east-1"
    snowflake.user: "actyze_user"
    snowflake.password: "${ENV:SNOWFLAKE_PASSWORD}"
    snowflake.database: "ANALYTICS"
    snowflake.warehouse: "COMPUTE_WH"
    snowflake.role: "ANALYST"

BigQuery

Docker (bigquery.properties.tpl):

connector.name=bigquery
bigquery.project-id=${BIGQUERY_PROJECT_ID}
bigquery.credentials-file=${BIGQUERY_CREDENTIALS_FILE}

Helm (values.yaml):

catalogs:
  bigquery:
    connector.name: "bigquery"
    bigquery.project-id: "your-project-id"
    bigquery.credentials-file: "/etc/bigquery/credentials.json"
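With the BigQuery connector, datasets in the configured project appear as schemas under the `bigquery` catalog. The dataset and table names below are placeholders:

```sql
-- BigQuery datasets map to Trino schemas; names are placeholders.
SELECT * FROM bigquery.my_dataset.my_table LIMIT 10;
```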

Testing Connectors

Verify Catalog

Docker:

docker exec dashboard-trino trino --execute "SHOW CATALOGS"

Kubernetes:

kubectl exec -n actyze deployment/dashboard-trino -- trino --execute "SHOW CATALOGS"

Test Query

# Docker
docker exec dashboard-trino trino --execute "SELECT * FROM mysql.production.customers LIMIT 5"

# Kubernetes
kubectl exec -n actyze deployment/dashboard-trino -- \
  trino --execute "SELECT * FROM snowflake.analytics.sales LIMIT 5"

Troubleshooting

Connector Not Appearing

Check catalog files:

# Docker
docker exec dashboard-trino ls -la /etc/trino/catalog/

# Kubernetes
kubectl exec -n actyze deployment/dashboard-trino -- ls -la /etc/trino/catalog/

Check Trino logs:

# Docker
docker logs dashboard-trino

# Kubernetes
kubectl logs -n actyze deployment/dashboard-trino

Connection Errors

Verify network connectivity:

# Docker
docker exec dashboard-trino ping mysql-server

# Kubernetes
kubectl exec -n actyze deployment/dashboard-trino -- ping mysql-server

Check credentials:

# Docker
docker exec dashboard-trino env | grep MYSQL

# Kubernetes
kubectl get secret dashboard-secrets -n actyze -o yaml

Permission Errors

Ensure the database user has the required permissions:

-- MySQL
GRANT SELECT ON database.* TO 'actyze_user'@'%';

-- PostgreSQL
GRANT SELECT ON ALL TABLES IN SCHEMA public TO actyze_user;

-- Snowflake
GRANT USAGE ON WAREHOUSE COMPUTE_WH TO ROLE analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO ROLE analyst;

Best Practices

Security

  • Use read-only database accounts for Actyze
  • Store credentials in secrets (Kubernetes secrets or environment variables)
  • Enable SSL/TLS for database connections
  • Use service accounts with minimal permissions

Performance

  • Configure connection pooling for high-traffic databases
  • Use query result caching where appropriate
  • Limit concurrent queries per connector
  • Monitor query performance and optimize indexes

High Availability

  • Configure multiple connection URLs for failover
  • Use connection retry logic
  • Monitor connector health
  • Set appropriate connection timeouts

Additional Resources

Support

For connector-specific issues:

  1. Check Trino connector documentation
  2. Verify database connectivity and permissions
  3. Review Trino logs for detailed error messages
  4. Test connection outside of Actyze using Trino CLI