At Woltex, we run n8n to automate our business operations. But running n8n in production isn't just spinning up a container. You need queue workers for reliability, metrics for visibility, and monitoring to catch issues before they become problems. Here's our complete production stack.
n8n is powerful for workflow automation, but out of the box it's missing production essentials. When a workflow fails at 3 AM, you need to know. When queue depth hits 500, you need visibility. When your automation infrastructure is business-critical, you need monitoring.
Queue workers handle workloads without blocking the main instance
Prometheus metrics expose every aspect of your n8n instance
Pre-built Grafana dashboard with alerts and visualization
┌───────────────────────────────────────────────────┐
│                 Production Stack                  │
├───────────────────────────────────────────────────┤
│                                                   │
│  ┌──────────┐     ┌────────────────────────────┐  │
│  │   n8n    │────>│   PostgreSQL Database      │  │
│  │   Main   │     │  (workflows + executions)  │  │
│  └──────────┘     └────────────────────────────┘  │
│       │                                           │
│       v                                           │
│  ┌──────────────────────────────────────────┐     │
│  │               Redis Queue                │     │
│  │    (job distribution & coordination)     │     │
│  └──────────────────────────────────────────┘     │
│        │                  │                       │
│        v                  v                       │
│  ┌──────────┐       ┌──────────┐                  │
│  │  Worker  │       │  Worker  │                  │
│  │    #1    │       │    #2    │                  │
│  └──────────┘       └──────────┘                  │
│        │                  │                       │
│        └────────┬─────────┘                       │
│                 v                                 │
│         ┌───────────────┐                         │
│         │  Prometheus   │                         │
│         │   (metrics)   │                         │
│         └───────────────┘                         │
│                 │                                 │
│                 v                                 │
│         ┌───────────────┐                         │
│         │    Grafana    │                         │
│         │  (dashboards) │                         │
│         └───────────────┘                         │
│                                                   │
└───────────────────────────────────────────────────┘
The foundation is a Docker Compose setup that orchestrates n8n with queue workers, Redis, PostgreSQL, and the monitoring stack.
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    ports:
      - "127.0.0.1:9090:9090"  # Only localhost
    networks:
      - n8n-network

  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:ro
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards:ro
      - ./grafana/dashboards:/var/lib/grafana/dashboards:ro
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - prometheus
    networks:
      - n8n-network

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n']
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    networks:
      - n8n-network

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    networks:
      - n8n-network

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # Database Configuration
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      # n8n Host Configuration
      N8N_HOST: ${N8N_HOST}
      N8N_PROTOCOL: https
      N8N_PORT: 5678
      WEBHOOK_URL: https://${N8N_HOST}/
      # Security
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_USER_MANAGEMENT_JWT_SECRET: ${JWT_SECRET}
      # Queue Mode Configuration (CRITICAL)
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      QUEUE_BULL_REDIS_PORT: 6379
      QUEUE_BULL_REDIS_DB: 0
      # Worker Health Check
      QUEUE_HEALTH_CHECK_ACTIVE: "true"
      # Runners (for AI Agent workflows)
      N8N_RUNNERS_AUTH_TOKEN: ${N8N_RUNNERS_AUTH_TOKEN}
      # Execution Data Management
      EXECUTIONS_DATA_SAVE_ON_ERROR: all
      EXECUTIONS_DATA_SAVE_ON_SUCCESS: all
      EXECUTIONS_DATA_SAVE_ON_PROGRESS: "true"
      EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS: "true"
      EXECUTIONS_DATA_PRUNE: "true"
      EXECUTIONS_DATA_MAX_AGE: 336  # 14 days in hours
      # Offload manual executions to workers (recommended)
      OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS: ${OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS}
      # Binary Data Storage (IMPORTANT for queue mode)
      N8N_DEFAULT_BINARY_DATA_MODE: filesystem
      # Concurrency for Main Process (handles webhooks, UI, schedules)
      N8N_CONCURRENCY_PRODUCTION_LIMIT: 3
      # Payload Configuration
      N8N_PAYLOAD_SIZE_MAX: 16
      # Metrics & Monitoring
      N8N_METRICS: "true"
      N8N_METRICS_INCLUDE_WORKFLOW_ID_LABEL: "true"
      N8N_METRICS_INCLUDE_NODE_TYPE_LABEL: "true"
      N8N_METRICS_INCLUDE_CREDENTIAL_TYPE_LABEL: "true"
      # Proxy Settings (if using Cloudflare Tunnel or reverse proxy)
      N8N_TRUST_PROXY: ${N8N_TRUST_PROXY}
      N8N_SECURE_COOKIE: ${N8N_SECURE_COOKIE}
      # Logging
      N8N_LOG_LEVEL: info
      N8N_LOG_OUTPUT: console,file
      N8N_LOG_FILE_LOCATION: /home/node/.n8n/logs/
      N8N_LOG_FILE_MAX_COUNT: 7
      # Timezone
      GENERIC_TIMEZONE: Europe/London
      TZ: Europe/London
    volumes:
      - n8n_data:/home/node/.n8n
      - n8n_files:/files
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
    networks:
      - n8n-network

  # Workers - handle workflow executions from the queue
  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker
    environment:
      # Database Configuration
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      # Queue Mode Configuration
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      QUEUE_BULL_REDIS_PORT: 6379
      QUEUE_BULL_REDIS_DB: 0
      # Worker Health Check
      QUEUE_HEALTH_CHECK_ACTIVE: "true"
      # Security
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      # Runners (for AI Agent workflows)
      N8N_RUNNERS_AUTH_TOKEN: ${N8N_RUNNERS_AUTH_TOKEN}
      # Worker Concurrency (adjust based on your workload)
      N8N_CONCURRENCY_PRODUCTION_LIMIT: 10
      # Binary Data Storage
      N8N_DEFAULT_BINARY_DATA_MODE: filesystem
      # Logging
      N8N_LOG_LEVEL: info
      N8N_LOG_OUTPUT: console
      # Timezone
      GENERIC_TIMEZONE: Europe/London
      TZ: Europe/London
    volumes:
      - n8n_data:/home/node/.n8n
      - n8n_files:/files
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      n8n:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      replicas: 2  # Start with 2 workers
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
    networks:
      - n8n-network

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  n8n_data:
    driver: local
  n8n_files:
    driver: local
  prometheus_data:
    driver: local
  grafana_data:
    driver: local

networks:
  n8n-network:
    driver: bridge
Key points: EXECUTIONS_MODE=queue switches n8n into queue mode, and QUEUE_BULL_REDIS_HOST tells the main instance and every worker where to find the Redis instance that coordinates them.
Prometheus needs to know where to scrape metrics. n8n exposes metrics at /metrics when configured properly.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'n8n'
    static_configs:
      - targets: ['n8n:5678']
    metrics_path: '/metrics'
What gets collected: process CPU and memory, Node.js event loop lag, garbage collection timings, heap usage, active workflow counts, instance role, and version info. These feed every panel in the dashboard below.
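Before wiring up Grafana, it's worth confirming the scrape target actually serves data. A quick sanity check (assumes the stack is running and relies on n8n's default n8n_ metric name prefix):

```shell
# Count metric lines exposed on the main instance's published port.
# Zero (or a connection error) means metrics aren't enabled or the
# port isn't published.
curl -s http://localhost:5678/metrics | grep -c '^n8n_' || echo "metrics endpoint unreachable"
```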
Grafana provisioning means dashboards and data sources are automatically loaded when Grafana starts. No manual clicking through the UI.
Save as: grafana/provisioning/dashboards/dashboards.yml
apiVersion: 1
providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    editable: true
    options:
      path: /var/lib/grafana/dashboards
Save as: grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
    editable: false
Complete production-ready dashboard with real-time monitoring of n8n performance, memory usage, event loop lag, garbage collection, and more.
Save as: grafana/dashboards/n8n.json
{
"title": "n8n Production Monitoring",
"uid": "n8n-prod",
"timezone": "browser",
"schemaVersion": 38,
"version": 2,
"refresh": "10s",
"time": {
"from": "now-6h",
"to": "now"
},
"tags": ["n8n", "production"],
"panels": [
{
"id": 1,
"type": "stat",
"title": "Active Workflows",
"gridPos": {"h": 6, "w": 4, "x": 0, "y": 0},
"targets": [
{
"expr": "n8n_active_workflow_count",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 5},
{"color": "red", "value": 10}
]
},
"unit": "short"
}
},
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
}
},
{
"id": 2,
"type": "stat",
"title": "Leader Status",
"gridPos": {"h": 6, "w": 4, "x": 4, "y": 0},
"targets": [
{
"expr": "n8n_instance_role_leader",
"refId": "A",
"instant": true
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [
{
"type": "value",
"options": {
"0": {"text": "Follower", "color": "yellow"},
"1": {"text": "Leader", "color": "green"}
}
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "gray", "value": null}
]
}
}
},
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "value_and_name"
}
},
{
"id": 3,
"type": "stat",
"title": "n8n Version",
"gridPos": {"h": 6, "w": 4, "x": 8, "y": 0},
"targets": [
{
"expr": "n8n_version_info",
"refId": "A",
"instant": true,
"format": "table"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "fixed", "fixedColor": "blue"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "blue", "value": null}
]
}
}
},
"options": {
"colorMode": "background",
"graphMode": "none",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "/^version$/",
"values": true
},
"textMode": "value"
},
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"major": true,
"minor": true,
"patch": true
},
"indexByName": {},
"renameByName": {}
}
}
]
},
{
"id": 4,
"type": "stat",
"title": "Memory Usage",
"gridPos": {"h": 6, "w": 4, "x": 12, "y": 0},
"targets": [
{
"expr": "n8n_process_resident_memory_bytes / 1024 / 1024",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 1024},
{"color": "red", "value": 1536}
]
},
"unit": "decmbytes"
}
},
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
}
},
{
"id": 5,
"type": "stat",
"title": "CPU Usage",
"gridPos": {"h": 6, "w": 4, "x": 16, "y": 0},
"targets": [
{
"expr": "rate(n8n_process_cpu_seconds_total[1m]) * 100",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 60},
{"color": "red", "value": 80}
]
},
"unit": "percent"
}
},
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
}
},
{
"id": 6,
"type": "stat",
"title": "Open File Descriptors",
"gridPos": {"h": 6, "w": 4, "x": 20, "y": 0},
"targets": [
{
"expr": "n8n_process_open_fds",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 200},
{"color": "red", "value": 400}
]
},
"unit": "short",
"max": 524288
}
},
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
}
},
{
"id": 7,
"type": "timeseries",
"title": "Event Loop Lag",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 6},
"targets": [
{
"expr": "n8n_nodejs_eventloop_lag_seconds * 1000",
"refId": "A",
"legendFormat": "Current Lag"
},
{
"expr": "n8n_nodejs_eventloop_lag_p99_seconds * 1000",
"refId": "B",
"legendFormat": "P99"
},
{
"expr": "n8n_nodejs_eventloop_lag_p90_seconds * 1000",
"refId": "C",
"legendFormat": "P90"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "palette-classic"},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"tooltip": false,
"viz": false,
"legend": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 50},
{"color": "red", "value": 100}
]
},
"unit": "ms"
}
},
"options": {
"legend": {
"calcs": ["last", "max"],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "none"
}
}
},
{
"id": 8,
"type": "timeseries",
"title": "Memory Usage Over Time",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 6},
"targets": [
{
"expr": "n8n_process_resident_memory_bytes / 1024 / 1024",
"refId": "A",
"legendFormat": "Resident Memory"
},
{
"expr": "n8n_nodejs_heap_size_used_bytes / 1024 / 1024",
"refId": "B",
"legendFormat": "Heap Used"
},
{
"expr": "n8n_nodejs_heap_size_total_bytes / 1024 / 1024",
"refId": "C",
"legendFormat": "Heap Total"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "palette-classic"},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"tooltip": false,
"viz": false,
"legend": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null}
]
},
"unit": "decmbytes"
}
},
"options": {
"legend": {
"calcs": ["last", "max"],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "none"
}
}
},
{
"id": 9,
"type": "timeseries",
"title": "Garbage Collection Duration",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 14},
"targets": [
{
"expr": "rate(n8n_nodejs_gc_duration_seconds_sum{kind=\"major\"}[1m])",
"refId": "A",
"legendFormat": "Major GC"
},
{
"expr": "rate(n8n_nodejs_gc_duration_seconds_sum{kind=\"minor\"}[1m])",
"refId": "B",
"legendFormat": "Minor GC"
},
{
"expr": "rate(n8n_nodejs_gc_duration_seconds_sum{kind=\"incremental\"}[1m])",
"refId": "C",
"legendFormat": "Incremental GC"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "palette-classic"},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"tooltip": false,
"viz": false,
"legend": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null}
]
},
"unit": "s"
}
},
"options": {
"legend": {
"calcs": ["last", "max"],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "none"
}
}
},
{
"id": 10,
"type": "timeseries",
"title": "Active Resources",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 14},
"targets": [
{
"expr": "n8n_nodejs_active_resources_total",
"refId": "A",
"legendFormat": "Total Active Resources"
},
{
"expr": "n8n_nodejs_active_handles_total",
"refId": "B",
"legendFormat": "Active Handles"
},
{
"expr": "n8n_nodejs_active_requests_total",
"refId": "C",
"legendFormat": "Active Requests"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "palette-classic"},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"tooltip": false,
"viz": false,
"legend": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null}
]
},
"unit": "short"
}
},
"options": {
"legend": {
"calcs": ["last", "max"],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "multi",
"sort": "none"
}
}
},
{
"id": 11,
"type": "bargauge",
"title": "Heap Space Usage",
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 22},
"targets": [
{
"expr": "n8n_nodejs_heap_space_size_used_bytes / n8n_nodejs_heap_space_size_total_bytes * 100",
"refId": "A",
"legendFormat": "{{space}}"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"max": 100,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "green", "value": null},
{"color": "yellow", "value": 70},
{"color": "red", "value": 90}
]
},
"unit": "percent"
}
},
"options": {
"displayMode": "gradient",
"minVizHeight": 10,
"minVizWidth": 0,
"orientation": "horizontal",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"showUnfilled": true
}
},
{
"id": 12,
"type": "stat",
"title": "Process Uptime",
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 22},
"targets": [
{
"expr": "(time() - n8n_process_start_time_seconds)",
"refId": "A"
}
],
"fieldConfig": {
"defaults": {
"color": {"mode": "thresholds"},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{"color": "red", "value": null},
{"color": "yellow", "value": 3600},
{"color": "green", "value": 86400}
]
},
"unit": "s"
}
},
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"textMode": "auto"
}
}
]
}
Dashboard includes: active workflows, leader status, version info, memory/CPU usage, event loop lag (P90/P99), garbage collection metrics, heap space usage, and process uptime.
Pro tip: Export dashboards from Grafana as JSON, commit them to git, and they'll load automatically on every new deployment.
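One way to script that export is Grafana's HTTP API. A sketch, assuming jq is installed and using the n8n-prod uid from the dashboard JSON above:

```shell
# Fetch the dashboard by uid and strip the API wrapper, leaving plain
# dashboard JSON that can be committed and re-provisioned on deploy.
mkdir -p grafana/dashboards
curl -s -u "admin:${GRAFANA_PASSWORD}" \
  "http://localhost:3000/api/dashboards/uid/n8n-prod" \
  | jq '.dashboard' > grafana/dashboards/n8n.json
```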
Don't hardcode credentials in docker-compose.yml. Create a .env file in the same directory:
# Generate secure keys with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=your-64-char-hex-key-here
JWT_SECRET=your-64-char-hex-key-here
# Database password (generate with: openssl rand -base64 24)
POSTGRES_PASSWORD=your-secure-db-password
# Domain configuration
N8N_HOST=n8n.yourdomain.com
# Runners authentication token (generate with: openssl rand -base64 48)
N8N_RUNNERS_AUTH_TOKEN=your-runners-token-here
# Move manual executions to workers (recommended)
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
# Proxy settings (if using Cloudflare Tunnel or reverse proxy)
N8N_TRUST_PROXY=true
N8N_SECURE_COOKIE=false
# Grafana admin password (generate with: openssl rand -base64 24)
GRAFANA_PASSWORD=your-grafana-password
Quick setup: Run these commands to generate all secrets at once:
echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)" >> .env
echo "JWT_SECRET=$(openssl rand -hex 32)" >> .env
echo "POSTGRES_PASSWORD=$(openssl rand -base64 24)" >> .env
echo "N8N_RUNNERS_AUTH_TOKEN=$(openssl rand -base64 48)" >> .env
echo "GRAFANA_PASSWORD=$(openssl rand -base64 24)" >> .env
echo "N8N_HOST=n8n.yourdomain.com" >> .env
echo "OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true" >> .env
echo "N8N_TRUST_PROXY=true" >> .env
echo "N8N_SECURE_COOKIE=false" >> .env
Start with 2 workers. Monitor queue depth in Grafana. If jobs back up, scale horizontally:
docker-compose up -d --scale n8n-worker=4
Configure Grafana alerts for critical conditions: sustained event loop lag, memory approaching the container limit, and a queue that keeps growing.
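Grafana aside, you can read the queue depth straight from Redis when debugging. A sketch, assuming bull:jobs is the Bull key prefix n8n uses by default (it can differ between versions):

```shell
# Length of Bull's waiting list: 0 means the workers are keeping up.
docker compose exec redis redis-cli llen bull:jobs:wait || echo "redis not reachable"
```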
Your workflows and execution history live in PostgreSQL. Automate backups:
# Daily backup cron (percent signs must be escaped in crontab entries)
0 2 * * * docker exec n8n-postgres pg_dump -U n8n n8n > /backups/n8n-$(date +\%Y\%m\%d).sql
Don't expose ports directly to the internet. Put everything behind a reverse proxy (nginx, Traefik) with TLS, or use a tunnel solution like Cloudflare Tunnel.
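To keep the backup directory from growing forever, pair the nightly dump with retention pruning, and know the restore path before you need it. A sketch; the container name follows the cron line above, so adjust it to your compose project:

```shell
# Retention: prune dumps older than 14 days (pairs with the daily cron).
find /backups -name 'n8n-*.sql' -mtime +14 -delete 2>/dev/null || true

# Restore path (run manually; stop the workers first so nothing writes
# mid-restore). YYYYMMDD is a placeholder for the dump you want:
#   docker exec -i n8n-postgres psql -U n8n -d n8n < /backups/n8n-YYYYMMDD.sql
```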
mkdir n8n-production && cd n8n-production
Add docker-compose.yml (from Step 1 above) and prometheus.yml (from Step 2) in the project root, then create the Grafana provisioning structure:
mkdir -p grafana/provisioning/dashboards
mkdir -p grafana/provisioning/datasources
mkdir -p grafana/dashboards
# Paste dashboards.yml into grafana/provisioning/dashboards/
# Paste prometheus.yml into grafana/provisioning/datasources/
# Paste n8n.json into grafana/dashboards/
Create a .env file with secrets (use the quick setup commands from the "Running It in Production" section above), then bring everything up:
docker-compose up -d
Grafana is at http://localhost:3000 (log in with the password from your .env file) and n8n is at http://localhost:5678.
First time? Give it 30 seconds to fully start up. Check logs with docker-compose logs -f if something doesn't load.
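Once the containers are up, a quick way to verify all three services at once (the endpoints match the configs above):

```shell
# Each endpoint answers HTTP 200 once its service is ready; -f makes
# curl fail silently on HTTP errors so only one message prints per line.
curl -sf http://localhost:5678/healthz    && echo "n8n: up"        || echo "n8n: not ready"
curl -sf http://localhost:9090/-/healthy  && echo "prometheus: up" || echo "prometheus: not ready"
curl -sf http://localhost:3000/api/health && echo "grafana: up"    || echo "grafana: not ready"
```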
💡 This is how we run n8n at Woltex. Full visibility, reliable execution, zero surprises. Copy the configs, adjust for your needs, and ship it.