DevOps

Docker and Docker Compose for Node.js: A Production Guide

Learn how to containerize a Node.js application with Docker, orchestrate multi-service stacks with Docker Compose, and deploy production-ready containers with health checks, volumes, and networking.

Shantanu Kumar
Chief Solutions Architect
March 13, 2026
21 min read
Updated March 2026
Docker transforms "it works on my machine" into "it works everywhere."

Docker has become the standard for packaging and deploying Node.js applications. Instead of configuring servers manually and hoping the production environment matches development, you ship a container that includes your application, its dependencies, and its runtime — identical everywhere it runs.

This guide covers the Docker workflow we use at Dude Lemon for production client deployments: writing optimized Dockerfiles, building multi-service stacks with Docker Compose, handling environment variables securely, implementing health checks, and deploying containers to AWS EC2 and other cloud platforms.

Containers do not replace good architecture. They enforce it.

1) Writing a Production Dockerfile

Most Node.js Dockerfiles you find online are single-stage builds that include development dependencies, lack proper user permissions, and produce images 3-4x larger than necessary. A production Dockerfile should use multi-stage builds, run as a non-root user, and include only what the application needs to run.

Dockerfile
# ── Stage 1: Install dependencies ────────────────────────────
FROM node:20-alpine AS deps
WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force

# ── Stage 2: Build (if you have a build step) ───────────────
FROM node:20-alpine AS build
WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# ── Stage 3: Production image ────────────────────────────────
FROM node:20-alpine AS production
WORKDIR /app

# Security: run as non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

# Copy only production dependencies and built assets
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json ./

# Set environment
ENV NODE_ENV=production
ENV PORT=3000

# Switch to non-root user
USER appuser

EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/server.js"]
  • Multi-stage builds — separate dependency installation, build step, and runtime into distinct stages. The final image only contains production code.
  • Alpine base — node:20-alpine is ~50MB vs ~350MB for the full image. Smaller images deploy faster and have fewer vulnerabilities.
  • npm ci — installs exact versions from package-lock.json. Never use npm install in Docker builds.
  • Non-root user — if an attacker exploits your application, they cannot escalate to root privileges inside the container.
  • HEALTHCHECK — Docker and orchestrators use this to know when your container is ready to receive traffic.
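To see the payoff of the multi-stage approach, build just the production target and compare sizes. The `myapp` image name here is a placeholder; substitute your own project name:

```shell
# Build only the final "production" stage; the deps and build stages
# are used as COPY sources but never shipped.
docker build --target production -t myapp:prod .

# Compare image sizes; the production image should be a fraction of a
# naive single-stage build that keeps devDependencies and source files.
docker image ls myapp

# Inspect layer history to confirm no dev tooling or secrets landed in a layer.
docker history myapp:prod
```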

2) The .dockerignore File

Without a .dockerignore file, Docker copies everything in your project directory into the build context — including node_modules, .git, environment files, and test fixtures. This makes builds slower and can leak secrets into your image.

.dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
*.md
tests
coverage
.nyc_output
.vscode
.idea
docker-compose*.yml
Dockerfile*
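A quick way to verify the .dockerignore is doing its job is to build the image and list its filesystem. This is a sketch that assumes the multi-stage Dockerfile from the previous section:

```shell
# Build quietly and capture the resulting image ID.
IMAGE_ID=$(docker build -q .)

# /app should contain only node_modules (installed in the deps stage),
# dist, and package.json; no .env, .git, or test directories from the host.
docker run --rm "$IMAGE_ID" ls -la /app
```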
Docker Compose orchestrates your entire application stack — API, database, cache, and reverse proxy.

3) Docker Compose for Multi-Service Applications

Production Node.js applications rarely run in isolation. A typical stack includes the API server, a PostgreSQL database, a Redis cache, and an Nginx reverse proxy. Docker Compose defines this entire stack in a single YAML file and launches all services with one command.

docker-compose.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    container_name: app-api
    restart: unless-stopped
    env_file: .env
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s

  postgres:
    image: postgres:16-alpine
    container_name: app-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: app-redis
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data
    ports:
      - "6379:6379"
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  nginx:
    image: nginx:alpine
    container_name: app-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      api:
        condition: service_healthy
    networks:
      - app-network

volumes:
  postgres-data:
  redis-data:

networks:
  app-network:
    driver: bridge

Key patterns in this configuration: depends_on with condition: service_healthy ensures your API does not start until PostgreSQL and Redis are actually ready — not just running. Named volumes (postgres-data, redis-data) persist data across container restarts.
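With that file in place, bringing the stack up and verifying it is a short sequence:

```shell
# Start every service in the background, in dependency order.
docker compose up -d

# The STATUS column shows "(healthy)" once a service's healthcheck passes;
# the api container is only started after postgres and redis report healthy.
docker compose ps

# Tail the API logs to confirm it connected to both backing services.
docker compose logs -f api
```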

4) Environment Variables in Docker

Never bake secrets into your Docker image. Environment variables should be injected at runtime through env_file in Docker Compose or through your cloud provider's secrets management. For secure secrets handling patterns, see our guide on securing Node.js applications in production.

.env (never committed to git)
NODE_ENV=production
PORT=3000

# Database
DB_NAME=myapp
DB_USER=myapp_user
DB_PASSWORD=strong_random_password_here
DATABASE_URL=postgresql://myapp_user:strong_random_password_here@postgres:5432/myapp

# Redis
REDIS_PASSWORD=another_strong_password
REDIS_URL=redis://:another_strong_password@redis:6379

# Auth
JWT_ACCESS_SECRET=your-access-secret
JWT_REFRESH_SECRET=your-refresh-secret

# CORS
ALLOWED_ORIGINS=https://myapp.com,https://admin.myapp.com

Notice the database host is postgres, not localhost. Inside a Docker network, services reference each other by their service name defined in docker-compose.yml. This is one of the most common mistakes when moving from local development to Docker.
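You can confirm service-name resolution from inside the api container. This sketch uses Node's own resolver, since the Alpine base image ships with few networking tools:

```shell
# Resolve the "postgres" service name via Docker's embedded DNS.
docker compose exec api node -e \
  "require('dns').lookup('postgres', (err, addr) => console.log(err || addr))"

# Note: "localhost" inside a container refers to the container itself,
# which is why ...@localhost:5432 fails where ...@postgres:5432 works.
```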

5) Nginx Reverse Proxy Configuration

Nginx sits in front of your Node.js API, handling SSL termination, static file serving, request buffering, and load balancing. This is the same pattern we covered in our EC2 deployment guide, containerized for Docker.

nginx/nginx.conf
events {
  worker_connections 1024;
}

http {
  upstream api {
    server api:3000;
  }

  # Rate limiting
  limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

  server {
    listen 80;
    server_name myapp.com;

    # Redirect HTTP to HTTPS
    return 301 https://$host$request_uri;
  }

  server {
    listen 443 ssl http2;
    server_name myapp.com;

    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    # Gzip compression
    gzip on;
    gzip_types text/plain application/json application/javascript text/css;
    gzip_min_length 1000;

    location /api {
      limit_req zone=api_limit burst=20 nodelay;

      proxy_pass http://api;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_cache_bypass $http_upgrade;
    }

    location /health {
      proxy_pass http://api;
    }
  }
}
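Before reloading Nginx with a changed config, validate it inside the running container; a syntax error would otherwise take the proxy down on restart:

```shell
# Test the configuration for syntax errors without applying it.
docker compose exec nginx nginx -t

# If the test passes, reload gracefully without dropping live connections.
docker compose exec nginx nginx -s reload
```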

6) Health Check Endpoints

Docker, Nginx, and load balancers all need to know if your application is healthy. A proper health check endpoint verifies not just that the process is running, but that it can reach its dependencies — database, cache, and external services.

routes/health.js
import pool from '../db/pool.js'
import { createClient } from 'redis'

const redis = createClient({ url: process.env.REDIS_URL })
await redis.connect()

export async function healthCheck(req, res) {
  const checks = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    services: {},
  }

  // Check PostgreSQL
  try {
    await pool.query('SELECT 1')
    checks.services.postgres = 'connected'
  } catch {
    checks.services.postgres = 'disconnected'
    checks.status = 'degraded'
  }

  // Check Redis
  try {
    await redis.ping()
    checks.services.redis = 'connected'
  } catch {
    checks.services.redis = 'disconnected'
    checks.status = 'degraded'
  }

  const statusCode = checks.status === 'healthy' ? 200 : 503
  res.status(statusCode).json(checks)
}

Return a 503 status code when any dependency is down. Load balancers stop routing traffic to the instance, and the Docker HEALTHCHECK marks the container unhealthy. Note that plain Docker Engine records unhealthy status but does not restart the container on its own; acting on it requires an orchestrator (Swarm, Kubernetes) or a watchdog container.
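Probing the endpoint from the host shows both behaviors, assuming the stack above is running on port 3000:

```shell
# Inspect the JSON body and per-service status.
curl -s http://localhost:3000/health

# -f makes curl exit non-zero on a 503, so scripts can branch on health.
if curl -sf http://localhost:3000/health > /dev/null; then
  echo "instance healthy"
else
  echo "instance degraded, stop routing traffic here"
fi
```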

7) Docker Networking Deep Dive

Docker Compose creates an isolated network for your services. Understanding how container networking works prevents the most common Docker debugging headaches.

  • Service discovery — containers reference each other by service name (postgres, redis, api), not by IP address
  • Port mapping — 3000:3000 maps host port to container port. Only expose ports you need accessible from outside Docker
  • Internal communication — services on the same Docker network communicate directly without port mapping
  • Bridge network — the default driver isolates your stack from other containers on the same host
  • DNS resolution — Docker provides built-in DNS so service names resolve automatically within the network
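To see exactly who is on the network, inspect it by name. Compose prefixes the network with the project name; `app` is assumed here, so adjust to your own directory or project name:

```shell
# List each attached container with its internal IP address.
docker network inspect app_app-network \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'

# Postgres is reachable from the host only because of the "5432:5432"
# mapping; remove that mapping to keep it container-to-container only.
```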

8) Volume Management for Persistent Data

Containers are ephemeral: when a container is removed, its writable layer is destroyed with it. Volumes persist data independently of the container lifecycle. For databases and caches, volumes are critical.

Docker volume commands
# List all volumes
docker volume ls

# Inspect a specific volume
docker volume inspect app_postgres-data

# Back up PostgreSQL data
docker exec app-postgres pg_dump -U myapp_user myapp > backup.sql

# Restore from backup
docker exec -i app-postgres psql -U myapp_user myapp < backup.sql

# Remove unused volumes (careful!)
docker volume prune

# Copy files from container to host
docker cp app-api:/app/logs ./local-logs
Docker containers deploy identically across development, staging, and production environments worldwide.

9) Production Deployment Workflow

Here is the deployment workflow we use for Docker-based applications at Dude Lemon. This integrates with GitHub Actions for CI/CD and deploys to AWS EC2 instances.

scripts/deploy.sh
#!/bin/bash
set -euo pipefail

echo "🚀 Starting deployment..."

# Pull latest code
cd /var/www/myapp
git pull origin main

# Build new images
docker compose build --no-cache api

# Run database migrations with the freshly built image
# (a one-off container, not the old running one)
docker compose run --rm api npm run migrate

# Recreate only the API container
docker compose up -d --no-deps api

# Wait for health check
echo "Waiting for health check..."
for i in {1..30}; do
  if curl -sf http://localhost:3000/health > /dev/null 2>&1; then
    echo "Health check passed"
    break
  fi
  if [ "$i" -eq 30 ]; then
    echo "Health check failed, aborting" >&2
    exit 1
  fi
  sleep 2
done

# Clean up old images
docker image prune -f

echo "Deployment complete"

The --no-deps flag recreates only the API container without touching PostgreSQL, Redis, or Nginx. A single-container recreate still incurs a brief restart window; for true zero downtime, run multiple API replicas behind Nginx and restart them one at a time. The health check wait loop ensures the deploy fails loudly rather than silently leaving an unhealthy container in service.
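The wait loop in the script generalizes into a small reusable helper, sketched here: `retry` takes an attempt count, a delay in seconds, and the command to run.

```shell
# retry <attempts> <delay-seconds> <command...>
# Returns 0 as soon as the command succeeds, 1 if every attempt fails.
retry() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Usage in a deploy script (placeholder URL):
# retry 30 2 curl -sf http://localhost:3000/health
```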

10) Docker Security Best Practices

  • Always run as a non-root user inside containers
  • Use specific image tags (node:20-alpine), never :latest
  • Scan images for vulnerabilities with docker scout or Trivy
  • Use read-only filesystem where possible: read_only: true
  • Limit container resources with mem_limit and cpus
  • Never store secrets in Dockerfiles or image layers
  • Use multi-stage builds to minimize attack surface
  • Keep base images updated for security patches
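Two of those practices as concrete commands. This assumes Trivy is installed on the host and an image tagged `myapp:prod`:

```shell
# Scan the image for known HIGH and CRITICAL CVEs.
trivy image --severity HIGH,CRITICAL myapp:prod

# Run with a read-only root filesystem, a tmpfs for scratch space,
# hard memory/CPU limits, and an explicit non-root user.
docker run --read-only --tmpfs /tmp \
  --memory=512m --cpus=1.5 \
  --user 1001:1001 \
  -p 3000:3000 myapp:prod
```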

11) Debugging Docker Containers

Essential Docker debugging commands
# View running containers
docker compose ps

# Stream logs from a service
docker compose logs -f api

# Execute a command inside a running container
docker compose exec api sh

# Check container resource usage
docker stats

# Inspect container networking
docker network inspect app_app-network

# View container events
docker events --filter container=app-api

# Check why a container exited
docker inspect app-api --format='{{.State.ExitCode}} {{.State.Error}}'

Conclusion: Containers Are Production Infrastructure

Docker is not just a development convenience — it is production infrastructure. The patterns in this guide ensure your Node.js applications run reliably, securely, and consistently across every environment. Combined with PM2 cluster mode for process management and proper security hardening, Docker gives you a complete production deployment system.

At Dude Lemon, we containerize every production application we build. Whether you are deploying a single API or a multi-service platform with databases, caches, and background workers, Docker Compose gives you a reproducible, versionable, and deployable architecture. If you need help containerizing your application or building a CI/CD pipeline for Docker deployments, get in touch with our DevOps team.

Ship containers, not instructions. If your deployment requires a wiki page, you are not done yet.
