DevOps

Deploy a Node.js App to AWS EC2 With PM2, Nginx, and SSL

Step-by-step guide to deploying a Node.js application on AWS EC2 with PM2 process management, Nginx reverse proxy, free SSL certificates, and zero-downtime deployments.

Shantanu Kumar
Chief Solutions Architect
March 12, 2026
20 min read
Updated March 2026

Deploying a Node.js application to production is where most tutorials fall short. They show you how to run node server.js on an EC2 instance and call it done. But production deployment means your app survives server restarts, handles traffic spikes, serves over HTTPS, and can be updated without dropping active connections.

This guide covers the full deployment pipeline we use at Dude Lemon for client projects on AWS: provisioning an EC2 instance, configuring PM2 for process management and clustering, setting up Nginx as a reverse proxy, installing free SSL certificates with Let's Encrypt, and building a deployment script that achieves zero-downtime updates.

Production is not a server running your code. Production is a system that keeps running your code when everything else goes wrong.

1) Provision and secure your EC2 instance

Start with an Ubuntu 24.04 LTS instance. For most Node.js applications, a t3.small (2 vCPU, 2 GB RAM) is sufficient for initial production traffic. You can always scale up later. When creating the instance, configure the security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443).

Initial Server Setup:

```bash
# Connect to your instance
ssh -i ~/.ssh/your-key.pem ubuntu@your-instance-ip

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install essential tools
sudo apt install -y curl git build-essential ufw

# Configure firewall
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status

# Set timezone
sudo timedatectl set-timezone America/Los_Angeles
```

The firewall configuration is critical. UFW (Uncomplicated Firewall) blocks all incoming traffic except SSH and Nginx ports. Your Node.js application runs on a high port (like 3000) that is only accessible through the Nginx reverse proxy — never directly from the internet. This prevents direct attacks against your application server.

2) Install Node.js with NVM

Use NVM (Node Version Manager) instead of the system package manager. NVM lets you switch Node versions without sudo, run multiple versions side by side, and update without breaking system dependencies. This matters when you maintain multiple applications on the same server.

Node.js Installation:

```bash
# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

# Reload shell profile
source ~/.bashrc

# Install latest LTS version
nvm install --lts
nvm alias default lts/*

# Verify installation
node --version   # v22.x.x
npm --version    # 10.x.x

# Install global tools
npm install -g pm2
```

3) Clone and configure your application

Create a dedicated directory structure for your application. Keeping application code, logs, and configuration in predictable locations makes debugging and automation much easier. Use environment variables for all configuration — never hardcode database URLs, API keys, or secrets.

Application Setup:

```bash
# Create application directory
sudo mkdir -p /var/www/myapp
sudo chown ubuntu:ubuntu /var/www/myapp

# Clone your repository
cd /var/www/myapp
git clone git@github.com:your-org/your-api.git app
cd app

# Install production dependencies only
npm ci --omit=dev

# Create environment file
cat > .env << 'EOF'
NODE_ENV=production
PORT=3000
DB_HOST=your-database-host
DB_PORT=5432
DB_NAME=your_database
DB_USER=your_user
DB_PASSWORD=your_secure_password
JWT_SECRET=your-256-bit-secret
ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
EOF

# Secure the environment file
chmod 600 .env
```

The npm ci command is different from npm install. It does a clean install from the lockfile, which is faster, deterministic, and catches dependency drift between local development and production. The chmod 600 on the .env file ensures only the file owner can read it — other system users cannot access your secrets.

4) Configure PM2 for production process management

PM2 is a process manager that keeps your Node.js application running. It handles automatic restarts on crash, log management, cluster mode for multi-core utilization, and startup scripts that survive server reboots. Without PM2, a single uncaught exception takes your application down until someone logs in and restarts it by hand.

ecosystem.config.cjs:

```javascript
module.exports = {
  apps: [
    {
      name: 'my-api',
      script: './server.js',
      instances: 'max',     // Use all available CPU cores
      exec_mode: 'cluster', // Enable cluster mode
      max_memory_restart: '512M',

      // Environment variables
      env: {
        NODE_ENV: 'production',
        PORT: 3000,
      },

      // Logging
      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
      error_file: '/var/www/myapp/logs/error.log',
      out_file: '/var/www/myapp/logs/output.log',
      merge_logs: true,
      log_type: 'json',

      // Restart behavior
      max_restarts: 10,
      min_uptime: '10s',
      restart_delay: 4000,
      autorestart: true,

      // Graceful shutdown
      kill_timeout: 5000,
      listen_timeout: 10000,
      shutdown_with_message: true,

      // Watch (disabled in production)
      watch: false,
    },
  ],
}
```

Cluster mode is the most important PM2 feature for production. Node.js runs on a single thread by default, which means a quad-core server only uses 25% of its CPU. Cluster mode spawns one process per core and distributes incoming connections across all of them. On a t3.small with 2 cores, you double your throughput instantly.

PM2 Commands:

```bash
# Create logs directory
mkdir -p /var/www/myapp/logs

# Start application with PM2
cd /var/www/myapp/app
pm2 start ecosystem.config.cjs

# Verify processes are running
pm2 status

# View real-time logs
pm2 logs my-api --lines 50

# Monitor CPU and memory usage
pm2 monit

# Configure PM2 to start on boot
pm2 startup
# Run the command PM2 outputs (sudo env PATH=...)
pm2 save

# Zero-downtime reload (graceful)
pm2 reload my-api

# Hard restart (drops connections)
pm2 restart my-api
```

The difference between pm2 reload and pm2 restart is critical. Reload performs a rolling restart — it starts new processes, waits for them to be ready, then kills old ones. Active connections are never dropped. Restart kills all processes immediately and starts new ones, which means every in-flight request gets a connection reset error. Always use reload for deployments.

5) Configure Nginx as a reverse proxy

Nginx sits in front of your Node.js application and handles SSL termination, static file serving, gzip compression, request buffering, and connection management. This offloads work from Node.js so your application code focuses on business logic, not HTTP plumbing.

Install Nginx:

```bash
# Install Nginx
sudo apt install -y nginx

# Remove default site
sudo rm /etc/nginx/sites-enabled/default

# Create your site configuration
sudo nano /etc/nginx/sites-available/myapp
```
/etc/nginx/sites-available/myapp:

```nginx
upstream node_backend {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               application/xml+rss text/javascript;

    # Client body size limit
    client_max_body_size 10M;

    # Proxy to Node.js
    location /api {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Health check endpoint
    location /health {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        access_log off;
    }

    # Block common attack paths
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}
```
Enable and test Nginx:

```bash
# Enable the site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

# Test configuration syntax
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

# Verify Nginx is running
sudo systemctl status nginx
```

The upstream block with keepalive 64 maintains persistent connections between Nginx and Node.js, avoiding the overhead of TCP handshakes on every request. The location block that denies access to dotfiles prevents attackers from accessing .env, .git, or other sensitive hidden files through the web server.
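One practical consequence of proxying: inside Node.js, `req.socket.remoteAddress` now always reports Nginx's address (127.0.0.1), not the client's. The application has to read the forwarded headers that the config above sets. A sketch — `clientIp` and `isSecure` are illustrative helper names, and Express users could instead rely on Express's built-in `trust proxy` setting:

```javascript
// Behind the Nginx reverse proxy, the TCP peer is always 127.0.0.1.
// The original client address arrives in X-Forwarded-For, which the
// config above populates via $proxy_add_x_forwarded_for.
function clientIp(req) {
  const forwarded = req.headers['x-forwarded-for'];
  if (forwarded) {
    // The header can be a chain ("client, proxy1, proxy2"); with exactly
    // one trusted proxy in front, the first entry is the real client.
    return forwarded.split(',')[0].trim();
  }
  return req.socket.remoteAddress;
}

function isSecure(req) {
  // Nginx sets X-Forwarded-Proto to "https" after SSL termination.
  return req.headers['x-forwarded-proto'] === 'https';
}
```

Getting this wrong matters for rate limiting and audit logs: without it, every request appears to come from 127.0.0.1.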

6) Install SSL certificates with Let's Encrypt

SSL is not optional. Modern browsers flag HTTP sites as insecure, search engines penalize them in rankings, and APIs that handle any user data must encrypt traffic. Let's Encrypt provides free, auto-renewing SSL certificates through Certbot.

SSL Setup with Certbot:

```bash
# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Obtain and install certificate
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

# Certbot automatically:
# 1. Verifies domain ownership
# 2. Generates certificates
# 3. Updates Nginx config to use SSL
# 4. Adds HTTP → HTTPS redirect
# 5. Sets up auto-renewal

# Verify auto-renewal works
sudo certbot renew --dry-run

# Check certificate expiry
sudo certbot certificates
```

Certbot modifies your Nginx configuration automatically. It adds SSL certificate paths, enables TLS 1.2 and 1.3, configures secure cipher suites, and adds an HTTP to HTTPS redirect. After running Certbot, your Nginx config will have a listen 443 ssl block with all the correct settings.

Certificates from Let's Encrypt expire every 90 days. The certbot package installs a systemd timer that runs renewal checks twice daily. As long as port 80 remains accessible for the ACME challenge, renewals happen automatically with zero manual intervention.

7) Automated deployment script

A deployment script turns a multi-step manual process into a single command. Good deployment scripts are idempotent (safe to run multiple times), provide clear output at each stage, and handle failures gracefully. This script pulls the latest code, installs dependencies, and performs a zero-downtime PM2 reload.

deploy.sh:

```bash
#!/bin/bash
set -euo pipefail

APP_DIR="/var/www/myapp/app"
LOG_FILE="/var/www/myapp/logs/deploy.log"

log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

log "=== Deployment started ==="

# Navigate to application directory
cd "$APP_DIR"

# Pull latest code
log "Pulling latest changes..."
git fetch origin main
git reset --hard origin/main

# Install dependencies
log "Installing dependencies..."
npm ci --omit=dev

# Run database migrations (if applicable)
# log "Running migrations..."
# npm run migrate

# Build step (if applicable)
if [ -f "package.json" ] && grep -q '"build"' package.json; then
  log "Building application..."
  npm run build
fi

# Zero-downtime reload
log "Reloading application..."
pm2 reload ecosystem.config.cjs --update-env

# Wait for processes to stabilize
sleep 3

# Health check
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:3000/health || true)
if [ "$STATUS" = "200" ]; then
  log "Health check passed (HTTP $STATUS)"
  log "=== Deployment successful ==="
else
  log "WARNING: Health check returned HTTP $STATUS"
  log "Rolling back..."
  git reset --hard HEAD~1
  npm ci --omit=dev
  pm2 reload ecosystem.config.cjs --update-env
  log "=== Rollback completed ==="
  exit 1
fi
```
Make the deploy script executable and run it:

```bash
# Make executable
chmod +x /var/www/myapp/app/deploy.sh

# Deploy from your local machine
ssh your-server 'bash /var/www/myapp/app/deploy.sh'

# Or run directly on the server
./deploy.sh
```

The set -euo pipefail at the top is essential. The -e flag stops execution on any error, -u treats unset variables as errors, and -o pipefail catches failures inside piped commands. Without these, a failed git fetch would still be followed by npm ci and pm2 reload, potentially deploying broken code.
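The pipefail behavior is easy to demonstrate in any bash shell:

```shell
# Without pipefail, a pipeline's exit status is the LAST command's status,
# so an upstream failure is silently masked.
false | true
echo "without pipefail: $?"   # prints 0 — the failure of `false` is hidden

# With pipefail, any failing stage fails the whole pipeline.
set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1 — the failure surfaces
```

This is exactly the case in the deploy script's log function, where a command's output is piped through tee.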

The health check after reload is your safety net. If the application fails to start properly, the script automatically rolls back to the previous commit and reloads. This turns what would be a 2 AM emergency into a failed deployment that leaves the previous working version running.

8) Log management and monitoring

Logs are your primary debugging tool in production. PM2 manages application logs, but you also need to monitor Nginx access logs, error logs, and system resources. Set up log rotation to prevent disk space exhaustion, and use structured logging for easier parsing.

Log Rotation Configuration:

```bash
# Install logrotate module for PM2 logs
pm2 install pm2-logrotate

# Configure rotation settings
pm2 set pm2-logrotate:max_size 50M
pm2 set pm2-logrotate:retain 14
pm2 set pm2-logrotate:compress true
pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm
pm2 set pm2-logrotate:workerInterval 30

# Stream PM2 logs in JSON format
pm2 logs --json | head -20

# Monitor system resources
htop       # Interactive process viewer
df -h      # Disk usage
free -h    # Memory usage
ss -tlnp   # Active listening ports
```
Useful monitoring commands:

```bash
# Check if your app is responding
curl -s http://localhost:3000/health | jq

# Watch PM2 process status
watch pm2 status

# Tail application errors
pm2 logs my-api --err --lines 100

# Check Nginx access logs
sudo tail -f /var/log/nginx/access.log

# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log

# Find processes using port 3000
sudo lsof -i :3000
```
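To make the structured-logging advice concrete, here is a dependency-free sketch — real projects often reach for a library such as pino or winston instead, and `formatLog`/`log` are illustrative names. Emitting one JSON object per line pairs naturally with jq and with log shippers:

```javascript
// One JSON object per line: trivially parseable by jq, CloudWatch,
// or any log shipper. formatLog/log are illustrative helper names.
function formatLog(level, msg, fields = {}) {
  return JSON.stringify({
    ts: new Date().toISOString(),
    level,
    msg,
    ...fields,
  });
}

function log(level, msg, fields) {
  process.stdout.write(formatLog(level, msg, fields) + '\n');
}

log('info', 'server started', { port: 3000, pid: process.pid });
log('error', 'db connection failed', { host: 'db.internal', attempt: 3 });
```

Grepping a plain-text log for "the third field after the timestamp" is fragile; with JSON lines, `pm2 logs --json | jq 'select(.level == "error")'`-style filtering stays reliable as fields are added.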

9) Security hardening checklist

A deployed application is a target. Every server exposed to the internet receives automated attacks within minutes. These hardening steps are not optional — they are the minimum baseline for any production deployment.

  • Disable root SSH login — edit /etc/ssh/sshd_config and set PermitRootLogin no.
  • Use SSH key authentication only — set PasswordAuthentication no in sshd_config.
  • Keep packages updated — run sudo apt update && sudo apt upgrade weekly.
  • Configure fail2ban to block brute-force SSH attempts automatically.
  • Never expose your Node.js port directly — always proxy through Nginx.
  • Set restrictive file permissions on .env files (chmod 600) and SSH keys (chmod 400).
  • Enable automatic security updates with unattended-upgrades package.
  • Use security groups in AWS to restrict inbound traffic to only necessary ports.
  • Rotate JWT secrets and database credentials periodically.
  • Monitor failed login attempts in /var/log/auth.log.
Install and configure fail2ban:

```bash
# Install fail2ban
sudo apt install -y fail2ban

# Create local configuration
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Edit configuration
sudo nano /etc/fail2ban/jail.local
# Set: bantime = 3600
# Set: findtime = 600
# Set: maxretry = 3

# Enable and start
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Check banned IPs
sudo fail2ban-client status sshd
```

10) Performance optimization with Nginx caching

If your API serves any static assets or has cacheable responses, Nginx can serve them directly without touching Node.js. For full-stack deployments where Nginx also serves your frontend build, add proper cache headers to dramatically reduce server load and improve page load times.

Static file caching (add to your Nginx config):

```nginx
# Serve static frontend build
location / {
    root /var/www/myapp/app/frontend/dist;
    try_files $uri $uri/ /index.html;

    # Cache static assets aggressively
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Don't cache HTML (SPA needs fresh index.html)
    location ~* \.html$ {
        expires -1;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }
}

# Enable Nginx microcaching for API responses (optional).
# Note: proxy_cache_path must go in the http {} context
# (e.g. /etc/nginx/nginx.conf), not inside a server block.
proxy_cache_path /tmp/nginx_cache levels=1:2
                 keys_zone=api_cache:10m max_size=100m
                 inactive=5m use_temp_path=off;

location /api/public {
    proxy_pass http://node_backend;
    proxy_cache api_cache;
    proxy_cache_valid 200 30s;
    proxy_cache_use_stale error timeout updating;
    add_header X-Cache-Status $upstream_cache_status;
}
```

11) Setting up CI/CD with GitHub Actions

Manually SSH-ing to run deploy.sh works, but it does not scale and it introduces human error. GitHub Actions can automate the entire pipeline: run tests on every push, and deploy automatically when code merges to main. Here is a workflow that handles both.

.github/workflows/deploy.yml:

```yaml
name: Test and Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test_db
          POSTGRES_USER: test_user
          POSTGRES_PASSWORD: test_pass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm

      - run: npm ci
      - run: npm test
        env:
          DB_HOST: localhost
          DB_PORT: 5432
          DB_NAME: test_db
          DB_USER: test_user
          DB_PASSWORD: test_pass
          JWT_SECRET: test-secret-key

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest

    steps:
      - name: Deploy to production
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: bash /var/www/myapp/app/deploy.sh
```

The test job spins up a real PostgreSQL database in a Docker container and runs your test suite against it. No mocks, no SQLite substitutes — tests hit the same database engine you use in production. The deploy job only runs after tests pass and only on pushes to main, preventing broken code from reaching production.

12) Complete deployment verification

After deploying, run through this verification checklist to confirm everything is working correctly. Catching problems immediately after deployment is infinitely cheaper than discovering them from user reports.

Post-deployment verification:

```bash
# 1. Check PM2 processes are running
pm2 status

# 2. Check application logs for startup errors
pm2 logs my-api --lines 20 --nostream

# 3. Test health endpoint
curl -s https://yourdomain.com/health | jq

# 4. Test API endpoint
curl -s https://yourdomain.com/api/register \
  -X POST -H "Content-Type: application/json" \
  -d '{"email":"test@test.com","password":"test1234","full_name":"Test"}' \
  | jq

# 5. Check SSL certificate
echo | openssl s_client -connect yourdomain.com:443 2>/dev/null \
  | openssl x509 -noout -dates

# 6. Check Nginx status
sudo systemctl status nginx

# 7. Check server resources
free -h && df -h

# 8. Verify Certbot renewal timer
sudo systemctl status certbot.timer
```

Production deployment architecture overview

Here is how all the pieces fit together in the final architecture. Internet traffic hits your domain, which resolves to your EC2 instance public IP. Nginx listens on ports 80 and 443, terminates SSL, compresses responses, and proxies API requests to your Node.js processes on port 3000. PM2 manages multiple Node.js worker processes in cluster mode, distributing load across all CPU cores. Each worker connects to PostgreSQL through a connection pool. The deployment script orchestrates updates with zero downtime by performing rolling restarts through PM2.

Architecture Diagram:

```text
                Internet (browser)
                    │  HTTPS :443 / HTTP :80
                    ▼
┌──────────────────────────────────────────────┐
│ EC2 Instance                                 │
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │ Nginx (:80 / :443)                     │  │
│  │   • SSL termination                    │  │
│  │   • Gzip compression                   │  │
│  │   • Static file serving                │  │
│  │   • Reverse proxy → 127.0.0.1:3000     │  │
│  └───────────────────┬────────────────────┘  │
│                      │                       │
│  ┌───────────────────▼────────────────────┐  │
│  │ PM2 (cluster mode)                     │  │
│  │                                        │  │
│  │  ┌──────────┐  ┌──────────┐            │  │
│  │  │ Worker 1 │  │ Worker 2 │  …         │  │
│  │  │ (Node.js)│  │ (Node.js)│            │  │
│  │  └────┬─────┘  └────┬─────┘            │  │
│  └───────┼─────────────┼──────────────────┘  │
└──────────┼─────────────┼─────────────────────┘
           │             │
┌──────────▼─────────────▼─────────────────────┐
│ PostgreSQL (RDS or self-hosted)              │
│ Connection pool (max: 20)                    │
└──────────────────────────────────────────────┘
```
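The connection pool at the bottom of the architecture is worth unpacking. A pool caps concurrent database connections and queues extra requests; in a real deployment this would be `pg.Pool` from the node-postgres library, but the core mechanism can be sketched in a few lines (`SimplePool` is an illustrative name, not a real API):

```javascript
// Illustrative pool: at most `max` live connections; callers beyond that
// wait until another caller releases. node-postgres' pg.Pool adds
// validation, timeouts, and error handling on top of this idea.
class SimplePool {
  constructor(createConn, max = 20) {
    this.createConn = createConn;
    this.max = max;
    this.size = 0;       // connections created so far
    this.idle = [];      // released connections ready for reuse
    this.waiting = [];   // resolvers for callers waiting on a connection
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();
    if (this.size < this.max) {
      this.size++;
      return this.createConn();
    }
    // Pool exhausted: wait for a release instead of opening more sockets.
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(conn) {
    const next = this.waiting.shift();
    if (next) next(conn);        // hand straight to a waiting caller
    else this.idle.push(conn);   // or park it for reuse
  }
}
```

The cap matters because each PM2 worker opens its own pool: two workers with max 20 means up to 40 server-side PostgreSQL connections, which must stay under the database's max_connections limit.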

Cost and scaling considerations

This setup runs on a single EC2 instance, which keeps costs low for early-stage and mid-traffic applications. A t3.small instance costs approximately $15 per month. Combined with a free-tier RDS instance or a self-hosted PostgreSQL installation, you can run a production API for under $30 per month.

  • Vertical scaling — upgrade instance size (t3.small → t3.medium → t3.large) for more CPU and RAM. No architecture changes needed.
  • Horizontal scaling — add an Application Load Balancer (ALB) in front of multiple EC2 instances. PM2 cluster mode already handles per-instance concurrency.
  • Database scaling — migrate from self-hosted PostgreSQL to Amazon RDS for automated backups, failover, and read replicas.
  • CDN layer — add CloudFront in front of Nginx to cache static assets at edge locations worldwide, reducing latency for global users.
  • Container migration — when complexity justifies it, containerize with Docker and deploy to ECS or EKS for orchestrated scaling.

Common deployment problems and fixes

These are the deployment issues we encounter most frequently on client projects. Each one has a specific cause and a straightforward fix.

  • EACCES permission denied — your Node.js process is trying to bind to a port below 1024. Use a high port (3000+) and proxy through Nginx instead.
  • PM2 processes show errored status — check pm2 logs for the actual error. Usually a missing environment variable or failed database connection.
  • Nginx 502 Bad Gateway — Node.js is not running or is running on a different port than Nginx expects. Verify with curl localhost:3000/health.
  • SSL certificate renewal failing — port 80 must be accessible for Let's Encrypt HTTP challenge. Check security group and UFW rules.
  • deploy.sh fails at the git step — uncommitted or untracked changes on the server conflict with the update. The script's git fetch plus git reset --hard avoids most of this; keep .env out of the repository so it never conflicts.
  • High memory usage — Node.js memory leak or missing PM2 max_memory_restart. Set a memory limit so PM2 restarts workers before they exhaust RAM.
  • Connection timeouts after deploy — the old process was killed before the new one was ready. Use pm2 reload (not restart) and ensure listen_timeout is set.

Deployment is not a one-time task — it is an operational capability that improves over time. Start with this guide as your foundation, then iterate: add monitoring dashboards, set up alerting, automate database backups, and build runbooks for common incidents. Every improvement reduces the time and stress of shipping code to production.

At Dude Lemon, we handle production deployments for clients who need their applications running reliably without building an internal DevOps team. If you need help deploying your Node.js application or setting up a production infrastructure on AWS, our cloud engineering team is available for a free architecture review.

The best deployment pipeline is the one your team actually trusts enough to use on a Friday afternoon.
