@YourAKShaw
Created November 19, 2025 16:51
Complete Deployment Manual: Netlify (Frontend) + AWS (Backend)

Generic guide for deploying any modern frontend application to Netlify and any backend API to AWS


Table of Contents

  1. Overview
  2. Prerequisites
  3. Architecture Overview
  4. Part 1: AWS Backend Deployment
  5. Part 2: Netlify Frontend Deployment
  6. Database Setup on AWS
  7. Domain Configuration
  8. SSL/HTTPS Setup
  9. Environment Variables Configuration
  10. CORS Configuration
  11. Testing the Deployment
  12. CI/CD Pipeline Setup
  13. Monitoring and Logging
  14. Backup and Disaster Recovery
  15. Performance Optimization
  16. Security Best Practices
  17. Troubleshooting Guide
  18. Cost Optimization Strategies
  19. Scaling Strategies
  20. Migration Checklist

Overview

This comprehensive guide covers deploying any modern web application with a separated architecture:

  • Frontend: Static or server-rendered application deployed to Netlify
  • Backend: RESTful or GraphQL API deployed to AWS
  • Database: Optional database services on AWS
  • Infrastructure: Production-ready setup with SSL, monitoring, and CI/CD

Supported Frameworks

Frontend (Netlify):

  • React (Create React App, Vite)
  • Next.js (Static, SSG, SSR with Edge Functions)
  • Vue.js (Vue CLI, Nuxt.js)
  • Angular
  • Svelte/SvelteKit
  • Static HTML/CSS/JS
  • Gatsby
  • Astro

Backend (AWS):

  • Node.js (Express, NestJS, Koa, Fastify)
  • Python (Flask, Django, FastAPI)
  • Go (Gin, Echo)
  • Ruby (Rails, Sinatra)
  • Java (Spring Boot)
  • .NET Core
  • PHP (Laravel, Symfony)

Why This Architecture?

Benefits:

  • Independent Scaling: Frontend and backend scale separately
  • Global CDN: Netlify provides worldwide content delivery
  • Cost-Effective: Pay only for what you use
  • High Availability: Built-in redundancy and failover
  • Developer Experience: Simple deployment workflows
  • Security: Isolated services, better security boundaries

Use Cases:

  • SaaS applications
  • E-commerce platforms
  • Portfolio/business websites
  • Mobile app backends
  • RESTful/GraphQL APIs
  • Microservices architectures

Prerequisites

Required Accounts

  1. Netlify Account

    • Sign up at netlify.com
    • Free tier available
  2. AWS Account

    • Sign up at aws.amazon.com
    • Credit card required
    • Free tier available for 12 months
    • Enable billing alerts immediately
  3. Git Provider Account

    • GitHub, GitLab, or Bitbucket
    • Repository for your application code
  4. Domain Name (Optional but Recommended)

    • Purchase from a registrar such as GoDaddy, Namecheap, or Cloudflare
    • Can also use AWS Route 53

Required Knowledge

Basic:

  • Git version control
  • Command line/terminal usage
  • Environment variables concept
  • HTTP/HTTPS basics
  • DNS basics

Intermediate:

  • Your chosen frontend framework
  • Your chosen backend framework
  • RESTful API concepts
  • Database basics (if using)

Advanced (Optional):

  • Docker containerization
  • Infrastructure as Code
  • AWS IAM and security
  • CI/CD pipelines

Required Tools

Install these tools on your local machine:

# Git (Version Control)
# Download from: https://git-scm.com/downloads
git --version

# Node.js and npm (Even if not using Node backend)
# Download from: https://nodejs.org/
node --version  # v18+ recommended
npm --version

# AWS CLI (Optional but recommended)
# Download from: https://aws.amazon.com/cli/
aws --version

# Docker (Optional, for containerized deployments)
# Download from: https://www.docker.com/
docker --version

# Your framework-specific CLI tools
# Examples:
npm install -g @angular/cli      # Angular
npm install -g create-react-app  # React (note: CRA is deprecated; consider Vite)
npm install -g @vue/cli          # Vue
pip install awsebcli             # Elastic Beanstalk

Required Permissions

AWS IAM Permissions:

  • EC2 (if using Option A or B)
  • Elastic Beanstalk (if using Option B)
  • Lambda, API Gateway (if using Option C)
  • ECS, ECR (if using Option D)
  • RDS (if using database)
  • S3 (for file storage)
  • CloudWatch (for logging)
  • IAM (for creating roles)
  • VPC (for networking)

Recommendation: Create an IAM user with AdministratorAccess for initial setup, then restrict permissions later.


Architecture Overview

High-Level Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         End Users                                │
└──────────────────────┬──────────────────────────────────────────┘
                       │
                       ▼
        ┌──────────────────────────────────┐
        │      DNS (Route 53/GoDaddy)     │
        └──────────────┬───────────────────┘
                       │
                       ├────────────────────┬──────────────────────┐
                       │                    │                      │
                       ▼                    ▼                      ▼
         ┏━━━━━━━━━━━━━━━━━━━━┓  ┏━━━━━━━━━━━━━━━━━━━┓   ┏━━━━━━━━━━━━━┓
         ┃  Netlify CDN      ┃  ┃   AWS Backend     ┃   ┃  Database   ┃
         ┃  (Frontend)       ┃  ┃   (API Server)    ┃   ┃  (AWS RDS)  ┃
         ┗━━━━━━━━━━━━━━━━━━━━┛  ┗━━━━━━━━━━━━━━━━━━━┛   ┗━━━━━━━━━━━━━┛
                       │                    │                      │
                       │    HTTPS API       │      Database        │
                       │    Requests        │      Connection      │
                       └────────────────────┴──────────────────────┘

Detailed Architecture Diagram

┌────────────────────────────────────────────────────────────────────┐
│                          USER BROWSER                               │
└───────────────────────────┬────────────────────────────────────────┘
                            │ HTTPS
                            ▼
┌────────────────────────────────────────────────────────────────────┐
│                      NETLIFY CDN (Global)                           │
│  ┌──────────────────────────────────────────────────────────────┐  │
│  │                   Frontend Application                        │  │
│  │  • Static Assets (HTML, CSS, JS, Images)                     │  │
│  │  • Build Output (Webpack/Vite/etc.)                          │  │
│  │  • Edge Functions (Optional)                                 │  │
│  │  • Form Handlers (Optional)                                  │  │
│  │  • Serverless Functions (Optional)                           │  │
│  └──────────────────────────────────────────────────────────────┘  │
│                                                                      │
│  Features:                                                           │
│  • Automatic SSL/TLS                                                │
│  • Global CDN (200+ locations)                                      │
│  • Instant rollback                                                 │
│  • Deploy previews                                                  │
│  • Branch deploys                                                   │
└───────────────────────────┬────────────────────────────────────────┘
                            │ HTTPS API Calls
                            ▼
┌────────────────────────────────────────────────────────────────────┐
│                      AWS CLOUD (Region)                             │
│                                                                      │
│  ┌─────────────────────────────────────────────────────────────┐   │
│  │              Application Load Balancer (Optional)            │   │
│  │  • SSL Termination                                           │   │
│  │  • Health Checks                                             │   │
│  │  • Traffic Distribution                                      │   │
│  └────────────────────┬────────────────────────────────────────┘   │
│                       │                                              │
│  ┌────────────────────┴────────────────────────────────────────┐   │
│  │                    Backend Application                       │   │
│  │                                                              │   │
│  │  Option A: EC2 Instances                                    │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • Ubuntu/Amazon Linux Server               │             │   │
│  │  │ • PM2/Systemd Process Manager              │             │   │
│  │  │ • Nginx Reverse Proxy                      │             │   │
│  │  │ • Application Code                         │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  │                                                              │   │
│  │  Option B: Elastic Beanstalk                                │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • Managed EC2 Instances                    │             │   │
│  │  │ • Auto Scaling Groups                      │             │   │
│  │  │ • Load Balancer                            │             │   │
│  │  │ • Monitoring & Health Checks               │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  │                                                              │   │
│  │  Option C: Lambda Functions                                 │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • Serverless Functions                     │             │   │
│  │  │ • API Gateway Integration                  │             │   │
│  │  │ • Auto Scaling                             │             │   │
│  │  │ • Pay-per-Request                          │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  │                                                              │   │
│  │  Option D: ECS Fargate                                      │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • Containerized Application                │             │   │
│  │  │ • Docker Images (ECR)                      │             │   │
│  │  │ • Task Definitions                         │             │   │
│  │  │ • Service Auto Scaling                     │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  └──────────────────────┬──────────────────────────────────────┘   │
│                         │                                            │
│  ┌──────────────────────┴──────────────────────────────────────┐   │
│  │                  Data Layer                                  │   │
│  │                                                              │   │
│  │  Database Options:                                          │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • RDS (PostgreSQL, MySQL, etc.)            │             │   │
│  │  │ • DynamoDB (NoSQL)                         │             │   │
│  │  │ • DocumentDB (MongoDB compatible)          │             │   │
│  │  │ • ElastiCache (Redis/Memcached)            │             │   │
│  │  │ • Aurora (Serverless SQL)                  │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  │                                                              │   │
│  │  Storage Options:                                           │   │
│  │  ┌────────────────────────────────────────────┐             │   │
│  │  │ • S3 (Object Storage)                      │             │   │
│  │  │ • EFS (File System)                        │             │   │
│  │  │ • EBS (Block Storage)                      │             │   │
│  │  └────────────────────────────────────────────┘             │   │
│  └──────────────────────────────────────────────────────────────┘   │
│                                                                      │
│  ┌──────────────────────────────────────────────────────────────┐   │
│  │                Supporting Services                            │   │
│  │  • CloudWatch (Logs & Metrics)                               │   │
│  │  • CloudFront (Optional CDN for API)                         │   │
│  │  • SES (Email Service)                                       │   │
│  │  • SNS/SQS (Messaging)                                       │   │
│  │  • Secrets Manager (Credentials)                             │   │
│  │  • WAF (Web Application Firewall)                            │   │
│  └──────────────────────────────────────────────────────────────┘   │
└────────────────────────────────────────────────────────────────────┘

Data Flow

  1. User Request

    • User accesses https://yourdomain.com
    • DNS resolves to Netlify CDN
  2. Frontend Delivery

    • Netlify serves static assets from nearest edge location
    • Assets cached globally for fast delivery
    • User interacts with frontend application
  3. API Communication

    • Frontend makes HTTPS requests to https://api.yourdomain.com
    • Requests routed to AWS backend
    • Backend processes request
  4. Data Operations

    • Backend queries database (if needed)
    • Backend processes business logic
    • Response sent back to frontend
  5. Response to User

    • Frontend receives data
    • UI updates
    • User sees results
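The frontend half of this flow usually centralizes the API base URL in one helper so that switching backends is a one-line change. A minimal sketch, assuming the base URL is hard-coded here for illustration (in a real app it would come from a build-time environment variable):

```javascript
// Build request URLs against the backend API from one place.
// API_BASE is a placeholder; real apps read it from a build-time env var.
const API_BASE = 'https://api.yourdomain.com';

function apiUrl(path, params = {}) {
  const url = new URL(path, API_BASE);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}
```

For example, `apiUrl('/users', { page: 2 })` yields `https://api.yourdomain.com/users?page=2`, which the frontend would then fetch over HTTPS as described above.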

Part 1: AWS Backend Deployment

Choose one of the following deployment options based on your requirements:

Decision Matrix: Which AWS Option to Choose?

| Criteria | EC2 | Elastic Beanstalk | Lambda | ECS Fargate |
|---|---|---|---|---|
| Best For | Full control | Easy management | Event-driven | Containers |
| Complexity | High | Medium | Low-Medium | High |
| Scalability | Manual/Auto | Auto | Automatic | Auto |
| Cost (Low Traffic) | $$$ | $$$ | $ | $$$ |
| Cost (High Traffic) | $$ | $$ | $$$ | $$ |
| Setup Time | 2-4 hours | 1-2 hours | 1-2 hours | 3-5 hours |
| Maintenance | High | Low | Minimal | Medium |
| Cold Starts | None | None | Yes | Minimal |
| Request Timeout | Unlimited | Unlimited | 15 min | Unlimited |
| Language Support | All | Most | All | All |
| Learning Curve | Medium | Low | Medium | High |

Recommendations:

  • Choose EC2 if:

    • You need full control over server
    • Running long-running processes
    • Need custom software/configurations
    • Budget allows dedicated server
    • Want to minimize costs for high traffic
  • Choose Elastic Beanstalk if:

    • You want easy deployment
    • Need auto-scaling without complexity
    • Using supported platforms (Node, Python, etc.)
    • Want AWS to manage infrastructure
    • Team lacks DevOps expertise
  • Choose Lambda if:

    • Building API with sporadic traffic
    • Want pay-per-request pricing
    • Need automatic scaling
    • Requests complete in < 15 minutes
    • Building microservices
  • Choose ECS Fargate if:

    • Already using Docker
    • Need container orchestration
    • Want serverless containers
    • Running microservices
    • Need complex deployment requirements

Option A: AWS EC2 (Virtual Server)

Best for: Full control, predictable traffic, custom configurations

A1. Launch EC2 Instance

A1.1. Access AWS Console

  1. Log in to AWS Console
  2. Select your preferred region (top-right corner)
    • Recommendation: Use region closest to your users
    • Popular: us-east-1 (N. Virginia), eu-west-1 (Ireland), ap-south-1 (Mumbai)
  3. Navigate to EC2 service (search bar or Services menu)

A1.2. Launch Instance

Click "Launch Instance" button and configure:

1. Name and Tags:

Name: my-backend-server
Tags (Optional):
  Environment: production
  Application: my-app-backend
  ManagedBy: manual

2. Application and OS Images (AMI):

Choose operating system:

  • Ubuntu Server 22.04 LTS (Recommended for most)

    • Free tier eligible
    • Large community support
    • Easy package management
  • Amazon Linux 2023 (AWS-optimized)

    • Optimized for AWS
    • Pre-installed AWS tools
    • Long-term support
  • Other Options:

    • Debian 11/12
    • CentOS Stream
    • Red Hat Enterprise Linux
    • Windows Server (for .NET apps)

3. Instance Type:

| Type | vCPUs | RAM | Use Case | Monthly Cost* |
|---|---|---|---|---|
| t2.micro | 1 | 1 GB | Free tier, dev/test | $0 (first year), then ~$8 |
| t3.micro | 2 | 1 GB | Small apps, low traffic | ~$7 |
| t3.small | 2 | 2 GB | Production (small) | ~$15 |
| t3.medium | 2 | 4 GB | Production (medium) | ~$30 |
| t3.large | 2 | 8 GB | Production (high traffic) | ~$60 |
| c5.large | 2 | 4 GB | CPU-intensive | ~$62 |
| r5.large | 2 | 16 GB | Memory-intensive | ~$96 |

*Prices approximate for us-east-1 region

Recommendation: Start with t3.small for production; you can upgrade later.

4. Key Pair (Login):

  • Click "Create new key pair"
  • Name: my-backend-key
  • Key pair type: RSA
  • Private key file format: .pem (for Linux/Mac) or .ppk (for Windows PuTTY)
  • Click "Create key pair" - Downloads automatically
  • ⚠️ CRITICAL: Save this file securely! Cannot download again

On Linux/Mac, secure the key:

chmod 400 ~/Downloads/my-backend-key.pem
mv ~/Downloads/my-backend-key.pem ~/.ssh/

5. Network Settings:

Click "Edit" to customize:

VPC: Default (or create new)
Subnet: No preference (or choose specific)
Auto-assign public IP: Enable

Security Group:
  Name: my-backend-sg
  Description: Security group for my backend server

Inbound Rules:
  1. SSH
     - Type: SSH
     - Protocol: TCP
     - Port: 22
     - Source: My IP (your current IP)
     - Description: SSH access from my location

  2. HTTP
     - Type: HTTP
     - Protocol: TCP
     - Port: 80
     - Source: 0.0.0.0/0, ::/0
     - Description: Public HTTP access

  3. HTTPS
     - Type: HTTPS
     - Protocol: TCP
     - Port: 443
     - Source: 0.0.0.0/0, ::/0
     - Description: Public HTTPS access

  4. Custom Application Port (if needed)
     - Type: Custom TCP
     - Protocol: TCP
     - Port: 3000 (or your app port)
     - Source: 0.0.0.0/0 or My IP (for testing)
     - Description: Application port

⚠️ Security Notes:

  • Restrict SSH to your IP only
  • Change SSH port from 22 (optional security measure)
  • Never use 0.0.0.0/0 for SSH in production
  • Use VPN for SSH access in production environments

6. Configure Storage:

Volume 1 (Root):
  Size: 20-30 GB (minimum)
  Volume Type: gp3 (General Purpose SSD)
  IOPS: 3000 (default)
  Throughput: 125 MB/s (default)
  Delete on Termination: Yes (for dev), No (for production)
  Encrypted: Yes (recommended for production)

Storage Guidelines:

  • Development: 20 GB sufficient
  • Production: 30-50 GB recommended
  • Database on same server: Add 100+ GB
  • File uploads: Consider separate EBS volume

7. Advanced Details (Optional but Recommended):

User Data (Bootstrap script - runs on first launch):

For Ubuntu with Node.js backend:

#!/bin/bash
# Update system
apt update && apt upgrade -y

# Install Node.js 20.x
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs

# Install essential tools
apt install -y git nginx certbot python3-certbot-nginx

# Install PM2 globally
npm install -g pm2

# Create application directory
mkdir -p /var/www/backend
chown ubuntu:ubuntu /var/www/backend

# Configure firewall
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw --force enable

echo "Setup complete!" > /var/log/user-data.log

For Python backend:

#!/bin/bash
apt update && apt upgrade -y
apt install -y python3 python3-pip python3-venv git nginx
pip3 install gunicorn
mkdir -p /var/www/backend
chown ubuntu:ubuntu /var/www/backend

IAM Instance Profile (for AWS service access):

  • Create IAM role with policies:
    • AmazonS3ReadOnlyAccess (if accessing S3)
    • CloudWatchAgentServerPolicy (for monitoring)
    • AmazonSSMManagedInstanceCore (for Systems Manager)

8. Summary:

Review all settings, then click "Launch Instance"

Wait 2-5 minutes for instance to start. Status should show "Running" with 2/2 status checks passed.

A1.3. Allocate Elastic IP (Recommended)

Elastic IP ensures your backend URL doesn't change when instance stops/restarts.

  1. In EC2 Console, go to "Elastic IPs" (left sidebar under Network & Security)
  2. Click "Allocate Elastic IP address"
  3. Settings:
    Network Border Group: Use default
    Public IPv4 address pool: Amazon's pool of IPv4 addresses
    Tags (Optional):
      Name: my-backend-eip
      Environment: production
    
  4. Click "Allocate"
  5. Select the new Elastic IP
  6. Actions → "Associate Elastic IP address"
  7. Settings:
    Resource type: Instance
    Instance: Select your instance (my-backend-server)
    Private IP address: (Auto-selected)
    
  8. Click "Associate"

Note: Historically, Elastic IPs were free while associated with a running instance. Since February 2024, AWS charges ~$0.005/hour for every public IPv4 address, including Elastic IPs, whether associated or not.

Record your Elastic IP (e.g., 54.123.45.67) - this is your backend URL.

A2. Connect to EC2 Instance

A2.1. Connection Methods

Method 1: SSH (Linux/Mac)

# Connect using your key file
ssh -i ~/.ssh/my-backend-key.pem ubuntu@YOUR_ELASTIC_IP

# Example:
ssh -i ~/.ssh/my-backend-key.pem ubuntu@54.123.45.67

Method 2: SSH (Windows - PowerShell)

# Connect using key file
ssh -i C:\Users\YourName\.ssh\my-backend-key.pem ubuntu@YOUR_ELASTIC_IP

Method 3: PuTTY (Windows)

  1. Convert .pem to .ppk using PuTTYgen
  2. Open PuTTY
  3. Enter host: ubuntu@YOUR_ELASTIC_IP
  4. Connection → SSH → Auth → Browse for .ppk file
  5. Click "Open"

Method 4: EC2 Instance Connect (Browser-based)

  1. In EC2 Console, select your instance
  2. Click "Connect" button
  3. Choose "EC2 Instance Connect" tab
  4. Click "Connect"
  5. Browser terminal opens

Default Usernames by AMI:

  • Ubuntu: ubuntu
  • Amazon Linux: ec2-user
  • CentOS: centos
  • Debian: admin
  • RHEL: ec2-user

A2.2. First-Time Connection

Upon first connection:

# You'll see warning about host authenticity
The authenticity of host 'X.X.X.X (X.X.X.X)' can't be established.
ECDSA key fingerprint is SHA256:xxxxx.
Are you sure you want to continue connecting (yes/no)?

# Type: yes

# You're now connected!
ubuntu@ip-172-31-XX-XX:~$

A3. Server Setup and Configuration

A3.1. Update System

# Update package lists
sudo apt update

# Upgrade installed packages
sudo apt upgrade -y

# Install essential build tools
sudo apt install -y build-essential curl wget git unzip

A3.2. Install Runtime Environment

For Node.js Backend:

# Install Node.js 20.x (LTS)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Verify installation
node --version   # Should show v20.x.x
npm --version    # Should show 10.x.x

# Install Yarn (optional)
sudo npm install -g yarn

# Install PM2 (Process Manager)
sudo npm install -g pm2
pm2 --version

For Python Backend:

# Install Python 3.11
sudo apt install -y python3.11 python3.11-venv python3-pip

# Verify installation
python3 --version
pip3 --version

# Install virtualenv
sudo pip3 install virtualenv

# Install production server
sudo pip3 install gunicorn
gunicorn --version

For Go Backend:

# Download and install Go
cd /tmp
wget https://go.dev/dl/go1.21.5.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz

# Add to PATH
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc

# Verify
go version

For Java Backend:

# Install Java JDK 17
sudo apt install -y openjdk-17-jdk

# Verify
java -version
javac -version

For PHP Backend:

# Install PHP 8.2
sudo apt install -y php8.2 php8.2-fpm php8.2-cli php8.2-common php8.2-mysql php8.2-zip php8.2-gd php8.2-mbstring php8.2-curl php8.2-xml php8.2-bcmath

# Verify
php -v

# Install Composer
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
composer --version

A3.3. Install Nginx (Reverse Proxy)

# Install Nginx
sudo apt install -y nginx

# Verify installation
nginx -v

# Start Nginx
sudo systemctl start nginx
sudo systemctl enable nginx

# Check status
sudo systemctl status nginx

Test: Visit http://YOUR_ELASTIC_IP in browser - you should see Nginx welcome page.

A3.4. Install SSL Certificate Tools

# Install Certbot (Let's Encrypt)
sudo apt install -y certbot python3-certbot-nginx

# Verify installation
certbot --version

A3.5. Configure Firewall (UFW)

# Check firewall status
sudo ufw status

# Allow essential services
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'

# Enable firewall
sudo ufw enable

# Verify rules
sudo ufw status verbose

Output should show:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)

A4. Deploy Backend Application

A4.1. Create Application Directory

# Create directory structure
sudo mkdir -p /var/www/backend
sudo chown -R $USER:$USER /var/www/backend
cd /var/www/backend

A4.2. Clone Repository (Option 1: Git)

# Clone your repository
git clone https://github.com/your-username/your-backend-repo.git .

# Or if using private repository, set up SSH key first:
ssh-keygen -t ed25519 -C "your-email@example.com"
cat ~/.ssh/id_ed25519.pub  # Add this to GitHub SSH keys

# Then clone
git clone git@github.com:your-username/your-backend-repo.git .

A4.3. Deploy via SCP/SFTP (Option 2: Manual Upload)

From your local machine:

# Upload using SCP
scp -i ~/.ssh/my-backend-key.pem -r /path/to/backend ubuntu@YOUR_ELASTIC_IP:/var/www/backend

# Or use SFTP
sftp -i ~/.ssh/my-backend-key.pem ubuntu@YOUR_ELASTIC_IP
sftp> cd /var/www/backend
sftp> put -r /path/to/backend/*
sftp> exit

A4.4. Install Dependencies and Build

Node.js:

cd /var/www/backend

# Install production dependencies
npm install --omit=dev   # (--production on older npm versions)

# Or if you need dev dependencies for build:
npm install
npm run build  # If using TypeScript or build step
npm prune --production  # Remove dev dependencies after build

# For TypeScript projects:
npm install -g typescript
tsc  # Compile TypeScript

Python:

cd /var/www/backend

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Deactivate (when done)
deactivate

Go:

cd /var/www/backend

# Install dependencies
go mod download

# Build binary
go build -o app main.go

# Or build optimized:
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

A5. Configure Environment Variables

A5.1. Create .env File

# Navigate to app directory
cd /var/www/backend

# Create .env file
nano .env

Add your environment variables (example):

# Application
NODE_ENV=production
PORT=3000
APP_NAME=MyBackendAPI
APP_VERSION=1.0.0

# Database
DB_HOST=your-rds-endpoint.amazonaws.com
DB_PORT=5432
DB_NAME=myapp_prod
DB_USERNAME=dbadmin
DB_PASSWORD=your_secure_password_here

# Redis/Cache
REDIS_HOST=your-redis-endpoint.amazonaws.com
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password

# Authentication
JWT_SECRET=your_super_secret_jwt_key_here_min_32_chars
JWT_EXPIRES_IN=7d
SESSION_SECRET=your_session_secret_here

# Email Service
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-address@gmail.com
SMTP_PASSWORD=your_app_password
EMAIL_FROM=noreply@yourdomain.com

# AWS Services
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET=my-app-uploads

# Frontend URL (for CORS)
FRONTEND_URL=https://yourdomain.com
CORS_ORIGINS=https://yourdomain.com,https://www.yourdomain.com

# API Configuration
API_RATE_LIMIT=100
API_TIMEOUT=30000

# Logging
LOG_LEVEL=info
LOG_FILE=/var/log/backend/app.log

# Third-party APIs
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
GOOGLE_CLIENT_ID=your_google_client_id
GOOGLE_CLIENT_SECRET=your_google_client_secret

Save file (Ctrl+X, Y, Enter)
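The `CORS_ORIGINS` value above is typically split into an allowlist once at startup, then checked against each request's `Origin` header. A sketch of that pattern (function names are illustrative, not from any specific framework):

```javascript
// Turn the comma-separated CORS_ORIGINS env value into an allowlist.
function parseOrigins(raw) {
  return (raw || '')
    .split(',')
    .map((origin) => origin.trim())
    .filter(Boolean);
}

// Exact-match check of a request's Origin header against the allowlist.
function isAllowedOrigin(origin, allowlist) {
  return allowlist.includes(origin);
}
```

With `CORS_ORIGINS=https://yourdomain.com,https://www.yourdomain.com`, only those two origins would pass the check; most CORS middleware accepts exactly this kind of array.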

A5.2. Secure .env File

# Set proper permissions
chmod 600 .env

# Ensure only your user can read
ls -la .env
# Should show: -rw------- 1 ubuntu ubuntu

A5.3. Environment-Specific Configurations

For multiple environments:

# Create environment-specific files
.env.production
.env.staging
.env.development

# Load appropriate file in your app
# Node.js example using dotenv:
require('dotenv').config({ path: `.env.${process.env.NODE_ENV}` });
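However the variables are loaded, it is worth failing fast at startup if a required one is missing, rather than crashing later on an undefined `DB_HOST`. A minimal sketch (the variable list is an example; adjust it to your application):

```javascript
// Throw at boot if any required environment variable is missing or empty.
function assertEnv(required, env = process.env) {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// Called once at startup, e.g.:
// assertEnv(['NODE_ENV', 'PORT', 'DB_HOST', 'DB_PASSWORD', 'JWT_SECRET']);
```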

A6. Configure Process Manager

A6.1. PM2 (Node.js Applications)

Create PM2 Ecosystem File:

cd /var/www/backend
nano ecosystem.config.js

Add configuration:

module.exports = {
  apps: [{
    name: 'backend-api',
    script: 'dist/main.js',  // Or 'server.js', 'app.js', etc.
    instances: 2,  // Number of instances (or 'max' for all CPU cores)
    exec_mode: 'cluster',  // 'cluster' or 'fork'
    max_memory_restart: '500M',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    error_file: '/var/log/backend/error.log',
    out_file: '/var/log/backend/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
    autorestart: true,
    watch: false,
    max_restarts: 10,
    min_uptime: '10s'
  }]
};

Create log directory:

sudo mkdir -p /var/log/backend
sudo chown ubuntu:ubuntu /var/log/backend

Start application with PM2:

# Start using ecosystem file
pm2 start ecosystem.config.js

# Or start directly
pm2 start dist/main.js --name backend-api -i 2

# View status
pm2 status

# View logs
pm2 logs backend-api

# Monitor
pm2 monit

Setup PM2 Startup:

# Generate startup script
pm2 startup systemd

# This outputs a command - copy and run it
# Example: sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u ubuntu --hp /home/ubuntu

# Save current PM2 process list
pm2 save

# Verify auto-start
sudo systemctl status pm2-ubuntu

Useful PM2 Commands:

# Restart app
pm2 restart backend-api

# Stop app
pm2 stop backend-api

# Delete app from PM2
pm2 delete backend-api

# Reload (zero-downtime restart)
pm2 reload backend-api

# View detailed info
pm2 show backend-api

# Clear logs
pm2 flush

A6.2. Systemd (Python/Go/Generic)

Create systemd service file:

sudo nano /etc/systemd/system/backend.service

For Python (Gunicorn):

[Unit]
Description=Backend API Server
After=network.target

[Service]
Type=notify
User=ubuntu
Group=ubuntu
WorkingDirectory=/var/www/backend
Environment="PATH=/var/www/backend/venv/bin"
EnvironmentFile=/var/www/backend/.env
ExecStart=/var/www/backend/venv/bin/gunicorn \
    --workers 4 \
    --bind 127.0.0.1:3000 \
    --timeout 120 \
    --access-logfile /var/log/backend/access.log \
    --error-logfile /var/log/backend/error.log \
    wsgi:app

Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

For Go:

[Unit]
Description=Backend API Server
After=network.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/var/www/backend
EnvironmentFile=/var/www/backend/.env
ExecStart=/var/www/backend/app
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start service:

# Reload systemd
sudo systemctl daemon-reload

# Enable service (start on boot)
sudo systemctl enable backend

# Start service
sudo systemctl start backend

# Check status
sudo systemctl status backend

# View logs
sudo journalctl -u backend -f

Useful systemd commands:

# Restart service
sudo systemctl restart backend

# Stop service
sudo systemctl stop backend

# View logs (last 100 lines)
sudo journalctl -u backend -n 100

# View logs (follow)
sudo journalctl -u backend -f

# Clear old logs
sudo journalctl --vacuum-time=7d

A7. Configure Nginx Reverse Proxy

A7.1. Create Nginx Site Configuration

# Create configuration file
sudo nano /etc/nginx/sites-available/backend

Basic Configuration:

# Upstream backend servers
upstream backend_servers {
    least_conn;  # Load balancing method
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    # Add more servers if running multiple instances:
    # server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

# HTTP Server (Port 80)
server {
    listen 80;
    listen [::]:80;
    server_name api.yourdomain.com;  # Replace with your domain

    # Redirect all HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

# HTTPS Server (Port 443) - Will be configured after SSL setup
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.yourdomain.com;  # Replace with your domain

    # SSL certificates (will be added by Certbot)
    # ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

    # Client body size limit (for file uploads)
    client_max_body_size 10M;

    # Logging
    access_log /var/log/nginx/backend_access.log;
    error_log /var/log/nginx/backend_error.log;

    # Root location - proxy to backend
    location / {
        # Proxy headers
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # An empty Connection header keeps upstream keepalive working;
        # WebSocket upgrades are handled by the /ws location below
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        
        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        
        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
        
        # Cache bypass
        proxy_cache_bypass $http_upgrade;
    }

    # Health check endpoint (no logging)
    location /health {
        proxy_pass http://backend_servers/health;
        access_log off;
    }

    # WebSocket support (if needed)
    location /ws {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 86400;  # 24 hours
    }

    # Static files (if serving from backend)
    location /static/ {
        alias /var/www/backend/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Deny access to sensitive files
    location ~ /\.env {
        deny all;
        return 404;
    }
}
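
If you serve both plain API traffic and WebSockets, an http-level map (placed, for example, in a file under /etc/nginx/conf.d/) lets a single Connection header value do the right thing for both location blocks — upgrade when the client asks for it, empty otherwise so upstream keepalive still works. A sketch:

```nginx
# Translates the client's Upgrade header into the right Connection value
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      '';
}

# Then, in the proxying location blocks:
#   proxy_set_header Upgrade $http_upgrade;
#   proxy_set_header Connection $connection_upgrade;
```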

For IP-based access (development/testing):

server {
    listen 80;
    server_name YOUR_ELASTIC_IP;  # Or underscore _ for any IP

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

A7.2. Enable Site and Test Configuration

# Create symbolic link to enable site
sudo ln -s /etc/nginx/sites-available/backend /etc/nginx/sites-enabled/

# Remove default site (optional)
sudo rm /etc/nginx/sites-enabled/default

# Test Nginx configuration
sudo nginx -t

# If test passes, reload Nginx
sudo systemctl reload nginx

# Check Nginx status
sudo systemctl status nginx

Expected output from nginx -t:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

A7.3. Configure Nginx Performance (Optional)

Edit main Nginx config:

sudo nano /etc/nginx/nginx.conf

Optimize these settings:

user www-data;
worker_processes auto;  # Auto-detect CPU cores
worker_rlimit_nofile 65535;

events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

http {
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;  # Hide Nginx version

    # Buffer sizes
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 10M;
    large_client_header_buffers 4 8k;  # large enough for JWT/cookie-heavy requests

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript 
               application/json application/javascript application/xml+rss 
               application/rss+xml font/truetype font/opentype 
               application/vnd.ms-fontobject image/svg+xml;
    gzip_disable "msie6";

    # Rate limiting (optional)
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_status 429;

    # Include site configurations
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Test and reload:

sudo nginx -t
sudo systemctl reload nginx
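
Note that the limit_req_zone directive above only allocates the shared-memory zone; nothing is actually throttled until a location block references it. A sketch of applying it to the API proxy location (the burst value is an assumption — tune it to your traffic):

```nginx
location / {
    # Allow short bursts of 20 requests beyond the 10 r/s rate without
    # queueing delay; requests beyond the burst get HTTP 429
    limit_req zone=api_limit burst=20 nodelay;

    proxy_pass http://backend_servers;
}
```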

A8. SSL Certificate Setup (Let's Encrypt)

A8.1. Prerequisites

Before obtaining SSL certificate:

  1. Domain DNS must be configured (see Domain Configuration section)
  2. Nginx must be running and serving site on port 80
  3. Firewall must allow ports 80 and 443

Verify domain resolves:

nslookup api.yourdomain.com
# Should return your Elastic IP

ping api.yourdomain.com
# Should ping your server

A8.2. Obtain SSL Certificate

Automatic method (recommended):

# Run Certbot with Nginx plugin
sudo certbot --nginx -d api.yourdomain.com

# For multiple domains/subdomains:
sudo certbot --nginx -d api.yourdomain.com -d www.api.yourdomain.com

Follow the prompts:

Enter email address (for urgent renewal and security notices): [email protected]
Agree to Terms of Service: Yes (A)
Share email with EFF: No (N)
Redirect HTTP to HTTPS: Yes (2)  # Recommended

Certbot will:

  1. Obtain certificate from Let's Encrypt
  2. Modify your Nginx configuration
  3. Enable HTTPS
  4. Set up automatic renewal

Manual method:

# Obtain certificate only (no auto-configuration)
sudo certbot certonly --nginx -d api.yourdomain.com

# Certificate files will be saved to:
# /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem
# /etc/letsencrypt/live/api.yourdomain.com/privkey.pem

Then manually update Nginx config with SSL settings (already in template above).

A8.3. Verify SSL Certificate

# Check certificate details
sudo certbot certificates

# Test SSL configuration
curl https://api.yourdomain.com/health

# Test from browser or SSL checker
# https://www.ssllabs.com/ssltest/

A8.4. Setup Automatic Renewal

Certbot installs a systemd timer for automatic renewal:

# Check renewal timer status
sudo systemctl status certbot.timer

# Test renewal process (dry run)
sudo certbot renew --dry-run

# Manual renewal (if needed)
sudo certbot renew

# View renewal logs
sudo cat /var/log/letsencrypt/letsencrypt.log

Certificates auto-renew when they have 30 days or less before expiration.
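
If you want to verify expiry from a script (for a cron alert, say), openssl can read the dates straight off a certificate. A sketch — demonstrated on a throwaway self-signed cert so the commands run anywhere; on the server, point it at the Let's Encrypt fullchain.pem or at the live endpoint instead:

```shell
# Generate a throwaway cert purely so the example is runnable
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 30 -subj "/CN=demo" 2>/dev/null

# Print the expiry date (works the same on fullchain.pem)
openssl x509 -enddate -noout -in /tmp/demo.crt

# For the certificate actually being served:
# echo | openssl s_client -connect api.yourdomain.com:443 2>/dev/null \
#   | openssl x509 -enddate -noout
```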

A9. Verify Deployment

A9.1. Test Backend Directly

# Test from server
curl http://localhost:3000/health
curl http://127.0.0.1:3000/health

# Test through Nginx
curl http://YOUR_ELASTIC_IP/health

# Test with HTTPS (if SSL configured)
curl https://api.yourdomain.com/health

A9.2. Test from External Location

From your local machine:

# HTTP (should redirect to HTTPS)
curl http://api.yourdomain.com/health

# HTTPS
curl https://api.yourdomain.com/health

# Test specific endpoint
curl https://api.yourdomain.com/api/users

# POST request
curl -X POST https://api.yourdomain.com/api/login \
  -H "Content-Type: application/json" \
  -d '{"email":"[email protected]","password":"password"}'
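
For repeatable smoke tests, it helps to wrap curl so an unexpected status code fails loudly instead of scrolling past. A small sketch (the commented example URLs are placeholders — substitute your own endpoints):

```shell
# Fail (non-zero exit) unless the URL returns the expected HTTP status
expect_status() {
  local url="$1" want="$2"
  got=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$got" = "$want" ]; then
    echo "OK   $url -> $got"
  else
    echo "FAIL $url -> $got (wanted $want)" >&2
    return 1
  fi
}

# expect_status https://api.yourdomain.com/health 200
# expect_status https://api.yourdomain.com/api/private 401
```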

A9.3. Monitor Logs

# PM2 logs (Node.js)
pm2 logs backend-api

# Systemd logs (Python/Go)
sudo journalctl -u backend -f

# Nginx access logs
sudo tail -f /var/log/nginx/backend_access.log

# Nginx error logs
sudo tail -f /var/log/nginx/backend_error.log

# Application logs (if using file logging)
tail -f /var/log/backend/app.log
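
When reviewing the access log, a quick way to spot noisy or abusive clients is to tally requests per IP. A sketch — the sample lines below stand in for /var/log/nginx/backend_access.log so the pipeline is runnable as-is:

```shell
# Create a few sample combined-format log lines
cat > /tmp/sample_access.log <<'EOF'
203.0.113.5 - - [19/Nov/2025:16:51:00 +0000] "GET /health HTTP/1.1" 200 15
203.0.113.5 - - [19/Nov/2025:16:51:01 +0000] "GET /api/users HTTP/1.1" 200 512
198.51.100.7 - - [19/Nov/2025:16:51:02 +0000] "POST /api/login HTTP/1.1" 401 48
EOF

# Tally requests per client IP (field 1), highest first
awk '{print $1}' /tmp/sample_access.log | sort | uniq -c | sort -rn

# On the server:
# awk '{print $1}' /var/log/nginx/backend_access.log | sort | uniq -c | sort -rn
```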

A9.4. Check Resource Usage

# CPU and Memory
htop  # or top

# Disk usage
df -h

# Network connections
sudo netstat -tuln

# Process info (Node.js)
pm2 status
pm2 monit

# Process info (systemd)
sudo systemctl status backend

Option B: AWS Elastic Beanstalk (Managed Platform)

Best for: Easy deployment, automatic scaling, managed infrastructure

B1. Install Elastic Beanstalk CLI

On Linux/Mac:

# Using pip
pip3 install awsebcli --upgrade --user

# Add to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH=$PATH:~/.local/bin

# Reload shell
source ~/.bashrc  # or source ~/.zshrc

# Verify installation
eb --version

On Windows:

# Using pip
pip install awsebcli --upgrade --user

# Add to PATH (via System Environment Variables)
# or use full path when running eb commands

# Verify installation
eb --version

B2. Configure AWS Credentials

# Configure AWS CLI
aws configure

# Enter:
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region: us-east-1
# Default output format: json

# Verify configuration
aws sts get-caller-identity

B3. Prepare Application for Elastic Beanstalk

B3.1. Navigate to Backend Directory

cd /path/to/your/backend

B3.2. Create Platform-Specific Configuration

For Node.js:

On the current Amazon Linux 2/2023 Node.js platforms, the start command comes from a Procfile in the project root (the legacy aws:elasticbeanstalk:container:nodejs options were removed with those platforms):

web: npm start

Create .ebextensions/01-node-settings.config for environment settings:

option_settings:
  aws:elasticbeanstalk:application:environment:
    NODE_ENV: production
    NPM_USE_PRODUCTION: true

For Python:

Create .ebextensions/01-python-settings.config:

option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: application:app
  aws:elasticbeanstalk:application:environment:
    PYTHONPATH: "/var/app/current:$PYTHONPATH"

For Docker:

Create Dockerfile and Dockerrun.aws.json

B3.3. Configure Auto Scaling

Create .ebextensions/02-autoscaling.config:

option_settings:
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 4
    Cooldown: 360
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    UpperThreshold: 70
    UpperBreachScaleIncrement: 1
    LowerThreshold: 30
    LowerBreachScaleIncrement: -1
    BreachDuration: 5
    Period: 5

B3.4. Configure Load Balancer

Create .ebextensions/03-loadbalancer.config:

option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
    LoadBalancerType: application
  aws:elbv2:listener:default:
    ListenerEnabled: true
    Protocol: HTTP
  aws:elbv2:listener:443:
    ListenerEnabled: true
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/xxxxx
    SSLPolicy: ELBSecurityPolicy-2016-08

B4. Initialize and Deploy

B4.1. Initialize Elastic Beanstalk Application

# Initialize EB in your project directory
eb init

# Follow prompts:
# Select region: Choose your preferred region (e.g., us-east-1)
# Select application: Create new Application
# Application name: my-backend-api
# Platform: Choose your platform (Node.js, Python, etc.)
# Platform version: Latest recommended version
# SSH: Yes
# Key pair: Select existing or create new

This creates .elasticbeanstalk/config.yml:

branch-defaults:
  main:
    environment: my-backend-prod
global:
  application_name: my-backend-api
  default_ec2_keyname: my-backend-key
  default_platform: Node.js 20 running on 64bit Amazon Linux 2023
  default_region: us-east-1
  sc: git

B4.2. Configure Environment Variables

# Set environment variables for EB environment
eb setenv \
  NODE_ENV=production \
  DB_HOST=your-db-host \
  DB_PORT=5432 \
  DB_NAME=myapp \
  DB_USERNAME=dbuser \
  DB_PASSWORD=your_password \
  JWT_SECRET=your_jwt_secret \
  FRONTEND_URL=https://yourdomain.com \
  AWS_REGION=us-east-1

Or create .ebextensions/04-environment.config:

option_settings:
  aws:elasticbeanstalk:application:environment:
    NODE_ENV: production
    DB_HOST: your-db-host
    DB_PORT: 5432
    DB_NAME: myapp
    FRONTEND_URL: https://yourdomain.com

⚠️ Note: Don't commit sensitive credentials to git. Use AWS Secrets Manager or environment-specific configuration.

B4.3. Create Environment

# Create production environment
eb create my-backend-prod \
  --instance-type t3.small \
  --min-instances 1 \
  --max-instances 4 \
  --envvars NODE_ENV=production

# This will:
# - Create EC2 instances
# - Setup load balancer
# - Configure security groups
# - Deploy your application
# - Provide a URL: my-backend-prod.eba-xxxxx.us-east-1.elasticbeanstalk.com

Wait 5-10 minutes for environment creation. Monitor progress:

# Check environment status
eb status

# View events
eb events -f

# View logs
eb logs

B4.4. Deploy Application

# Deploy current code
eb deploy

# Deploy specific environment
eb deploy my-backend-prod

# Deploy and open in browser
eb deploy && eb open

B5. Configure Custom Domain and SSL

B5.1. Request SSL Certificate in ACM

# Request certificate via AWS Console or CLI
aws acm request-certificate \
  --domain-name api.yourdomain.com \
  --validation-method DNS \
  --region us-east-1

# Note the CertificateArn from output

B5.2. Validate Certificate

  1. Go to AWS Console → Certificate Manager
  2. Click on your certificate
  3. Click "Create records in Route 53" (or manually add DNS records)
  4. Wait for validation (5-30 minutes)

B5.3. Configure HTTPS Listener

# Add HTTPS listener with SSL certificate
eb setenv SSL_CERTIFICATE_ARN=arn:aws:acm:us-east-1:123456789012:certificate/xxxxx

Or update .ebextensions/03-loadbalancer.config with certificate ARN.

Redeploy:

eb deploy

B5.4. Setup Custom Domain

  1. Get load balancer DNS name:
eb status
# Or via AWS Console: Elastic Beanstalk → Environment → Configuration → Load balancer
  2. Add CNAME record in your DNS:
Type: CNAME
Name: api
Value: my-backend-prod.eba-xxxxx.us-east-1.elasticbeanstalk.com
TTL: 300

Or use Route 53 Alias record (recommended).

B6. Useful EB CLI Commands

# Environment management
eb list                 # List all environments
eb status              # Show environment status
eb health              # Show environment health
eb open                # Open environment in browser

# Deployment
eb deploy              # Deploy application
eb deploy --staged     # Deploy only staged changes

# Logs and monitoring
eb logs                # Fetch logs
eb logs --stream       # Stream logs in real-time
eb ssh                 # SSH into instance
eb events              # View recent events

# Configuration
eb config              # Edit environment configuration
eb setenv KEY=VALUE    # Set environment variable
eb printenv            # Print environment variables

# Scaling
eb scale 3             # Set number of instances to 3

# Termination
eb terminate           # Terminate environment

Option C: AWS Lambda + API Gateway (Serverless)

Best for: Event-driven APIs, sporadic traffic, pay-per-request pricing

C1. Install Serverless Framework

# Install Serverless Framework globally
npm install -g serverless

# Verify installation
serverless --version

# Alternative: Use npx (no global install)
npx serverless --version

C2. Configure AWS Credentials

# Configure Serverless with AWS credentials
serverless config credentials \
  --provider aws \
  --key AKIA... \
  --secret YOUR_SECRET_KEY \
  --profile serverless

# Or export environment variables
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY

C3. Prepare Application for Lambda

C3.1. Project Structure for Lambda

For Node.js (Express/NestJS):

Your project needs a thin Lambda handler that wraps your existing app:

backend/
├── src/
│   ├── handlers/        # Lambda handler functions
│   │   └── api.js
│   ├── app.js          # Express/NestJS app
│   └── ...
├── serverless.yml      # Serverless configuration
├── package.json
└── .env

C3.2. Install Dependencies

cd /path/to/backend

# Install Serverless plugins
npm install --save-dev serverless-offline serverless-dotenv-plugin

# Install AWS Lambda adapter
npm install aws-serverless-express  # For Express
# Or
npm install @vendia/serverless-express  # Newer fork

C3.3. Create Lambda Handler (Express Example)

Create src/handlers/api.js:

const serverless = require('@vendia/serverless-express');
const app = require('../app');  // Your Express app

// Configure handler
let serverlessHandler;

async function setup() {
  if (!serverlessHandler) {
    serverlessHandler = serverless({ app });
  }
  return serverlessHandler;
}

// Lambda handler
exports.handler = async (event, context) => {
  const handler = await setup();
  return handler(event, context);
};

Modify src/app.js to export app without listening:

const express = require('express');
const app = express();

// Your middleware and routes
app.use(express.json());
app.get('/health', (req, res) => res.json({ status: 'ok' }));
// ... other routes

// Export app without listening
module.exports = app;

// Only listen if not in Lambda
if (require.main === module) {
  const PORT = process.env.PORT || 3000;
  app.listen(PORT, () => {
    console.log(`Server running on port ${PORT}`);
  });
}

C3.4. Create Lambda Handler (NestJS Example)

Create src/lambda.ts:

import { NestFactory } from '@nestjs/core';
import { ExpressAdapter } from '@nestjs/platform-express';
import { AppModule } from './app.module';
import * as express from 'express';
import { Handler, Context } from 'aws-lambda';
import * as serverlessExpress from '@vendia/serverless-express';

let cachedServer: Handler;

async function bootstrap() {
  if (!cachedServer) {
    const expressApp = express();
    const app = await NestFactory.create(
      AppModule,
      new ExpressAdapter(expressApp),
    );

    // Enable CORS
    app.enableCors({
      origin: process.env.FRONTEND_URL || '*',
      credentials: true,
    });

    await app.init();
    cachedServer = serverlessExpress({ app: expressApp });
  }

  return cachedServer;
}

export const handler: Handler = async (
  event: any,
  context: Context,
) => {
  const server = await bootstrap();
  return server(event, context);
};

Update tsconfig.json to output to dist/ folder.

C4. Create serverless.yml Configuration

Create serverless.yml in project root:

service: my-backend-api

frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  stage: ${opt:stage, 'prod'}
  
  # Memory and timeout
  memorySize: 512
  timeout: 30
  
  # Environment variables
  environment:
    NODE_ENV: production
    STAGE: ${self:provider.stage}
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}
    DB_NAME: ${env:DB_NAME}
    DB_USERNAME: ${env:DB_USERNAME}
    DB_PASSWORD: ${env:DB_PASSWORD}
    JWT_SECRET: ${env:JWT_SECRET}
    FRONTEND_URL: ${env:FRONTEND_URL}
  
  # IAM permissions
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:Query
            - dynamodb:Scan
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:UpdateItem
            - dynamodb:DeleteItem
          Resource: "arn:aws:dynamodb:${self:provider.region}:*:table/*"
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
            - s3:DeleteObject
          Resource: "arn:aws:s3:::my-bucket/*"
        - Effect: Allow
          Action:
            - ses:SendEmail
            - ses:SendRawEmail
          Resource: "*"
  
  # VPC configuration (if accessing RDS in VPC)
  # vpc:
  #   securityGroupIds:
  #     - sg-xxxxx
  #   subnetIds:
  #     - subnet-xxxxx
  #     - subnet-yyyyy

functions:
  api:
    handler: src/handlers/api.handler   # For JavaScript (no build step)
    # handler: dist/lambda.handler      # For TypeScript NestJS (after npm run build)
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors:
            origin: ${env:FRONTEND_URL}  # browsers reject '*' when allowCredentials is true
            headers:
              - Content-Type
              - Authorization
            allowCredentials: true
      - http:
          path: /
          method: ANY
          cors:
            origin: '*'
            headers:
              - Content-Type
              - Authorization

  # Separate functions (alternative approach)
  # getUsers:
  #   handler: dist/handlers/users.getAll
  #   events:
  #     - http:
  #         path: /users
  #         method: GET
  
  # createUser:
  #   handler: dist/handlers/users.create
  #   events:
  #     - http:
  #         path: /users
  #         method: POST

plugins:
  - serverless-offline
  - serverless-dotenv-plugin

# Package configuration (Framework v3 uses "patterns"; include/exclude were removed)
package:
  individually: false
  patterns:
    - '!.git/**'
    - '!.github/**'
    - '!.vscode/**'
    - '!test/**'
    - '!coverage/**'
    - '!*.md'
    - '!.env*'
    - 'dist/**'
    - 'node_modules/**'

# Custom configuration
custom:
  serverless-offline:
    httpPort: 3000
    noPrependStageInUrl: true
  
  # Warm-up plugin (prevent cold starts)
  # warmup:
  #   default:
  #     enabled: true
  #     events:
  #       - schedule: rate(5 minutes)

C5. Deploy to Lambda

C5.1. Build Application

JavaScript:

# No build needed for JavaScript

TypeScript:

# Build TypeScript
npm run build

# Verify dist/ folder exists
ls dist/

C5.2. Deploy

# Deploy to AWS
serverless deploy

# Deploy to specific stage
serverless deploy --stage prod

# Deploy only function code (faster)
serverless deploy function -f api

# Deploy with verbose output
serverless deploy --verbose

Output will show:

Service Information
service: my-backend-api
stage: prod
region: us-east-1
stack: my-backend-api-prod
endpoints:
  ANY - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/{proxy+}
  ANY - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/
functions:
  api: my-backend-api-prod-api

Your API URL: https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod

C5.3. Test Deployment

# Test health endpoint
curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/health

# Test specific endpoint
curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/api/users

# View logs
serverless logs -f api -t

C6. Configure Custom Domain

C6.1. Request SSL Certificate

# Request certificate in us-east-1 (required for API Gateway)
aws acm request-certificate \
  --domain-name api.yourdomain.com \
  --validation-method DNS \
  --region us-east-1

C6.2. Configure Custom Domain in API Gateway

Via AWS Console:

  1. Go to API Gateway → Custom domain names
  2. Click Create
  3. Settings:
    Domain name: api.yourdomain.com
    Certificate: Select your ACM certificate
    Endpoint type: Regional
    
  4. Click Create domain name
  5. Note the API Gateway domain name (e.g., d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com)

Via Serverless Plugin:

Install plugin:

npm install --save-dev serverless-domain-manager

Add to serverless.yml:

plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.yourdomain.com
    certificateName: api.yourdomain.com
    basePath: ''
    stage: ${self:provider.stage}
    createRoute53Record: true
    endpointType: regional

Create domain:

serverless create_domain

Deploy:

serverless deploy

C6.3. Configure DNS

Add CNAME record:

Type: CNAME
Name: api
Value: d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com
TTL: 300

Or use Route 53 Alias record.

C7. Useful Serverless Commands

# Deployment
serverless deploy                 # Deploy entire service
serverless deploy -f api          # Deploy single function
serverless deploy --stage prod    # Deploy to specific stage

# Information
serverless info                   # Show service info
serverless info --verbose         # Show detailed info

# Logs
serverless logs -f api            # Fetch logs
serverless logs -f api -t         # Stream logs in real-time
serverless logs -f api --startTime 1h  # Logs from last hour

# Invocation
serverless invoke -f api          # Invoke function
serverless invoke -f api -l       # Invoke and show logs
serverless invoke local -f api    # Invoke locally

# Local development
serverless offline                # Run locally
serverless offline --port 3000    # Run on specific port

# Environment
serverless print                  # Print resolved serverless.yml

# Removal
serverless remove                 # Remove service from AWS
serverless remove --stage dev     # Remove specific stage

C8. Optimize Lambda Performance

C8.1. Reduce Cold Starts

Use Lambda Layers for dependencies:

Create layers/nodejs/package.json with heavy dependencies, then run npm install inside layers/nodejs/ so the modules land in nodejs/node_modules (the path Lambda mounts for the runtime):

{
  "dependencies": {
    "aws-sdk": "^2.1400.0"
  }
}

Update serverless.yml:

layers:
  dependencies:
    path: layers
    name: ${self:provider.stage}-dependencies
    description: Shared dependencies
    compatibleRuntimes:
      - nodejs20.x

functions:
  api:
    handler: dist/handlers/api.handler
    layers:
      - { Ref: DependenciesLambdaLayer }

Enable Provisioned Concurrency:

functions:
  api:
    handler: dist/handlers/api.handler
    provisionedConcurrency: 2  # Keep 2 instances warm

Use Warm-up Plugin:

npm install --save-dev serverless-plugin-warmup
plugins:
  - serverless-plugin-warmup

custom:
  warmup:
    default:
      enabled: true
      events:
        - schedule: rate(5 minutes)
      concurrency: 1

C8.2. Optimize Memory and Timeout

functions:
  api:
    memorySize: 1024  # More memory = more CPU = faster
    timeout: 29       # API Gateway max is 29 seconds

Note: Test different memory sizes for cost/performance balance.
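
Lambda bills in GB-seconds, so doubling memory doubles the per-millisecond price but often more than halves duration. A quick back-of-envelope comparison (the rate used here, about $0.0000166667 per GB-second, is the us-east-1 x86 price at the time of writing — verify against current pricing):

```shell
# Monthly compute cost for a given memory/duration/traffic profile
# usage: cost <memory_mb> <avg_duration_ms> <invocations_per_month>
cost() {
  awk -v mb="$1" -v ms="$2" -v n="$3" \
    'BEGIN { printf "%.2f\n", (mb/1024) * (ms/1000) * n * 0.0000166667 }'
}

cost 512 800 1000000    # 512 MB, 800 ms avg  -> 6.67
cost 1024 350 1000000   # 1024 MB, 350 ms avg -> 5.83 (cheaper AND faster)
```

Here the 1024 MB profile wins on both latency and cost, which is why it is worth benchmarking your function at several memory sizes rather than defaulting to the smallest.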

C8.3. Use Environment Variables Efficiently

provider:
  environment:
    CACHE_TTL: 3600
    LOG_LEVEL: info

functions:
  api:
    environment:
      SPECIFIC_CONFIG: value

Option D: AWS ECS with Fargate (Containerized)

Best for: Docker applications, microservices, complex deployments

D1. Prerequisites

# Install Docker
# Download from: https://www.docker.com/

# Verify installation
docker --version
docker-compose --version

# Install AWS CLI (if not already installed)
aws --version

D2. Create Dockerfile

Create Dockerfile in your backend root:

Node.js Example:

# Multi-stage build
FROM node:20-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install all dependencies (dev dependencies are needed for the build step)
RUN npm ci

# Copy application code
COPY . .

# Build application (if TypeScript)
RUN npm run build

# Drop dev dependencies so only production modules reach the final image
RUN npm prune --omit=dev

# Production image
FROM node:20-alpine

# Install dumb-init (proper signal handling)
RUN apk add --no-cache dumb-init

# Create non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001

# Set working directory
WORKDIR /app

# Copy node_modules from builder
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules

# Copy built application from builder
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => { process.exit(r.statusCode === 200 ? 0 : 1) })"

# Start application
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/main.js"]

Python Example:

FROM python:3.11-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1

# Create non-root user
RUN useradd -m -u 1001 appuser

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    postgresql-client \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements
COPY requirements.txt .

# Install Python dependencies
RUN pip install --upgrade pip && \
    pip install -r requirements.txt

# Copy application code
COPY --chown=appuser:appuser . .

# Switch to non-root user
USER appuser

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"

# Start application
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]

Create .dockerignore:

node_modules
npm-debug.log
dist
.git
.gitignore
.env
.env.*
README.md
.vscode
.idea
coverage
test
*.test.js
*.spec.ts
.DS_Store

D3. Test Docker Locally

# Build image
docker build -t my-backend:latest .

# Run container
docker run -p 3000:3000 \
  -e NODE_ENV=production \
  -e DB_HOST=localhost \
  --name backend-test \
  my-backend:latest

# Test endpoint
curl http://localhost:3000/health

# View logs
docker logs -f backend-test

# Stop container
docker stop backend-test
docker rm backend-test

D4. Create ECR Repository

# Create ECR repository
aws ecr create-repository \
  --repository-name my-backend \
  --region us-east-1

# Output will include repository URI:
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend

D5. Push Image to ECR

# Get ECR login password
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag image
docker tag my-backend:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest

# Push image
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest

D6. Create ECS Cluster

Via AWS Console:

  1. Go to ECS → Clusters
  2. Click Create Cluster
  3. Settings:
    Cluster name: my-backend-cluster
    Infrastructure: AWS Fargate (serverless)
    Namespace: my-backend (optional)
    Tags: (optional)
    
  4. Click Create

Via AWS CLI:

aws ecs create-cluster \
  --cluster-name my-backend-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy \
    capacityProvider=FARGATE,weight=1 \
    capacityProvider=FARGATE_SPOT,weight=1 \
  --region us-east-1

D7. Create Task Definition

Create task-definition.json:

{
  "family": "my-backend-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest",
      "cpu": 512,
      "memory": 1024,
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        {
          "name": "PORT",
          "value": "3000"
        }
      ],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password"
        },
        {
          "name": "JWT_SECRET",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:jwt-secret"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-backend",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "backend"
        }
      },
      "healthCheck": {
        "command": ["CMD-SHELL", "wget -q -O - http://localhost:3000/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ]
}

Create IAM Roles:

Task Execution Role (allows ECS to pull images, write logs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "*"
    }
  ]
}

Task Role (allows container to access AWS services):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:*:table/*"
    }
  ]
}

Register task definition:

# Create CloudWatch log group
aws logs create-log-group --log-group-name /ecs/my-backend

# Register task definition
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

D8. Create Application Load Balancer

Via AWS Console:

  1. Go to EC2 → Load Balancers
  2. Click Create Load Balancer → Application Load Balancer
  3. Settings:
    Name: my-backend-alb
    Scheme: Internet-facing
    IP address type: IPv4
    VPC: Default (or your VPC)
    Availability Zones: Select at least 2
    Security Group: Create new (allow HTTP 80, HTTPS 443)
    
  4. Listeners:
    • HTTP:80 → Redirect to HTTPS
    • HTTPS:443 → Forward to target group
  5. SSL Certificate: Select from ACM
  6. Click Create

Create Target Group:

Name: my-backend-tg
Target type: IP
Protocol: HTTP
Port: 3000
VPC: (your VPC)
Health check path: /health

D9. Create ECS Service

# Create service
aws ecs create-service \
  --cluster my-backend-cluster \
  --service-name my-backend-service \
  --task-definition my-backend-task:1 \
  --desired-count 2 \
  --launch-type FARGATE \
  --platform-version LATEST \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxx,subnet-yyy],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-backend-tg/xxx,containerName=backend,containerPort=3000" \
  --health-check-grace-period-seconds 60

D10. Configure Auto Scaling

# Register scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/my-backend-cluster/my-backend-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 \
  --max-capacity 10

# Create scaling policy (target tracking)
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/my-backend-cluster/my-backend-service \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name cpu-scaling \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration file://scaling-policy.json

scaling-policy.json:

{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 300,
  "ScaleOutCooldown": 60
}
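
Target tracking adjusts the desired task count so average CPU stays near TargetValue. A rough sketch of the proportional math (illustrative only, not AWS's exact algorithm; the clamp bounds correspond to the min/max capacity registered above):

```javascript
// Illustrative sketch of target-tracking scaling math (not AWS's exact algorithm).
// desired ≈ current * (actualMetric / targetMetric), clamped to [min, max].
function desiredTaskCount(currentCount, actualCpu, targetCpu, min, max) {
  const raw = Math.ceil(currentCount * (actualCpu / targetCpu));
  return Math.min(max, Math.max(min, raw));
}

// With the policy above (target 70%), 4 tasks averaging 90% CPU scale out:
console.log(desiredTaskCount(4, 90, 70, 2, 10)); // 6
console.log(desiredTaskCount(4, 30, 70, 2, 10)); // 2 (clamped to min capacity)
```

The ScaleInCooldown/ScaleOutCooldown values then rate-limit how often this adjustment is applied.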

D11. Update Service (Deployments)

# Build new image
docker build -t my-backend:v2 .

# Tag and push
docker tag my-backend:v2 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:v2
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:v2

# Update task definition with new image tag
# (Modify task-definition.json, change image tag to :v2)

# Register new task definition revision
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Update service to use new task definition
aws ecs update-service \
  --cluster my-backend-cluster \
  --service my-backend-service \
  --task-definition my-backend-task:2 \
  --force-new-deployment
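
The manual "change image tag to :v2" edit can be scripted. A minimal sketch, assuming the containerDefinitions layout shown in D7 (the `bumpImageTag` helper name is illustrative):

```javascript
// Sketch: rewrite the image tag in a task definition object before re-registering it.
// Assumes the containerDefinitions layout shown earlier; names are illustrative.
function bumpImageTag(taskDef, newTag) {
  const updated = JSON.parse(JSON.stringify(taskDef)); // deep copy, leave input intact
  for (const container of updated.containerDefinitions) {
    // Replace everything after the last ":" (the current tag) with the new tag.
    container.image = container.image.replace(/:[^:]+$/, `:${newTag}`);
  }
  return updated;
}

const def = {
  containerDefinitions: [
    { name: 'backend', image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest' }
  ]
};
console.log(bumpImageTag(def, 'v2').containerDefinitions[0].image);
// 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:v2
```

Write the result back to task-definition.json, then run the register-task-definition and update-service commands above.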

D12. Useful ECS Commands

# List clusters
aws ecs list-clusters

# List services
aws ecs list-services --cluster my-backend-cluster

# Describe service
aws ecs describe-services \
  --cluster my-backend-cluster \
  --services my-backend-service

# List tasks
aws ecs list-tasks \
  --cluster my-backend-cluster \
  --service-name my-backend-service

# Describe task
aws ecs describe-tasks \
  --cluster my-backend-cluster \
  --tasks arn:aws:ecs:us-east-1:123456789012:task/xxx

# View logs
aws logs tail /ecs/my-backend --follow

# Scale service
aws ecs update-service \
  --cluster my-backend-cluster \
  --service my-backend-service \
  --desired-count 5

# Stop task (force redeployment)
aws ecs stop-task \
  --cluster my-backend-cluster \
  --task arn:aws:ecs:us-east-1:123456789012:task/xxx

Part 2: Netlify Frontend Deployment

Step 1: Prepare Frontend for Netlify

1.1. Verify Build Configuration

Different frameworks have different build requirements:

React (Create React App):

package.json:

{
  "scripts": {
    "build": "react-scripts build"
  }
}

Build output: build/ directory

React (Vite):

package.json:

{
  "scripts": {
    "build": "vite build"
  }
}

Build output: dist/ directory

Next.js (Static Export):

next.config.js:

/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',  // Enable static export
  images: {
    unoptimized: true,  // Required for static export
  },
  trailingSlash: true,
  reactStrictMode: true,
}

module.exports = nextConfig

package.json:

{
  "scripts": {
    "build": "next build"
  }
}

Build output: out/ directory

Next.js (SSR with Netlify):

Install Netlify plugin:

npm install -D @netlify/plugin-nextjs

netlify.toml:

[build]
  command = "npm run build"
  publish = ".next"

[[plugins]]
  package = "@netlify/plugin-nextjs"

Vue.js (Vue CLI):

package.json:

{
  "scripts": {
    "build": "vue-cli-service build"
  }
}

Build output: dist/ directory

Nuxt.js:

nuxt.config.js:

export default {
  target: 'static',  // For static generation
  generate: {
    fallback: true
  }
}

package.json:

{
  "scripts": {
    "generate": "nuxt generate"
  }
}

Build command: npm run generate
Build output: dist/ directory

Angular:

package.json:

{
  "scripts": {
    "build": "ng build --configuration production"
  }
}

Build output: dist/project-name/ directory

Svelte/SvelteKit:

svelte.config.js:

import adapter from '@sveltejs/adapter-static';

export default {
  kit: {
    adapter: adapter({
      pages: 'build',
      assets: 'build',
      fallback: null
    })
  }
};

Build output: build/ directory

Gatsby:

Automatically configured for Netlify.

Build output: public/ directory

1.2. Configure API URL

Environment Variable Approach (Recommended):

Create .env.production:

# API Backend URL
REACT_APP_API_URL=https://api.yourdomain.com
# Or for Next.js:
NEXT_PUBLIC_API_URL=https://api.yourdomain.com
# Or for Vue:
VUE_APP_API_URL=https://api.yourdomain.com
# Or for Angular (environment.prod.ts):
# apiUrl: 'https://api.yourdomain.com'

⚠️ Important Prefixes:

  • React (CRA): REACT_APP_
  • Next.js: NEXT_PUBLIC_
  • Vue: VUE_APP_
  • Vite: VITE_

Usage in Code:

React/Next.js:

const API_URL = process.env.REACT_APP_API_URL || 'http://localhost:3000';
// or
const API_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:3000';

// Make API calls
fetch(`${API_URL}/api/users`)
  .then(res => res.json())
  .then(data => console.log(data));

Vue:

const API_URL = process.env.VUE_APP_API_URL || 'http://localhost:3000';

Angular (environment.prod.ts):

export const environment = {
  production: true,
  apiUrl: 'https://api.yourdomain.com'
};
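
Across frameworks the pattern is the same: read the build-time variable once, with a local fallback. A small helper keeps call sites consistent and avoids double-slash URLs (a sketch using the CRA variable name; substitute your framework's prefix):

```javascript
// Sketch: resolve the API base URL once (build-time env var with a local-dev
// fallback) and join paths without producing double slashes.
function apiUrl(path, base = process.env.REACT_APP_API_URL || 'http://localhost:3000') {
  return `${base.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

console.log(apiUrl('/api/users', 'https://api.yourdomain.com/'));
// https://api.yourdomain.com/api/users
```

Then call `fetch(apiUrl('/api/users'))` instead of concatenating strings at every call site.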

1.3. Test Build Locally

# Install dependencies
npm install

# Run build
npm run build

# Verify output directory
ls build/    # or dist/ or out/ depending on framework

# Test built site locally (optional)
npx serve build -s
# or
npx serve dist -s
# or
npx serve out -s

# Visit http://localhost:3000 and test

Ensure:

  • Build completes without errors
  • No broken links or missing assets
  • API calls work (if backend is running)

1.4. Create netlify.toml (Recommended)

Create netlify.toml in project root:

React (CRA) / Vue / Angular:

[build]
  command = "npm run build"
  publish = "build"  # or "dist" for Vue/Angular

[build.environment]
  NODE_VERSION = "20"
  NPM_VERSION = "10"

# Redirect rules for SPA
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200

# Security headers
[[headers]]
  for = "/*"
  [headers.values]
    X-Frame-Options = "DENY"
    X-Content-Type-Options = "nosniff"
    X-XSS-Protection = "1; mode=block"
    Referrer-Policy = "strict-origin-when-cross-origin"
    Permissions-Policy = "camera=(), microphone=(), geolocation=()"
    Strict-Transport-Security = "max-age=31536000; includeSubDomains; preload"

# Cache static assets
[[headers]]
  for = "/static/*"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"

[[headers]]
  for = "/*.js"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"

[[headers]]
  for = "/*.css"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"
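
The effect of the cache rules above can be sanity-checked locally. A deliberately simplified matcher (Netlify's real glob semantics are richer; this only mirrors the three rules in the example):

```javascript
// Simplified sketch of which Cache-Control header the rules above assign.
// Mirrors only the example's three patterns; Netlify's matching is more general.
const IMMUTABLE = 'public, max-age=31536000, immutable';

function cacheControlFor(path) {
  if (path.startsWith('/static/')) return IMMUTABLE;   // [[headers]] for "/static/*"
  if (/\.(js|css)$/.test(path)) return IMMUTABLE;      // "/*.js" and "/*.css"
  return null; // only the catch-all security headers apply
}

console.log(cacheControlFor('/static/media/logo.png')); // immutable
console.log(cacheControlFor('/index.html'));            // null
```

Hashed bundle filenames are what make the one-year immutable cache safe: a new build produces new URLs.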

Next.js (Static):

[build]
  command = "npm run build"
  publish = "out"

[build.environment]
  NODE_VERSION = "20"

[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200

[[headers]]
  for = "/*"
  [headers.values]
    X-Frame-Options = "DENY"
    X-Content-Type-Options = "nosniff"
    X-XSS-Protection = "1; mode=block"

[[headers]]
  for = "/_next/static/*"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"

Next.js (SSR with plugin):

[build]
  command = "npm run build"
  publish = ".next"

[[plugins]]
  package = "@netlify/plugin-nextjs"

Gatsby:

[build]
  command = "gatsby build"
  publish = "public"

[build.environment]
  NODE_VERSION = "20"

1.5. Configure .gitignore

Ensure build outputs are NOT committed:

# dependencies
node_modules/

# production build
build/
dist/
out/
.next/
public/  # Gatsby build output only; do NOT ignore for CRA/Vite, where public/ holds source assets

# environment variables
.env
.env.local
.env.production.local
.env.development.local
.env.test.local

# logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# misc
.DS_Store
.vscode/
.idea/

# Netlify
.netlify/

Step 2: Push Code to Git Repository

2.1. Initialize Git (if not already done)

# Initialize git
git init

# Add remote repository
git remote add origin https://github.com/your-username/your-frontend-repo.git

# Or for SSH:
git remote add origin [email protected]:your-username/your-frontend-repo.git

2.2. Commit and Push

# Add all files
git add .

# Commit
git commit -m "feat: prepare for Netlify deployment"

# Push to main branch
git push -u origin main

# Or if using master branch:
git push -u origin master

⚠️ Important: Ensure you've created the repository on GitHub/GitLab/Bitbucket first.

Step 3: Connect Repository to Netlify

3.1. Sign in to Netlify

  1. Go to app.netlify.com
  2. Sign in with your Git provider (GitHub/GitLab/Bitbucket)
  3. Authorize Netlify to access your repositories

3.2. Import Project

  1. Click "Add new site""Import an existing project"
  2. Select your Git provider
  3. Authorize Netlify (if first time)
  4. Select your repository from the list
  5. If repository not visible:
    • Click "Configure Netlify on GitHub"
    • Grant access to specific repository or all repositories

3.3. Configure Build Settings

Site settings:

Branch to deploy: main (or master)

Build settings:

| Framework | Build Command | Publish Directory |
|---|---|---|
| React (CRA) | npm run build | build |
| React (Vite) | npm run build | dist |
| Next.js (Static) | npm run build | out |
| Next.js (SSR) | npm run build | .next |
| Vue CLI | npm run build | dist |
| Nuxt.js | npm run generate | dist |
| Angular | npm run build | dist/project-name |
| Svelte | npm run build | build |
| SvelteKit | npm run build | build |
| Gatsby | gatsby build | public |
| Astro | npm run build | dist |

Advanced build settings:

Click "Show advanced":

Base directory: (leave empty unless monorepo)
Functions directory: (leave empty unless using Netlify Functions)

Environment variables:

Add environment variables (click "Add environment variable"):

REACT_APP_API_URL = https://api.yourdomain.com
# or
NEXT_PUBLIC_API_URL = https://api.yourdomain.com
# or
VUE_APP_API_URL = https://api.yourdomain.com

Build settings (alternative):

If a netlify.toml is present, build settings are read from it; settings in netlify.toml take precedence over settings entered in the UI.

3.4. Deploy Site

  1. Click "Deploy site"
  2. Netlify will:
    • Clone your repository
    • Install dependencies (npm install)
    • Run build command
    • Deploy to CDN
    • Assign temporary URL

Build process (example):

9:00:00 AM: Build ready to start
9:00:02 AM: build-image version: 12345abcde
9:00:02 AM: Fetching cached dependencies
9:00:05 AM: Installing dependencies
9:00:05 AM: Installing npm packages
9:02:30 AM: npm packages installed
9:02:31 AM: Started building the site
9:02:31 AM: Running build command: npm run build
9:04:45 AM: Build complete
9:04:46 AM: Deploying to production
9:04:58 AM: Site is live!

Wait 2-10 minutes (depending on project size).

3.5. Verify Deployment

  1. Check Build Logs:

    • Go to Deploys tab
    • Click on latest deployment
    • View build logs for any errors
  2. Visit Your Site:

    • Netlify assigns a random URL: https://random-name-12345.netlify.app
    • Click "Open production deploy"
    • Test all functionality
  3. Check for Errors:

    • Open browser DevTools (F12)
    • Check Console for JavaScript errors
    • Check Network tab for failed API calls
    • Verify environment variables loaded correctly

Step 4: Configure Environment Variables

4.1. Access Environment Variables

  1. In Netlify dashboard, go to Site configurationEnvironment variables
  2. Or from sidebar: Site settingsEnvironment variables

4.2. Add Variables

Click "Add a variable""Add a single variable"

Common environment variables:

Key: REACT_APP_API_URL
Value: https://api.yourdomain.com
Scopes: Production (or All scopes)

Key: REACT_APP_ENVIRONMENT
Value: production
Scopes: Production

Key: REACT_APP_VERSION
Value: 1.0.0
Scopes: All scopes

For different deploy contexts:

Production: Used for production deployments
Deploy previews: Used for pull request previews
Branch deploys: Used for branch deployments

Example:

# Production
REACT_APP_API_URL = https://api.yourdomain.com

# Deploy previews (PRs)
REACT_APP_API_URL = https://staging-api.yourdomain.com

# Branch deploys (dev branch)
REACT_APP_API_URL = https://dev-api.yourdomain.com
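
Netlify also exposes the deploy context to the build as the CONTEXT environment variable ("production", "deploy-preview", "branch-deploy"). If you prefer selecting the URL in build code rather than per-context UI variables, a sketch (the URLs are the placeholders from the example above):

```javascript
// Sketch: pick the API URL from Netlify's CONTEXT build-time variable.
// URLs are the illustrative placeholders used throughout this guide.
function apiUrlForContext(context) {
  switch (context) {
    case 'production':     return 'https://api.yourdomain.com';
    case 'deploy-preview': return 'https://staging-api.yourdomain.com';
    case 'branch-deploy':  return 'https://dev-api.yourdomain.com';
    default:               return 'http://localhost:3000'; // local dev
  }
}

console.log(apiUrlForContext(process.env.CONTEXT || 'dev'));
```

This keeps all three environments' URLs in version control instead of the dashboard.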

4.3. Redeploy After Adding Variables

Environment variables are only available after redeployment:

  1. Go to Deploys tab
  2. Click "Trigger deploy""Clear cache and deploy site"
  3. Wait for deployment to complete

Verify variables:

Add a debug endpoint or console log (temporarily):

console.log('API URL:', process.env.REACT_APP_API_URL);

Check browser console after deployment.

Step 5: Configure Deploy Settings

5.1. Deploy Contexts

Site configurationBuild & deployDeploy contexts

Production branch:

Branch: main

Branch deploys:

  • All branches: Deploy all branches (useful for testing)
  • Only production branch: Deploy only main/master
  • Let me add individual branches: Select specific branches

Deploy previews:

  • Any pull request against your production branch: Recommended
  • None: Disable PR previews

5.2. Build Hooks

Create webhooks to trigger deployments:

  1. Go to Build hooks
  2. Click "Add build hook"
  3. Settings:
    Build hook name: Deploy from external source
    Branch to build: main
    
  4. Click "Save"
  5. Copy webhook URL: https://api.netlify.com/build_hooks/xxxxx

Use cases:

  • Trigger deploy from CI/CD pipeline
  • Deploy when CMS content changes
  • Scheduled deployments

Trigger via curl:

curl -X POST -d '{}' https://api.netlify.com/build_hooks/xxxxx

5.3. Post Processing

Asset optimization:

Site configurationBuild & deployPost processing

✓ Bundle CSS: Combine CSS files
✓ Minify CSS: Minimize CSS file sizes
✓ Minify JS: Minimize JavaScript file sizes
✓ Compress images: Lossless image compression
✓ Pretty URLs: Strip .html extension from URLs

⚠️ Note: May increase build time. Test before enabling in production.

Step 6: Configure Redirects and Rewrites

6.1. SPA Redirect (Single Page Applications)

For React, Vue, Angular apps that use client-side routing:

Method 1: netlify.toml (Recommended)

Already configured in Step 1.4.

Method 2: _redirects file

Create public/_redirects (or in publish directory):

/*  /index.html  200

Method 3: Netlify UI

Site configurationRedirects and rewritesAdd rule

From: /*
To: /index.html
Status: 200

6.2. API Proxy (Avoid CORS)

Proxy API requests through Netlify:

netlify.toml:

[[redirects]]
  from = "/api/*"
  to = "https://api.yourdomain.com/:splat"
  status = 200
  force = true
  headers = {X-From = "Netlify"}

Then in frontend code:

// Instead of:
fetch('https://api.yourdomain.com/users')

// Use:
fetch('/api/users')

Benefits:

  • Avoid CORS issues during development
  • Hide actual API URL from client
  • Easier URL management

6.3. Redirect Rules Examples

Redirect HTTP to HTTPS:

[[redirects]]
  from = "http://yourdomain.com/*"
  to = "https://yourdomain.com/:splat"
  status = 301
  force = true

Redirect www to non-www:

[[redirects]]
  from = "https://www.yourdomain.com/*"
  to = "https://yourdomain.com/:splat"
  status = 301
  force = true

Redirect old paths:

[[redirects]]
  from = "/old-page"
  to = "/new-page"
  status = 301

[[redirects]]
  from = "/blog/*"
  to = "/articles/:splat"
  status = 301

Language redirects based on geolocation:

[[redirects]]
  from = "/*"
  to = "/en/:splat"
  status = 302
  conditions = {Country = ["US", "CA", "GB"]}

[[redirects]]
  from = "/*"
  to = "/es/:splat"
  status = 302
  conditions = {Country = ["ES", "MX", "AR"]}
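
In all of these rules, `:splat` is replaced by whatever the trailing `*` matched. The rewrite can be sketched as:

```javascript
// Sketch of how a rule like `from = "/blog/*"  to = "/articles/:splat"`
// rewrites a request path. Real Netlify matching supports more pattern forms.
function applyRedirect(fromPattern, toPattern, path) {
  const prefix = fromPattern.replace(/\*$/, ''); // "/blog/*" -> "/blog/"
  if (!path.startsWith(prefix)) return null;     // rule does not match
  return toPattern.replace(':splat', path.slice(prefix.length));
}

console.log(applyRedirect('/blog/*', '/articles/:splat', '/blog/2024/hello'));
// /articles/2024/hello
```

Rules are evaluated top to bottom; the first match wins, which is why the SPA catch-all `/*` rule should come last.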

Step 7: Configure Forms (Optional)

Netlify provides built-in form handling:

7.1. HTML Forms

Add netlify attribute to form:

<form name="contact" method="POST" data-netlify="true">
  <input type="hidden" name="form-name" value="contact">
  <input type="text" name="name" required>
  <input type="email" name="email" required>
  <textarea name="message" required></textarea>
  <button type="submit">Send</button>
</form>

7.2. React Forms

function ContactForm() {
  const handleSubmit = (e) => {
    e.preventDefault();
    const form = e.target;
    
    fetch('/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams(new FormData(form)).toString()
    })
      .then(() => alert('Form submitted!'))
      .catch((error) => alert(error));
  };

  return (
    <form name="contact" method="POST" onSubmit={handleSubmit} data-netlify="true">
      <input type="hidden" name="form-name" value="contact" />
      <input type="text" name="name" required />
      <input type="email" name="email" required />
      <textarea name="message" required></textarea>
      <button type="submit">Send</button>
    </form>
  );
}
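
The `URLSearchParams` body in the submit handler produces standard application/x-www-form-urlencoded output, which is what Netlify's form handler expects:

```javascript
// What the fetch body in the handler above actually sends:
// URL-encoded key=value pairs, with spaces as "+" and reserved chars escaped.
const body = new URLSearchParams({
  'form-name': 'contact',
  name: 'Ada Lovelace',
  email: 'ada@example.com',
}).toString();

console.log(body);
// form-name=contact&name=Ada+Lovelace&email=ada%40example.com
```

The hidden `form-name` field must be included, since Netlify uses it to route the submission to the right form.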

7.3. Form Notifications

Site configurationFormsForm notifications

Configure email notifications or webhooks when form is submitted.

7.4. Spam Protection

Enable reCAPTCHA or honeypot:

<form name="contact" method="POST" data-netlify="true" data-netlify-recaptcha="true">
  <!-- form fields -->
  <div data-netlify-recaptcha="true"></div>
  <button type="submit">Send</button>
</form>

Step 8: Test Deployment

8.1. Functional Testing

Test all major features:

✓ Homepage loads
✓ Navigation works
✓ All pages accessible
✓ Images load correctly
✓ Forms submit (if applicable)
✓ API calls work
✓ Authentication works (if applicable)
✓ Responsive design (mobile/tablet)

8.2. Performance Testing

Use Chrome DevTools Lighthouse or GTmetrix:

Check:

  • Performance score
  • First Contentful Paint
  • Largest Contentful Paint
  • Time to Interactive
  • Total Blocking Time
  • Cumulative Layout Shift

8.3. API Integration Testing

Test API endpoints from deployed frontend:

// Test health endpoint
fetch('https://api.yourdomain.com/health')
  .then(res => res.json())
  .then(data => console.log('Backend healthy:', data))
  .catch(err => console.error('Backend error:', err));

// Test actual endpoint
fetch('https://api.yourdomain.com/api/users')
  .then(res => res.json())
  .then(data => console.log('Users:', data))
  .catch(err => console.error('Error:', err));

Check browser console for:

  • CORS errors
  • API errors
  • Network failures
  • Authentication issues

8.4. Cross-Browser Testing

Test on multiple browsers:

  • Chrome
  • Firefox
  • Safari
  • Edge
  • Mobile browsers (iOS Safari, Android Chrome)

Use tools such as BrowserStack or LambdaTest to cover real devices and browser versions you don't have locally.

Step 9: Configure Custom Domain

9.1. Add Custom Domain in Netlify

  1. Go to Domain managementDomains
  2. Click "Add a domain"
  3. Enter your domain: yourdomain.com
  4. Click "Verify"

Netlify will check if you own the domain.

9.2. Configure DNS

You have two options:

Option A: Use Netlify DNS (Recommended)

  1. Netlify will show nameservers:

    dns1.p01.nsone.net
    dns2.p01.nsone.net
    dns3.p01.nsone.net
    dns4.p01.nsone.net
    
  2. Update nameservers at your domain registrar (GoDaddy, Namecheap, etc.):

    • Log in to your domain registrar
    • Find DNS/Nameserver settings
    • Replace existing nameservers with Netlify's nameservers
    • Save changes
  3. Wait for DNS propagation (2-48 hours, usually < 1 hour)

  4. Verify:

    nslookup yourdomain.com
    dig yourdomain.com

Option B: Use External DNS

Keep your current DNS provider and add CNAME record:

Type: CNAME
Name: www
Value: random-name-12345.netlify.app
TTL: 300

For apex/root domain (@):

Type: A
Name: @
Value: 75.2.60.5
TTL: 300

Or use ALIAS/ANAME record (if provider supports):

Type: ALIAS
Name: @
Value: random-name-12345.netlify.app
TTL: 300

9.3. Configure Domain Aliases

Add www subdomain:

  1. In Netlify, go to Domain managementDomains
  2. Click "Add domain alias"
  3. Enter: www.yourdomain.com
  4. Netlify automatically redirects www to non-www (or vice versa)

Configure redirect preference:

  • Primary domain: yourdomain.com
  • Redirect: www.yourdomain.comyourdomain.com (or the opposite)

9.4. Verify SSL Certificate

Netlify automatically provisions SSL certificate:

  1. Domain managementHTTPS
  2. Wait for certificate provisioning (can take up to 24 hours)
  3. Status should show: "Netlify provides HTTPS for your site"

Force HTTPS redirect:

✓ Force HTTPS on all pages

HSTS (HTTP Strict Transport Security):

✓ Enable HSTS

Step 10: Configure Functions (Optional)

Netlify Functions allow serverless backend functionality.

10.1. Create Functions Directory

mkdir netlify/functions

10.2. Create Function

Create netlify/functions/hello.js:

exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({
      message: 'Hello from Netlify Function!',
      timestamp: new Date().toISOString()
    })
  };
};

10.3. Configure in netlify.toml

[build]
  functions = "netlify/functions"

10.4. Use Function

After deployment, function available at:

https://yourdomain.com/.netlify/functions/hello

Call from frontend:

fetch('/.netlify/functions/hello')
  .then(res => res.json())
  .then(data => console.log(data));
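
Because handlers are plain async functions, they can be unit-tested without deploying. A sketch, re-declaring the hello handler from above and invoking it directly:

```javascript
// Netlify Function handlers are plain async functions: (event, context) => response.
// They can be invoked directly in tests, no deploy required.
const handler = async (event, context) => ({
  statusCode: 200,
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello from Netlify Function!' }),
});

(async () => {
  const res = await handler({}, {});
  console.log(res.statusCode, JSON.parse(res.body).message);
})();
```

The same pattern works for asserting error paths (e.g. a 500 response) before pushing.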

10.5. Advanced Function Example (Database Query)

Create netlify/functions/get-users.js:

const { Client } = require('pg');

exports.handler = async (event, context) => {
  // Database connection
  const client = new Client({
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    ssl: { rejectUnauthorized: false }
  });

  try {
    await client.connect();
    
    const result = await client.query('SELECT * FROM users LIMIT 10');
    
    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*'
      },
      body: JSON.stringify(result.rows)
    };
  } catch (error) {
    console.error('Database error:', error);
    
    return {
      statusCode: 500,
      body: JSON.stringify({ error: 'Database query failed' })
    };
  } finally {
    await client.end();
  }
};

Install dependencies:

cd netlify/functions
npm init -y
npm install pg

Database Setup on AWS

Option 1: Amazon RDS (Relational Databases)

1.1. Choose Database Engine

  • PostgreSQL - Most popular, feature-rich, open-source
  • MySQL - Widely used, good performance
  • MariaDB - MySQL fork with additional features
  • Amazon Aurora - AWS-managed, MySQL/PostgreSQL compatible (more expensive but better performance)
  • SQL Server - Microsoft SQL Server
  • Oracle - Enterprise database

Recommendation: PostgreSQL for most applications

1.2. Create RDS Instance

Via AWS Console:

  1. Go to RDSDatabasesCreate database

  2. Choose creation method:

    • Standard create (more options)
    • Easy create (simplified)
  3. Engine options:

    Engine type: PostgreSQL
    Version: PostgreSQL 15.4 (or latest)
    
  4. Templates:

    • Production (Multi-AZ, enhanced monitoring)
    • Dev/Test (Single-AZ)
    • Free tier (t3.micro, 20GB, single-AZ)
  5. Settings:

    DB instance identifier: my-app-db
    Master username: dbadmin
    Master password: YourSecurePassword123!
    Confirm password: YourSecurePassword123!
    
  6. Instance configuration:

    DB instance class: db.t3.micro (Free tier) or db.t3.small
    Storage type: General Purpose SSD (gp3)
    Allocated storage: 20 GB
    Storage autoscaling: Enable (max: 100 GB)
    
  7. Availability & durability:

    Multi-AZ deployment: 
      - No (dev/test)
      - Yes (production - automatic failover)
    
  8. Connectivity:

    VPC: Default VPC (or custom)
    Subnet group: default
    Public access: Yes (if accessing from outside AWS)
                   No (if only from EC2/Lambda in same VPC)
    VPC security group: Create new
      - Name: rds-security-group
    Availability Zone: No preference
    
  9. Database authentication:

    Password authentication (standard)
    Password and IAM database authentication (for IAM roles)
    Password and Kerberos authentication (for enterprise)
    
  10. Additional configuration:

    Initial database name: myapp_db
    DB parameter group: default
    Option group: default
    Backup:
      - Enable automatic backups
      - Backup retention period: 7 days
      - Backup window: No preference
    Encryption:
      - Enable encryption (recommended for production)
    Monitoring:
      - Enable Enhanced Monitoring
      - Granularity: 60 seconds
    Maintenance:
      - Enable auto minor version upgrade
    Deletion protection: Enable (for production)
    
  11. Click Create database

Wait 5-15 minutes for database creation.

1.3. Configure Security Group

  1. Go to RDSDatabases → Select your database
  2. Click on VPC security groups link
  3. Edit inbound rules:

For public access (development):

Type: PostgreSQL (or MySQL)
Port: 5432 (PostgreSQL) or 3306 (MySQL)
Source: My IP (your current IP)
Description: Allow from my IP

For EC2 access:

Type: PostgreSQL
Port: 5432
Source: sg-xxxxx (EC2 security group ID)
Description: Allow from EC2 instances

For Lambda access (same VPC):

Type: PostgreSQL
Port: 5432
Source: sg-xxxxx (Lambda security group ID)
Description: Allow from Lambda functions

1.4. Get Connection Details

  1. Go to RDSDatabases → Select your database
  2. Copy Endpoint & port:
    Endpoint: my-app-db.xxxxx.us-east-1.rds.amazonaws.com
    Port: 5432
    

1.5. Connect to Database

From local machine (psql):

# Install PostgreSQL client
sudo apt install postgresql-client  # Ubuntu
brew install postgresql              # macOS

# Connect to database
psql -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
     -U dbadmin \
     -d myapp_db \
     -p 5432

# Enter password when prompted

From application (Node.js):

const { Pool } = require('pg');

const pool = new Pool({
  host: 'my-app-db.xxxxx.us-east-1.rds.amazonaws.com',
  port: 5432,
  database: 'myapp_db',
  user: 'dbadmin',
  password: 'YourSecurePassword123!',
  ssl: {
    rejectUnauthorized: false
  },
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Test connection
pool.query('SELECT NOW()', (err, res) => {
  if (err) {
    console.error('Database connection error:', err);
  } else {
    console.log('Database connected:', res.rows[0].now);
  }
});
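
Brief connection failures are normal (for example during an RDS Multi-AZ failover), so application code usually retries with exponential backoff. A sketch of the delay schedule only (pure calculation; wiring it into `pool.query` retries is left to your error-handling layer):

```javascript
// Sketch: exponential backoff delays in ms, capped, for retrying transient
// connection errors (e.g. during an RDS Multi-AZ failover). Illustrative values.
function backoffDelays(attempts, baseMs = 100, capMs = 2000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(capMs, baseMs * 2 ** i)
  );
}

console.log(backoffDelays(5)); // [ 100, 200, 400, 800, 1600 ]
```

Adding random jitter to each delay is a common refinement so that all app instances don't retry in lockstep.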

From application (Python):

import psycopg2

conn = psycopg2.connect(
    host="my-app-db.xxxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    database="myapp_db",
    user="dbadmin",
    password="YourSecurePassword123!",
    sslmode="require"
)

# Test connection
cur = conn.cursor()
cur.execute('SELECT version()')
version = cur.fetchone()
print(f'PostgreSQL version: {version[0]}')
cur.close()
conn.close()

1.6. Create Database Schema

-- Connect to database
\c myapp_db

-- Create users table
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    username VARCHAR(100) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create index
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_username ON users(username);

-- Create posts table
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    title VARCHAR(255) NOT NULL,
    content TEXT,
    published BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create index
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Verify tables created
\dt

1.7. Database Migration (Using TypeORM - Node.js)

Install TypeORM:

npm install typeorm pg reflect-metadata

Create ormconfig.json:

{
  "type": "postgres",
  "host": "my-app-db.xxxxx.us-east-1.rds.amazonaws.com",
  "port": 5432,
  "username": "dbadmin",
  "password": "YourSecurePassword123!",
  "database": "myapp_db",
  "synchronize": false,
  "logging": true,
  "entities": ["src/entities/**/*.ts"],
  "migrations": ["src/migrations/**/*.ts"],
  "subscribers": ["src/subscribers/**/*.ts"],
  "cli": {
    "entitiesDir": "src/entities",
    "migrationsDir": "src/migrations",
    "subscribersDir": "src/subscribers"
  },
  "ssl": {
    "rejectUnauthorized": false
  }
}

Create migration:

npx typeorm migration:create -n InitialSchema

Run migration:

npx typeorm migration:run

1.8. Backup and Restore

Automated backups (configured during creation):

  • RDS automatically backs up database
  • Retention: 1-35 days
  • Point-in-time recovery available

Manual snapshot:

  1. Go to RDSDatabases → Select database
  2. ActionsTake snapshot
  3. Enter snapshot name
  4. Click Take snapshot

Restore from snapshot:

  1. Go to RDSSnapshots
  2. Select snapshot
  3. ActionsRestore snapshot
  4. Configure new instance settings
  5. Click Restore DB instance

Export database (pg_dump):

# Export to a custom-format dump (-F c)
pg_dump -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
        -U dbadmin \
        -d myapp_db \
        -F c \
        -f backup.dump

# Restore from the dump file
pg_restore -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
           -U dbadmin \
           -d myapp_db \
           -F c \
           backup.dump

Option 2: Amazon DynamoDB (NoSQL)

2.1. Create DynamoDB Table

Via AWS Console:

  1. Go to DynamoDBTablesCreate table

  2. Table details:

    Table name: Users
    Partition key: userId (String)
    Sort key: (optional) timestamp (Number)
    
  3. Table settings:

    • Customize settings
    • Default settings (easier)
  4. Table class:

    • DynamoDB Standard (frequent access)
    • DynamoDB Standard-IA (infrequent access, cheaper)
  5. Capacity mode:

    • On-demand: Pay per request (good for unpredictable traffic)
    • Provisioned: Set read/write capacity units (cheaper for predictable traffic)
      • Read capacity: 5 units
      • Write capacity: 5 units
      • Auto scaling: Enable
  6. Encryption:

    • Owned by Amazon DynamoDB (free)
    • AWS managed key (KMS - additional cost)
    • Customer managed key (KMS - most control)
  7. Click Create table

2.2. Add Global Secondary Index (GSI)

  1. Select your table
  2. Indexes tab → Create index
  3. Settings:
    Partition key: email (String)
    Sort key: (optional)
    Index name: EmailIndex
    Projected attributes: All
    
  4. Click Create index

2.3. Use DynamoDB in Application (Node.js)

const AWS = require('aws-sdk');

// Configure AWS SDK (v2; on EC2/Lambda, prefer an IAM role over static access keys)
AWS.config.update({
  region: 'us-east-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

const dynamodb = new AWS.DynamoDB.DocumentClient();

// Put item
const putItem = async (userId, data) => {
  const params = {
    TableName: 'Users',
    Item: {
      userId: userId,
      email: data.email,
      username: data.username,
      createdAt: Date.now()
    }
  };

  try {
    await dynamodb.put(params).promise();
    console.log('Item added successfully');
  } catch (error) {
    console.error('Error adding item:', error);
  }
};

// Get item
const getItem = async (userId) => {
  const params = {
    TableName: 'Users',
    Key: {
      userId: userId
    }
  };

  try {
    const result = await dynamodb.get(params).promise();
    return result.Item;
  } catch (error) {
    console.error('Error getting item:', error);
  }
};

// Query by GSI
const getUserByEmail = async (email) => {
  const params = {
    TableName: 'Users',
    IndexName: 'EmailIndex',
    KeyConditionExpression: 'email = :email',
    ExpressionAttributeValues: {
      ':email': email
    }
  };

  try {
    const result = await dynamodb.query(params).promise();
    return result.Items[0];
  } catch (error) {
    console.error('Error querying by email:', error);
  }
};

// Update item
const updateItem = async (userId, updates) => {
  const params = {
    TableName: 'Users',
    Key: {
      userId: userId
    },
    UpdateExpression: 'set username = :username, updatedAt = :updatedAt',
    ExpressionAttributeValues: {
      ':username': updates.username,
      ':updatedAt': Date.now()
    },
    ReturnValues: 'ALL_NEW'
  };

  try {
    const result = await dynamodb.update(params).promise();
    return result.Attributes;
  } catch (error) {
    console.error('Error updating item:', error);
  }
};

// Delete item
const deleteItem = async (userId) => {
  const params = {
    TableName: 'Users',
    Key: {
      userId: userId
    }
  };

  try {
    await dynamodb.delete(params).promise();
    console.log('Item deleted successfully');
  } catch (error) {
    console.error('Error deleting item:', error);
  }
};
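For bulk inserts, note that DynamoDB's batchWrite API accepts at most 25 put/delete requests per call, so larger arrays must be chunked first. A minimal sketch — the chunking is pure logic, and batchPutUsers assumes the DocumentClient and Users table from the examples above:

```javascript
// Split an array into chunks of at most `size` items.
const chunk = (items, size = 25) => {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
};

// Write many users, 25 at a time (DynamoDB's batchWrite limit).
// Production code should also retry any UnprocessedItems in the response.
const batchPutUsers = async (dynamodb, users) => {
  for (const batch of chunk(users, 25)) {
    await dynamodb.batchWrite({
      RequestItems: {
        Users: batch.map((user) => ({ PutRequest: { Item: user } })),
      },
    }).promise();
  }
};
```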

Option 3: MongoDB on AWS (DocumentDB or EC2)

3.1. Amazon DocumentDB (MongoDB-compatible)

Create Cluster:

  1. Go to Amazon DocumentDB → Clusters → Create
  2. Configuration:
    Cluster identifier: my-mongodb-cluster
    Engine version: 5.0
    Instance class: db.t3.medium
    Number of instances: 1 (or 3 for production)
    Authentication: Username and password
    Username: dbadmin
    Password: YourSecurePassword123!
    VPC: Default
    Subnet group: default
    
  3. Click Create cluster

Connection string:

mongodb://dbadmin:YourSecurePassword123!@my-mongodb-cluster.cluster-xxxxx.us-east-1.docdb.amazonaws.com:27017/?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred

3.2. MongoDB on EC2 (Self-Managed)

# SSH into EC2
ssh -i ~/.ssh/my-key.pem ubuntu@YOUR_EC2_IP

# Import MongoDB public GPG key
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
   sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor

# Add MongoDB repository
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list

# Update package database
sudo apt update

# Install MongoDB
sudo apt install -y mongodb-org

# Start MongoDB
sudo systemctl start mongod
sudo systemctl enable mongod

# Check status
sudo systemctl status mongod

# Secure MongoDB
mongosh
> use admin
> db.createUser({
    user: "admin",
    pwd: "YourSecurePassword123!",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  })
> exit

# Edit MongoDB config
sudo nano /etc/mongod.conf

# Enable authentication
security:
  authorization: enabled

# Allow remote connections
net:
  bindIp: 0.0.0.0
  port: 27017

# Restart MongoDB
sudo systemctl restart mongod

Connection string:

mongodb://admin:YourSecurePassword123!@YOUR_EC2_IP:27017/myapp_db?authSource=admin

Domain Configuration

1. Purchase Domain

Choose a domain registrar:

  • Namecheap - Affordable, good support
  • GoDaddy - Popular, easy to use
  • Squarespace Domains (formerly Google Domains) - Simple interface
  • AWS Route 53 - Integrated with AWS
  • Cloudflare - Free DNS, good performance

2. Configure DNS Records

2.1. For Netlify Frontend

If using Netlify DNS:

Netlify automatically configures:

A     @    75.2.60.5
CNAME www  yourdomain.netlify.app

If using external DNS:

Add these records:

# Apex domain (yourdomain.com)
Type: A
Name: @
Value: 75.2.60.5
TTL: 300

# www subdomain
Type: CNAME
Name: www
Value: random-name-12345.netlify.app
TTL: 300

Or use ALIAS/ANAME for apex:

Type: ALIAS
Name: @
Value: random-name-12345.netlify.app
TTL: 300

2.2. For AWS Backend

If using Elastic IP:

Type: A
Name: api
Value: 54.123.45.67 (your Elastic IP)
TTL: 300

If using Load Balancer:

Type: CNAME
Name: api
Value: my-backend-alb-xxxxx.us-east-1.elb.amazonaws.com
TTL: 300

Or use ALIAS (Route 53 only):

Type: A (Alias)
Name: api
Value: ALB DNS name
TTL: 300

If using API Gateway:

Type: CNAME
Name: api
Value: d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com
TTL: 300

2.3. Email Configuration (Optional)

For sending emails from your domain:

MX Records (for receiving):

Type: MX
Name: @
Priority: 10
Value: mail.yourdomain.com
TTL: 3600

SPF Record (prevent spoofing):

Type: TXT
Name: @
Value: v=spf1 include:_spf.google.com ~all
TTL: 3600

DKIM Record (email authentication):

Type: TXT
Name: default._domainkey
Value: v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA...
TTL: 3600

DMARC Record (email policy):

Type: TXT
Name: _dmarc
Value: v=DMARC1; p=quarantine; rua=mailto:[email protected]
TTL: 3600

3. Verify DNS Propagation

# Check A record
dig yourdomain.com A

# Check CNAME record
dig www.yourdomain.com CNAME
dig api.yourdomain.com CNAME

# Check from different locations
nslookup yourdomain.com 8.8.8.8  # Google DNS
nslookup yourdomain.com 1.1.1.1  # Cloudflare DNS

# Online tools
# https://dnschecker.org
# https://www.whatsmydns.net

DNS propagation typically takes:

  • Minutes to hours for most changes
  • Up to 48 hours for nameserver changes
  • Faster with lower TTL values

SSL/HTTPS Setup

1. Netlify SSL (Automatic)

Netlify automatically provisions SSL certificates via Let's Encrypt.

Enable HTTPS:

  1. Go to Domain management → HTTPS
  2. Netlify provisions certificate automatically (5-24 hours)
  3. Enable "Force HTTPS"
  4. Enable "HSTS" (recommended)

Verify SSL:

curl -I https://yourdomain.com
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com

2. AWS SSL Certificates

2.1. Request Certificate in ACM (AWS Certificate Manager)

Via AWS Console:

  1. Go to ACM (Certificate Manager)
  2. Ensure you're in us-east-1 region (required for CloudFront and API Gateway)
  3. Click "Request a certificate"
  4. Certificate type:
    • Public certificate (for public domains)
    • Private certificate (for internal use)
  5. Domain names:
    api.yourdomain.com
    *.yourdomain.com (wildcard - optional)
    
  6. Validation method:
    • DNS validation (recommended, automatic renewal)
    • Email validation (manual)
  7. Click "Request"

2.2. Validate Certificate

DNS Validation:

  1. ACM provides CNAME records for validation
  2. Add CNAME record to your DNS:
    Type: CNAME
    Name: _xxxxx.api.yourdomain.com
    Value: _xxxxx.acm-validations.aws
    TTL: 300
    
  3. Or click "Create records in Route 53" (if using Route 53)
  4. Wait for validation (5-30 minutes)
  5. Status changes to "Issued"

Email Validation:

  1. ACM sends emails to domain administrators
  2. Click validation link in email
  3. Certificate status changes to "Issued"

2.3. Use Certificate

For EC2 with Nginx:

Certificates cannot be exported from ACM, so for Nginx on EC2 use Let's Encrypt instead (see EC2 section).

For Load Balancer:

  1. Go to EC2 → Load Balancers → Select your ALB
  2. Listeners → Add listener
  3. Protocol: HTTPS
  4. Port: 443
  5. Default SSL certificate: Select from ACM
  6. Security policy: ELBSecurityPolicy-2016-08 (or a newer recommended TLS policy)
  7. Default action: Forward to target group
  8. Save

For API Gateway:

  1. Go to API Gateway → Custom domain names
  2. Select your domain
  3. Configurations:
    • ACM certificate: Select your certificate
    • Endpoint type: Regional or Edge
  4. Save

For CloudFront:

  1. Go to CloudFront → Distributions → Select distribution
  2. General → Edit
  3. SSL Certificate:
    • Custom SSL Certificate
    • Select certificate from ACM (must be in us-east-1)
  4. Save

3. SSL Best Practices

3.1. Force HTTPS Redirect

Nginx:

server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

Load Balancer:

  • Listener HTTP:80 → Redirect to HTTPS:443

Netlify:

  • Enable "Force HTTPS" in settings

3.2. HSTS (HTTP Strict Transport Security)

Nginx:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

Netlify:

  • Enable HSTS in Domain settings

Load Balancer:

  • Add custom header in target group attributes

3.3. Certificate Renewal

Let's Encrypt (Nginx/EC2):

  • Auto-renews via Certbot systemd timer
  • Verify: sudo systemctl status certbot.timer

ACM:

  • Auto-renews if using DNS validation
  • No action required

Manual renewal (Let's Encrypt):

sudo certbot renew
sudo systemctl reload nginx

3.4. Test SSL Configuration

Use SSL testing tools:

SSL Labs:

https://www.ssllabs.com/ssltest/analyze.html?d=api.yourdomain.com

Target: A or A+ rating

Common issues:

  • Weak cipher suites
  • Missing intermediate certificates
  • Incorrect certificate chain
  • No HSTS header
  • Vulnerable to known attacks

Environment Variables Configuration

1. Frontend Environment Variables (Netlify)

1.1. Add Variables in Netlify UI

  1. Site configuration → Environment variables
  2. Click "Add a variable"
  3. Add key-value pairs:

Common variables:

REACT_APP_API_URL = https://api.yourdomain.com
REACT_APP_ENV = production
REACT_APP_GA_TRACKING_ID = UA-XXXXXXXXX-X
REACT_APP_STRIPE_PUBLIC_KEY = pk_live_xxxxx
REACT_APP_SENTRY_DSN = https://[email protected]/xxxxx

For Next.js:

NEXT_PUBLIC_API_URL = https://api.yourdomain.com
NEXT_PUBLIC_ENV = production
NEXT_PUBLIC_ANALYTICS_ID = G-XXXXXXXXXX

For Vue:

VUE_APP_API_URL = https://api.yourdomain.com
VUE_APP_ENV = production
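However the variables are injected, build-time tools like CRA, Next.js, and Vite simply inline them into the bundle, so a missing value silently becomes undefined. Centralizing access in one config module catches that at startup instead of at request time. A minimal sketch using the CRA-style names above — note that bundlers typically replace only literal process.env.NAME expressions, so each variable is referenced explicitly:

```javascript
// config.js — validated access to build-time environment variables.
// requireEnv throws if a variable was missing at build time.
const requireEnv = (name, value) => {
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Call once at startup so a misconfigured deploy fails fast.
const loadConfig = () => ({
  // Literal process.env.REACT_APP_* references, so bundlers can inline them.
  apiUrl: requireEnv('REACT_APP_API_URL', process.env.REACT_APP_API_URL),
  env: process.env.REACT_APP_ENV || 'development', // optional, with default
});
```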

1.2. Deploy Context Variables

Set different values for different contexts:

Production:
  REACT_APP_API_URL = https://api.yourdomain.com

Deploy Previews:
  REACT_APP_API_URL = https://staging-api.yourdomain.com

Branch Deploys (dev):
  REACT_APP_API_URL = https://dev-api.yourdomain.com

1.3. Sensitive Variables

⚠️ Never expose in frontend:

  • Database credentials
  • API secret keys (only public keys)
  • AWS access keys
  • Private API keys

Use backend for sensitive operations.

2. Backend Environment Variables (AWS)

2.1. EC2 Environment Variables

Method 1: .env file

# Create .env file
nano /var/www/backend/.env

# Add variables
NODE_ENV=production
PORT=3000
DB_HOST=my-db.xxxxx.rds.amazonaws.com
DB_PASSWORD=SecurePassword123!
JWT_SECRET=your-secret-key-min-32-chars

Secure file:

chmod 600 /var/www/backend/.env

Method 2: Export in shell

# Add to .bashrc or .profile
export NODE_ENV=production
export DB_HOST=my-db.xxxxx.rds.amazonaws.com

Method 3: systemd environment file

Create /etc/environment.d/backend.conf:

NODE_ENV=production
DB_HOST=my-db.xxxxx.rds.amazonaws.com

Or in systemd service file:

[Service]
EnvironmentFile=/var/www/backend/.env

2.2. Elastic Beanstalk Environment Variables

Via EB CLI:

eb setenv \
  NODE_ENV=production \
  DB_HOST=my-db.xxxxx.rds.amazonaws.com \
  DB_PASSWORD=SecurePassword123! \
  JWT_SECRET=your-secret-key

Via AWS Console:

  1. Elastic Beanstalk → Environments → Select environment
  2. Configuration → Software → Edit
  3. Environment properties → Add variables
  4. Apply

2.3. Lambda Environment Variables

In serverless.yml:

provider:
  environment:
    NODE_ENV: production
    DB_HOST: ${env:DB_HOST}
    JWT_SECRET: ${env:JWT_SECRET}

Via AWS Console:

  1. Lambda → Functions → Select function
  2. Configuration → Environment variables → Edit
  3. Add key-value pairs
  4. Save

2.4. ECS Environment Variables

In task definition JSON:

{
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        {
          "name": "PORT",
          "value": "3000"
        }
      ],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:region:account:secret:name"
        }
      ]
    }
  ]
}

3. AWS Secrets Manager (Recommended for Production)

3.1. Create Secret

Via AWS Console:

  1. Go to Secrets Manager → Secrets → Store a new secret
  2. Secret type:
    • Credentials for RDS database
    • Other type of secret (custom)
  3. Key/value pairs:
    DB_PASSWORD: SecurePassword123!
    JWT_SECRET: your-jwt-secret-key
    API_KEY: your-api-key
    
  4. Secret name: prod/backend/config
  5. Encryption key: Default (or custom KMS key)
  6. Rotation: Enable (optional)
  7. Store

3.2. Access Secret in Application

Node.js:

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });

async function getSecret(secretName) {
  try {
    const data = await secretsManager.getSecretValue({ 
      SecretId: secretName 
    }).promise();
    
    return JSON.parse(data.SecretString);
  } catch (error) {
    console.error('Error retrieving secret:', error);
    throw error;
  }
}

// Use in app initialization
(async () => {
  const secrets = await getSecret('prod/backend/config');
  
  process.env.DB_PASSWORD = secrets.DB_PASSWORD;
  process.env.JWT_SECRET = secrets.JWT_SECRET;
  
  // Start application
  startApp();
})();

Python:

import boto3
import json

def get_secret(secret_name, region='us-east-1'):
    client = boto3.client('secretsmanager', region_name=region)
    
    try:
        response = client.get_secret_value(SecretId=secret_name)
        return json.loads(response['SecretString'])
    except Exception as e:
        print(f'Error retrieving secret: {e}')
        raise

# Use in app
secrets = get_secret('prod/backend/config')
db_password = secrets['DB_PASSWORD']
jwt_secret = secrets['JWT_SECRET']
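getSecretValue is a network call, so fetching secrets on every request adds latency and per-call Secrets Manager cost. A small TTL cache keeps retrieval to roughly one call per interval; the fetcher is injected so the helper works with either of the functions above (the 5-minute TTL is an arbitrary example):

```javascript
// Wrap a secret-fetching function with an in-memory TTL cache.
const makeSecretCache = (fetchFn, ttlMs = 5 * 60 * 1000) => {
  const cache = new Map(); // secretName -> { value, expiresAt }

  return async (secretName) => {
    const entry = cache.get(secretName);
    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // still fresh — no network call
    }
    const value = await fetchFn(secretName);
    cache.set(secretName, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
};
```

Usage: `const getCachedSecret = makeSecretCache(getSecret);` then call `getCachedSecret('prod/backend/config')` wherever the secret is needed.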

3.3. IAM Permissions for Secrets Manager

Add to EC2/Lambda IAM role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/backend/*"
    }
  ]
}

4. Environment Variable Best Practices

4.1. Naming Conventions

# Use UPPER_CASE with underscores
DATABASE_URL
API_KEY
JWT_SECRET

# Prefix by framework (frontend)
REACT_APP_API_URL
NEXT_PUBLIC_API_URL
VUE_APP_API_URL
VITE_API_URL

# Organize by category
DB_HOST
DB_PORT
DB_NAME
DB_USERNAME
DB_PASSWORD

AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY

SMTP_HOST
SMTP_PORT
SMTP_USER
SMTP_PASSWORD

4.2. Security

✓ Never commit .env files to git
✓ Use Secrets Manager for sensitive data
✓ Rotate secrets regularly
✓ Use different secrets for each environment
✓ Limit IAM permissions to minimum required
✓ Never log secret values
✓ Use encrypted connections for secret retrieval
✗ Don't hardcode secrets in code
✗ Don't expose secrets in frontend
✗ Don't use same secrets across environments
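"Never log secret values" is easy to violate accidentally when dumping a config object for debugging. One defensive pattern is to mask known-sensitive keys before anything reaches the logger — the key list below is illustrative and should be tuned to your own variable names:

```javascript
// Return a copy of `obj` with values of sensitive keys masked.
// Matching is a case-insensitive substring check on the key name.
const SENSITIVE = ['password', 'secret', 'token', 'key'];

const redact = (obj) => {
  const out = {};
  for (const [k, v] of Object.entries(obj)) {
    const sensitive = SENSITIVE.some((s) => k.toLowerCase().includes(s));
    out[k] = sensitive ? '***REDACTED***' : v;
  }
  return out;
};
```

Usage: `console.log('Config:', redact({ ...process.env }));`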

4.3. Validation

Validate environment variables on app startup:

const requiredEnvVars = [
  'NODE_ENV',
  'PORT',
  'DB_HOST',
  'DB_PASSWORD',
  'JWT_SECRET',
  'FRONTEND_URL'
];

for (const envVar of requiredEnvVars) {
  if (!process.env[envVar]) {
    console.error(`Missing required environment variable: ${envVar}`);
    process.exit(1);
  }
}

console.log('✓ All required environment variables are set');

CORS Configuration

1. Understanding CORS

Cross-Origin Resource Sharing (CORS) allows frontend (Netlify) to make requests to backend (AWS) on different domains.

Same-Origin Policy blocks requests like:

Frontend:  https://yourdomain.com
Backend:   https://api.yourdomain.com  ← Blocked without CORS
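Two URLs share an origin only when scheme, host, and port all match, which is exactly why the subdomain split above needs CORS. The rule can be checked directly with the standard URL class:

```javascript
// An "origin" is the scheme + host + port triple; URL.origin computes it.
const sameOrigin = (a, b) => new URL(a).origin === new URL(b).origin;

// The split deployment above crosses origins:
sameOrigin('https://yourdomain.com', 'https://api.yourdomain.com'); // false (host differs)
sameOrigin('https://yourdomain.com/page', 'https://yourdomain.com/api'); // true (paths don't matter)
sameOrigin('https://yourdomain.com', 'http://yourdomain.com'); // false (scheme differs)
```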

2. Backend CORS Configuration

2.1. Node.js (Express)

Install cors package:

npm install cors

Basic configuration:

const express = require('express');
const cors = require('cors');

const app = express();

// Enable CORS for all origins (development only)
app.use(cors());

// Start server
app.listen(3000);

Production configuration:

const express = require('express');
const cors = require('cors');

const app = express();

// Configure CORS
const corsOptions = {
  origin: [
    'https://yourdomain.com',
    'https://www.yourdomain.com'
  ],
  credentials: true,
  optionsSuccessStatus: 200,
  methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'OPTIONS'],
  allowedHeaders: [
    'Content-Type',
    'Authorization',
    'X-Requested-With',
    'Accept',
    'Origin'
  ],
  exposedHeaders: ['Content-Range', 'X-Content-Range'],
  maxAge: 86400 // 24 hours
};

app.use(cors(corsOptions));

// Handle preflight requests
app.options('*', cors(corsOptions));

app.listen(3000);

Dynamic origin (environment-based):

const allowedOrigins = process.env.CORS_ORIGINS 
  ? process.env.CORS_ORIGINS.split(',')
  : ['http://localhost:3000'];

const corsOptions = {
  origin: function (origin, callback) {
    // Allow requests with no origin (mobile apps, curl, etc.)
    if (!origin) return callback(null, true);
    
    if (allowedOrigins.indexOf(origin) !== -1) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true
};

app.use(cors(corsOptions));

2.2. Node.js (NestJS)

main.ts:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Enable CORS
  app.enableCors({
    origin: [
      'https://yourdomain.com',
      'https://www.yourdomain.com',
      /\.yourdomain\.com$/  // Regex for all subdomains
    ],
    credentials: true,
    methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'OPTIONS'],
    allowedHeaders: [
      'Content-Type',
      'Authorization',
      'X-Requested-With'
    ]
  });

  await app.listen(3000);
}
bootstrap();

2.3. Python (Flask)

Install flask-cors:

pip install flask-cors

Configure:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)

# Enable CORS
CORS(app, origins=[
    'https://yourdomain.com',
    'https://www.yourdomain.com'
], supports_credentials=True)

# Or with more options
CORS(app, resources={
    r"/api/*": {
        "origins": ["https://yourdomain.com"],
        "methods": ["GET", "POST", "PUT", "DELETE"],
        "allow_headers": ["Content-Type", "Authorization"],
        "expose_headers": ["Content-Range"],
        "max_age": 86400
    }
})

@app.route('/api/users')
def get_users():
    return {'users': []}

if __name__ == '__main__':
    app.run()

2.4. Python (Django)

Install django-cors-headers:

pip install django-cors-headers

settings.py:

INSTALLED_APPS = [
    ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # Must be before CommonMiddleware
    'django.middleware.common.CommonMiddleware',
    ...
]

# CORS settings
CORS_ALLOWED_ORIGINS = [
    'https://yourdomain.com',
    'https://www.yourdomain.com',
]

CORS_ALLOW_CREDENTIALS = True

CORS_ALLOW_METHODS = [
    'GET',
    'POST',
    'PUT',
    'PATCH',
    'DELETE',
    'OPTIONS',
]

CORS_ALLOW_HEADERS = [
    'accept',
    'accept-encoding',
    'authorization',
    'content-type',
    'dnt',
    'origin',
    'user-agent',
    'x-csrftoken',
    'x-requested-with',
]

2.5. Python (FastAPI)

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "https://yourdomain.com",
        "https://www.yourdomain.com"
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
    expose_headers=["Content-Range"],
    max_age=86400
)

@app.get("/api/users")
async def get_users():
    return {"users": []}

2.6. Go (Gin Framework)

package main

import (
    "github.com/gin-gonic/gin"
    "github.com/gin-contrib/cors"
)

func main() {
    router := gin.Default()

    // Configure CORS
    config := cors.Config{
        AllowOrigins:     []string{
            "https://yourdomain.com",
            "https://www.yourdomain.com",
        },
        AllowMethods:     []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
        AllowHeaders:     []string{"Origin", "Content-Type", "Authorization"},
        ExposeHeaders:    []string{"Content-Length"},
        AllowCredentials: true,
        MaxAge:           86400,
    }
    router.Use(cors.New(config))

    router.GET("/api/users", func(c *gin.Context) {
        c.JSON(200, gin.H{"users": []string{}})
    })

    router.Run(":3000")
}

3. Nginx CORS Configuration

If using Nginx as reverse proxy, you can handle CORS there:

server {
    listen 443 ssl http2;
    server_name api.yourdomain.com;

    location / {
        # Proxy to backend
        proxy_pass http://localhost:3000;

        # CORS headers
        add_header 'Access-Control-Allow-Origin' 'https://yourdomain.com' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;

        # Handle preflight requests
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' 'https://yourdomain.com' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
            add_header 'Access-Control-Max-Age' 86400;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }

        # Standard proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

4. AWS API Gateway CORS

Enable CORS in API Gateway:

  1. Go to API Gateway → Select your API
  2. Select resource/method
  3. Actions → Enable CORS
  4. Configure:
    Access-Control-Allow-Origins: https://yourdomain.com
    Access-Control-Allow-Headers: Content-Type,Authorization
    Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS
    Access-Control-Allow-Credentials: true
    
  5. Enable CORS and replace existing headers
  6. Deploy API

Manual CORS configuration:

Add OPTIONS method:

exports.handler = async (event) => {
    return {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': 'https://yourdomain.com',
            'Access-Control-Allow-Headers': 'Content-Type,Authorization',
            'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS',
            'Access-Control-Allow-Credentials': 'true'
        },
        body: ''
    };
};

5. Testing CORS

5.1. Browser DevTools

// Test from browser console (on yourdomain.com)
fetch('https://api.yourdomain.com/api/users', {
    method: 'GET',
    headers: {
        'Content-Type': 'application/json'
    },
    credentials: 'include'
})
.then(res => res.json())
.then(data => console.log(data))
.catch(err => console.error('CORS error:', err));

Check Network tab for:

  • Preflight OPTIONS request
  • Response headers (Access-Control-Allow-Origin, etc.)
  • CORS errors in console

5.2. curl Testing

# Test preflight (OPTIONS) request
curl -H "Origin: https://yourdomain.com" \
     -H "Access-Control-Request-Method: GET" \
     -H "Access-Control-Request-Headers: Content-Type" \
     -X OPTIONS \
     https://api.yourdomain.com/api/users \
     -v

# Check for headers in response:
# Access-Control-Allow-Origin: https://yourdomain.com
# Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS

5.3. Online CORS Testers

  • https://www.test-cors.org

6. Common CORS Issues and Solutions

Issue 1: "No 'Access-Control-Allow-Origin' header"

Solution: Add CORS middleware to backend

Issue 2: "CORS policy: Credentials flag is 'true', but Access-Control-Allow-Credentials is missing"

Solution: Add credentials: true to CORS config

Issue 3: "CORS preflight request returns 401 Unauthorized"

Solution: Allow OPTIONS requests without authentication

Issue 4: "Wildcard '*' cannot be used when credentials are true"

Solution: Specify exact origin instead of '*'

Issue 5: Custom headers blocked

Solution: Add headers to Access-Control-Allow-Headers
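Issues 1 and 4 usually share one fix: compute the Access-Control-Allow-Origin value per request by checking the incoming Origin header against an allowlist and echoing it back only on a match — this is what the cors packages above do internally. A framework-agnostic sketch:

```javascript
// Return the Access-Control-Allow-Origin value for a request, or null
// if the origin is not allowed. Echoing the matched origin (instead of
// '*') is required when Access-Control-Allow-Credentials is true.
const pickAllowedOrigin = (requestOrigin, allowlist) => {
  if (!requestOrigin) return null; // non-browser clients send no Origin
  return allowlist.includes(requestOrigin) ? requestOrigin : null;
};

// Example wiring (Express-style; `req`/`res` names are illustrative):
// const origin = pickAllowedOrigin(req.headers.origin, ['https://yourdomain.com']);
// if (origin) {
//   res.setHeader('Access-Control-Allow-Origin', origin);
//   res.setHeader('Access-Control-Allow-Credentials', 'true');
//   res.setHeader('Vary', 'Origin'); // caches must key on Origin
// }
```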


Testing the Deployment

1. Functional Testing

1.1. Frontend Testing Checklist

✓ Homepage loads correctly
✓ All pages accessible (no 404 errors)
✓ Navigation works (all links)
✓ Images load properly
✓ CSS styles applied correctly
✓ JavaScript bundles load
✓ Forms validate and submit
✓ Client-side routing works (SPA)
✓ 404 page displays for invalid routes
✓ Meta tags correct (SEO)
✓ Favicon displays
✓ Mobile responsive design
✓ Different browsers (Chrome, Firefox, Safari, Edge)
✓ Different devices (desktop, tablet, mobile)

1.2. Backend Testing Checklist

✓ Health endpoint responds: /health or /api/health
✓ All API endpoints respond correctly
✓ Authentication works (login, signup, logout)
✓ Authorization works (protected routes)
✓ Database connections established
✓ Database queries work correctly
✓ File uploads work (if applicable)
✓ Email sending works (if applicable)
✓ Third-party API integrations work
✓ Error handling works correctly
✓ Rate limiting works (if implemented)
✓ CORS headers present
✓ HTTPS redirect works
✓ SSL certificate valid

1.3. Integration Testing

Test frontend-backend integration:

// Test API connectivity
const testAPI = async () => {
  try {
    // Health check
    const healthRes = await fetch('https://api.yourdomain.com/health');
    console.log('Health:', await healthRes.json());

    // GET request
    const usersRes = await fetch('https://api.yourdomain.com/api/users');
    console.log('Users:', await usersRes.json());

    // POST request
    const loginRes = await fetch('https://api.yourdomain.com/api/auth/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        email: '[email protected]',
        password: 'password123'
      })
    });
    console.log('Login:', await loginRes.json());

    console.log('✓ All tests passed');
  } catch (error) {
    console.error('✗ Test failed:', error);
  }
};

testAPI();

2. Performance Testing

2.1. Frontend Performance

Use Lighthouse (Chrome DevTools):

  1. Open Chrome DevTools (F12)
  2. Go to Lighthouse tab
  3. Select categories: Performance, Accessibility, Best Practices, SEO
  4. Click Analyze page load

Target scores:

  • Performance: > 90
  • Accessibility: > 90
  • Best Practices: > 90
  • SEO: > 90

Key metrics:

  • First Contentful Paint (FCP): < 1.8s
  • Largest Contentful Paint (LCP): < 2.5s
  • Time to Interactive (TTI): < 3.8s
  • Total Blocking Time (TBT): < 200ms
  • Cumulative Layout Shift (CLS): < 0.1

Optimization tips:

  • Minimize JavaScript bundles
  • Lazy load images
  • Use code splitting
  • Enable compression
  • Leverage browser caching
  • Use CDN (Netlify provides this)
  • Optimize images (WebP format)
  • Preload critical resources

GTmetrix:

Test at: https://gtmetrix.com/

Provides:

  • Performance scores
  • Page load time
  • Total page size
  • Number of requests
  • Waterfall chart
  • Recommendations

WebPageTest:

Test at: https://www.webpagetest.org/

Provides:

  • Multi-location testing
  • Connection speed simulation
  • Filmstrip view
  • Video capture
  • Detailed metrics

2.2. Backend Performance

Load Testing with Apache Bench:

# Install Apache Bench
sudo apt install apache2-utils  # Ubuntu
brew install httpd               # macOS

# Simple load test (100 requests, 10 concurrent)
ab -n 100 -c 10 https://api.yourdomain.com/api/users

# With authentication header
ab -n 1000 -c 50 -H "Authorization: Bearer YOUR_TOKEN" \
   https://api.yourdomain.com/api/users

# POST request with JSON
ab -n 100 -c 10 -p data.json -T application/json \
   https://api.yourdomain.com/api/login

Load Testing with Artillery:

Install:

npm install -g artillery

Create load-test.yml:

config:
  target: 'https://api.yourdomain.com'
  phases:
    - duration: 60
      arrivalRate: 10
      name: "Warm up"
    - duration: 120
      arrivalRate: 50
      name: "Ramp up"
    - duration: 60
      arrivalRate: 100
      name: "Sustained load"
  defaults:
    headers:
      Content-Type: 'application/json'

scenarios:
  - name: "Get users"
    flow:
      - get:
          url: "/api/users"
  
  - name: "Login and get profile"
    flow:
      - post:
          url: "/api/auth/login"
          json:
            email: "[email protected]"
            password: "password123"
          capture:
            - json: "$.token"
              as: "token"
      - get:
          url: "/api/profile"
          headers:
            Authorization: "Bearer {{ token }}"

Run test:

artillery run load-test.yml

Load Testing with k6:

Install:

# macOS
brew install k6

# Ubuntu
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6

Create load-test.js:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '30s', target: 20 },  // Ramp up
    { duration: '1m', target: 50 },   // Stay at 50
    { duration: '30s', target: 0 },   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],   // Error rate under 1%
  },
};

export default function () {
  // Test GET endpoint
  let res = http.get('https://api.yourdomain.com/api/users');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });

  sleep(1);
}

Run test:

k6 run load-test.js

2.3. Database Performance

Monitor query performance:

PostgreSQL:

-- Enable query logging
ALTER SYSTEM SET log_min_duration_statement = 100; -- Log queries > 100ms
SELECT pg_reload_conf();

-- View slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time and mean_exec_time)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;

-- Analyze table
ANALYZE users;

-- Explain query plan
EXPLAIN ANALYZE SELECT * FROM users WHERE email = '[email protected]';

MySQL:

-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.1;

-- View slow queries
SELECT * FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;

-- Explain query
EXPLAIN SELECT * FROM users WHERE email = '[email protected]';

3. Security Testing

3.1. SSL/TLS Testing

SSL Labs:

https://www.ssllabs.com/ssltest/analyze.html?d=yourdomain.com
https://www.ssllabs.com/ssltest/analyze.html?d=api.yourdomain.com

Target: A or A+ rating

Check certificate:

# Check certificate expiration
echo | openssl s_client -servername api.yourdomain.com \
       -connect api.yourdomain.com:443 2>/dev/null | \
       openssl x509 -noout -dates

# Check certificate details
echo | openssl s_client -servername api.yourdomain.com \
       -connect api.yourdomain.com:443 2>/dev/null | \
       openssl x509 -noout -text

3.2. Security Headers

Test security headers:

Use: https://securityheaders.com/

Check for:

  • Content-Security-Policy
  • X-Frame-Options
  • X-Content-Type-Options
  • Strict-Transport-Security
  • Referrer-Policy
  • Permissions-Policy

Add headers in Nginx:

add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;  # legacy header, superseded by CSP in modern browsers
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# 'unsafe-inline' and 'unsafe-eval' weaken CSP; tighten these for production
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;

Add headers in Netlify (netlify.toml):

[[headers]]
  for = "/*"
  [headers.values]
    X-Frame-Options = "SAMEORIGIN"
    X-Content-Type-Options = "nosniff"
    X-XSS-Protection = "1; mode=block"
    Referrer-Policy = "strict-origin-when-cross-origin"
    Content-Security-Policy = "default-src 'self'; script-src 'self' 'unsafe-inline';"

3.3. Vulnerability Scanning

OWASP ZAP:

Download: https://www.zaproxy.org/

  1. Open ZAP
  2. Automated Scan → Enter URL
  3. Attack → Start scan
  4. Review findings and fix issues

npm audit (Node.js):

# Scan for vulnerabilities
npm audit

# Fix vulnerabilities
npm audit fix

# Force fix (may break)
npm audit fix --force

Snyk:

# Install Snyk CLI
npm install -g snyk

# Authenticate
snyk auth

# Test for vulnerabilities
snyk test

# Monitor project
snyk monitor

3.4. Penetration Testing

Common tests:

# SQL Injection (use --data-urlencode so quotes and spaces are encoded correctly)
curl -G "https://api.yourdomain.com/api/users" --data-urlencode "id=1' OR '1'='1"

# XSS
curl -G "https://api.yourdomain.com/api/search" --data-urlencode "q=<script>alert('XSS')</script>"

# Path Traversal
curl -G "https://api.yourdomain.com/api/files" --data-urlencode "path=../../etc/passwd"

# Authentication bypass
curl "https://api.yourdomain.com/api/admin" -H "Authorization: Bearer fake_token"

4. Monitoring Setup

4.1. Uptime Monitoring

UptimeRobot (Free):

  1. Sign up: https://uptimerobot.com/
  2. Add New Monitor:
    Monitor Type: HTTP(s)
    Friendly Name: My Website
    URL: https://yourdomain.com
    Monitoring Interval: 5 minutes
    
  3. Add alert contacts (email, SMS, Slack)

Pingdom:

Similar to UptimeRobot, with more features.

AWS CloudWatch Synthetics:

Create canary to monitor:

# Via AWS Console
# CloudWatch → Synthetics → Create canary
# Choose blueprint: Heartbeat monitoring
# Enter URL: https://yourdomain.com
# Schedule: Every 5 minutes

4.2. Error Tracking

Sentry:

Install:

npm install @sentry/node  # Backend
npm install @sentry/react # Frontend

Backend (Node.js):

const Sentry = require('@sentry/node');

Sentry.init({
  dsn: 'https://[email protected]/xxxxx',
  environment: process.env.NODE_ENV,
  tracesSampleRate: 1.0,
});

// Capture errors
app.use(Sentry.Handlers.errorHandler());

Frontend (React):

import * as Sentry from '@sentry/react';

Sentry.init({
  dsn: 'https://[email protected]/xxxxx',
  environment: process.env.NODE_ENV,
  integrations: [new Sentry.BrowserTracing()],
  tracesSampleRate: 1.0,
});

4.3. Performance Monitoring

AWS CloudWatch:

Automatically monitors:

  • EC2 metrics (CPU, memory, disk, network)
  • RDS metrics (connections, queries, storage)
  • Lambda metrics (invocations, duration, errors)
  • Load balancer metrics (requests, latency, errors)

New Relic:

Install agent:

npm install newrelic

Configure newrelic.js:

exports.config = {
  app_name: ['My Backend API'],
  license_key: 'YOUR_LICENSE_KEY',
  logging: {
    level: 'info'
  }
};

Require in app:

require('newrelic');
const express = require('express');
// ... rest of app

4.4. Log Aggregation

AWS CloudWatch Logs:

Already configured for:

  • Lambda functions (automatic)
  • ECS tasks (via awslogs driver)
  • EC2 (via CloudWatch agent)

View logs:

# Install CloudWatch Logs CLI
pip install awslogs

# View logs
awslogs get /aws/lambda/my-function --start='1h ago'
awslogs get /ecs/my-app --watch

Logtail (formerly Timber.io):

Install:

npm install @logtail/node

Use:

const { Logtail } = require('@logtail/node');
const logtail = new Logtail('YOUR_SOURCE_TOKEN');

logtail.info('Application started');
logtail.error('Error occurred', { error: err });


CI/CD Pipeline Setup

1. GitHub Actions CI/CD

1.1. Create Workflow File

Create .github/workflows/deploy.yml:

name: Deploy to Production

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

env:
  NODE_VERSION: '20'
  AWS_REGION: us-east-1

jobs:
  # Frontend deployment (Netlify)
  deploy-frontend:
    name: Deploy Frontend to Netlify
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Build application
        env:
          REACT_APP_API_URL: ${{ secrets.REACT_APP_API_URL }}
        run: npm run build

      - name: Deploy to Netlify
        uses: netlify/actions/cli@master
        with:
          args: deploy --prod --dir=build
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}

  # Backend deployment (EC2)
  deploy-backend-ec2:
    name: Deploy Backend to EC2
    runs-on: ubuntu-latest
    needs: deploy-frontend
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Run tests
        run: |
          cd backend
          npm ci
          npm test

      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /var/www/backend
            git pull origin main
            npm install --production
            npm run build
            pm2 restart backend-api

  # Backend deployment (Lambda)
  deploy-backend-lambda:
    name: Deploy Backend to Lambda
    runs-on: ubuntu-latest
    needs: deploy-frontend
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Install dependencies
        run: |
          cd backend
          npm ci

      - name: Deploy to Lambda
        run: |
          cd backend
          npm install -g serverless
          serverless deploy --stage prod
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

  # Database migration
  migrate-database:
    name: Run Database Migrations
    runs-on: ubuntu-latest
    needs: deploy-backend-ec2
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Run migrations
        run: |
          cd backend
          npm ci
          npm run migrate
        env:
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_PORT: ${{ secrets.DB_PORT }}
          DB_NAME: ${{ secrets.DB_NAME }}
          DB_USERNAME: ${{ secrets.DB_USERNAME }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}

  # Smoke tests
  smoke-tests:
    name: Run Smoke Tests
    runs-on: ubuntu-latest
    needs: [deploy-frontend, deploy-backend-ec2]
    steps:
      - name: Test frontend
        run: |
          curl -f https://yourdomain.com || exit 1

      - name: Test backend health
        run: |
          curl -f https://api.yourdomain.com/health || exit 1

      - name: Test API endpoint
        run: |
          response=$(curl -sf https://api.yourdomain.com/api/users)
          echo "$response"

1.2. Add GitHub Secrets

  1. Go to GitHub repository → Settings → Secrets and variables → Actions
  2. Click "New repository secret"
  3. Add secrets:
NETLIFY_AUTH_TOKEN = your_netlify_personal_access_token
NETLIFY_SITE_ID = your_netlify_site_id
EC2_HOST = 54.123.45.67
EC2_SSH_KEY = (paste your SSH private key)
AWS_ACCESS_KEY_ID = AKIA...
AWS_SECRET_ACCESS_KEY = your_secret_key
DB_HOST = your-db.xxxxx.rds.amazonaws.com
DB_PORT = 5432
DB_NAME = myapp_db
DB_USERNAME = dbadmin
DB_PASSWORD = your_db_password
REACT_APP_API_URL = https://api.yourdomain.com

Get Netlify tokens:

# Get Netlify personal access token
# Visit: https://app.netlify.com/user/applications#personal-access-tokens

# Get site ID
netlify sites:list
# Or from Netlify dashboard: Site settings → General → Site details → API ID

2. GitLab CI/CD

Create .gitlab-ci.yml:

image: node:20

stages:
  - test
  - build
  - deploy

variables:
  NODE_ENV: production

# Cache node modules
cache:
  paths:
    - frontend/node_modules/
    - backend/node_modules/

# Test frontend
test-frontend:
  stage: test
  script:
    - cd frontend
    - npm ci
    - npm test
  only:
    - main
    - merge_requests

# Test backend
test-backend:
  stage: test
  script:
    - cd backend
    - npm ci
    - npm test
  only:
    - main
    - merge_requests

# Build frontend
build-frontend:
  stage: build
  script:
    - cd frontend
    - npm ci
    - npm run build
  artifacts:
    paths:
      - frontend/build/
  only:
    - main

# Deploy frontend to Netlify
deploy-frontend:
  stage: deploy
  image: node:20
  before_script:
    - npm install -g netlify-cli
  script:
    - cd frontend
    - netlify deploy --prod --dir=build --auth=$NETLIFY_AUTH_TOKEN --site=$NETLIFY_SITE_ID
  only:
    - main
  environment:
    name: production
    url: https://yourdomain.com

# Deploy backend to EC2
deploy-backend:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$EC2_SSH_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H $EC2_HOST >> ~/.ssh/known_hosts
  script:
    - ssh ubuntu@$EC2_HOST "
        cd /var/www/backend &&
        git pull origin main &&
        npm install --production &&
        npm run build &&
        pm2 restart backend-api
      "
  only:
    - main
  environment:
    name: production
    url: https://api.yourdomain.com

Add GitLab CI/CD Variables:

  1. Go to Project → Settings → CI/CD → Variables
  2. Add variables (same as GitHub secrets)

3. Bitbucket Pipelines

Create bitbucket-pipelines.yml:

image: node:20

definitions:
  caches:
    npm: ~/.npm

pipelines:
  default:
    - step:
        name: Test Frontend
        caches:
          - npm
        script:
          - cd frontend
          - npm ci
          - npm test

    - step:
        name: Test Backend
        caches:
          - npm
        script:
          - cd backend
          - npm ci
          - npm test

  branches:
    main:
      - step:
          name: Build Frontend
          caches:
            - npm
          script:
            - cd frontend
            - npm ci
            - npm run build
          artifacts:
            - frontend/build/**

      - step:
          name: Deploy to Netlify
          script:
            - npm install -g netlify-cli
            - cd frontend
            - netlify deploy --prod --dir=build --auth=$NETLIFY_AUTH_TOKEN --site=$NETLIFY_SITE_ID

      - step:
          name: Deploy Backend to EC2
          script:
            - pipe: atlassian/ssh-run:0.4.1
              variables:
                SSH_USER: 'ubuntu'
                SERVER: $EC2_HOST
                SSH_KEY: $EC2_SSH_KEY
                COMMAND: >
                  cd /var/www/backend &&
                  git pull origin main &&
                  npm install --production &&
                  npm run build &&
                  pm2 restart backend-api

4. Automated Testing in CI/CD

4.1. Unit Tests

Frontend (Jest + React Testing Library):

// src/components/Button.test.js
import { render, screen, fireEvent } from '@testing-library/react';
import Button from './Button';

test('renders button with text', () => {
  render(<Button>Click me</Button>);
  const button = screen.getByText(/click me/i);
  expect(button).toBeInTheDocument();
});

test('calls onClick when clicked', () => {
  const handleClick = jest.fn();
  render(<Button onClick={handleClick}>Click me</Button>);
  fireEvent.click(screen.getByText(/click me/i));
  expect(handleClick).toHaveBeenCalledTimes(1);
});

Backend (Jest):

// tests/api/users.test.js
const request = require('supertest');
const app = require('../app');

describe('GET /api/users', () => {
  it('should return list of users', async () => {
    const res = await request(app)
      .get('/api/users')
      .expect('Content-Type', /json/)
      .expect(200);

    expect(res.body).toHaveProperty('users');
    expect(Array.isArray(res.body.users)).toBe(true);
  });
});

describe('POST /api/auth/login', () => {
  it('should login with valid credentials', async () => {
    const res = await request(app)
      .post('/api/auth/login')
      .send({
        email: '[email protected]',
        password: 'password123'
      })
      .expect(200);

    expect(res.body).toHaveProperty('token');
  });

  it('should reject invalid credentials', async () => {
    const res = await request(app)
      .post('/api/auth/login')
      .send({
        email: '[email protected]',
        password: 'wrong'
      })
      .expect(401);

    expect(res.body).toHaveProperty('error');
  });
});

4.2. Integration Tests

// tests/integration/user-flow.test.js
const request = require('supertest');
const app = require('../app');

describe('User registration and login flow', () => {
  let authToken;

  it('should register new user', async () => {
    const res = await request(app)
      .post('/api/auth/register')
      .send({
        email: '[email protected]',
        password: 'password123',
        username: 'newuser'
      })
      .expect(201);

    expect(res.body).toHaveProperty('user');
  });

  it('should login user', async () => {
    const res = await request(app)
      .post('/api/auth/login')
      .send({
        email: '[email protected]',
        password: 'password123'
      })
      .expect(200);

    authToken = res.body.token;
    expect(authToken).toBeDefined();
  });

  it('should access protected route with token', async () => {
    const res = await request(app)
      .get('/api/profile')
      .set('Authorization', `Bearer ${authToken}`)
      .expect(200);

    expect(res.body).toHaveProperty('email', '[email protected]');
  });
});

4.3. E2E Tests (Playwright)

Install Playwright:

npm install -D @playwright/test
npx playwright install

Create tests/e2e/login.spec.js:

const { test, expect } = require('@playwright/test');

test.describe('Login flow', () => {
  test('should login successfully', async ({ page }) => {
    // Navigate to login page
    await page.goto('https://yourdomain.com/login');

    // Fill form
    await page.fill('input[name="email"]', '[email protected]');
    await page.fill('input[name="password"]', 'password123');

    // Click login button
    await page.click('button[type="submit"]');

    // Wait for navigation
    await page.waitForURL('https://yourdomain.com/dashboard');

    // Verify logged in
    await expect(page.locator('h1')).toContainText('Dashboard');
  });

  test('should show error for invalid credentials', async ({ page }) => {
    await page.goto('https://yourdomain.com/login');

    await page.fill('input[name="email"]', '[email protected]');
    await page.fill('input[name="password"]', 'wrong');
    await page.click('button[type="submit"]');

    await expect(page.locator('.error')).toContainText('Invalid credentials');
  });
});

Run in CI:

- name: Run E2E tests
  run: npx playwright test

Monitoring and Logging

1. CloudWatch Monitoring

1.1. EC2 Metrics

Default metrics (5-minute intervals):

  • CPUUtilization
  • DiskReadOps, DiskWriteOps
  • NetworkIn, NetworkOut
  • StatusCheckFailed

Enable detailed monitoring (1-minute intervals):

# Via AWS CLI
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

Install CloudWatch Agent for custom metrics:

# Download CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb

# Install
sudo dpkg -i amazon-cloudwatch-agent.deb

# Configure
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

# Start agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m ec2 \
  -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json

Create config manually (/opt/aws/amazon-cloudwatch-agent/bin/config.json):

{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "metrics": {
    "namespace": "MyApp",
    "metrics_collected": {
      "mem": {
        "measurement": [
          {
            "name": "mem_used_percent",
            "rename": "MemoryUtilization",
            "unit": "Percent"
          }
        ],
        "metrics_collection_interval": 60
      },
      "disk": {
        "measurement": [
          {
            "name": "used_percent",
            "rename": "DiskUtilization",
            "unit": "Percent"
          }
        ],
        "metrics_collection_interval": 60,
        "resources": ["*"]
      }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/backend/app.log",
            "log_group_name": "/backend/app",
            "log_stream_name": "{instance_id}"
          },
          {
            "file_path": "/var/log/nginx/error.log",
            "log_group_name": "/nginx/error",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}

1.2. RDS Monitoring

Enhanced Monitoring:

Enabled during RDS creation or:

aws rds modify-db-instance \
  --db-instance-identifier my-app-db \
  --monitoring-interval 60 \
  --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role

Performance Insights:

  1. Go to RDS → Databases → Select database
  2. Configuration → Monitoring → Enable Performance Insights
  3. Retention period: 7 days (free) or longer (paid)

View metrics:

  • CPU utilization
  • Database connections
  • Read/Write IOPS
  • Freeable memory
  • Disk queue depth

1.3. Lambda Monitoring

Automatic metrics:

  • Invocations
  • Duration
  • Errors
  • Throttles
  • Concurrent executions

Add custom metrics:

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

async function putMetric(metricName, value) {
  await cloudwatch.putMetricData({
    Namespace: 'MyApp/Lambda',
    MetricData: [
      {
        MetricName: metricName,
        Value: value,
        Unit: 'Count',
        Timestamp: new Date()
      }
    ]
  }).promise();
}

// Usage in Lambda
exports.handler = async (event) => {
  await putMetric('CustomMetric', 1);
  // ... rest of handler
};

1.4. Create CloudWatch Alarms

CPU Alarm (EC2):

aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-utilization \
  --alarm-description "Alarm when CPU exceeds 80%" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts

Database Connections Alarm (RDS):

aws cloudwatch put-metric-alarm \
  --alarm-name high-db-connections \
  --metric-name DatabaseConnections \
  --namespace AWS/RDS \
  --statistic Average \
  --period 60 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --dimensions Name=DBInstanceIdentifier,Value=my-app-db \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts

Lambda Errors Alarm:

aws cloudwatch put-metric-alarm \
  --alarm-name lambda-errors \
  --metric-name Errors \
  --namespace AWS/Lambda \
  --statistic Sum \
  --period 60 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --dimensions Name=FunctionName,Value=my-function \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts

1.5. CloudWatch Dashboards

Create custom dashboard:

  1. Go to CloudWatch → Dashboards → Create dashboard
  2. Add widgets:
    • Line graph (CPU, Memory, Network)
    • Number (Current connections)
    • Log insights query results

Via AWS CLI:

aws cloudwatch put-dashboard \
  --dashboard-name MyAppDashboard \
  --dashboard-body file://dashboard.json

dashboard.json:

{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "metrics": [
          ["AWS/EC2", "CPUUtilization", {"stat": "Average"}]
        ],
        "period": 300,
        "stat": "Average",
        "region": "us-east-1",
        "title": "EC2 CPU Utilization"
      }
    }
  ]
}

2. Application Logging

2.1. Structured Logging (Node.js with Winston)

Install Winston:

npm install winston

Create logger.js:

const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: 'backend-api',
    environment: process.env.NODE_ENV
  },
  transports: [
    // Console output
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      )
    }),
    // File output - errors
    new winston.transports.File({
      filename: '/var/log/backend/error.log',
      level: 'error',
      maxsize: 10485760, // 10MB
      maxFiles: 5
    }),
    // File output - all logs
    new winston.transports.File({
      filename: '/var/log/backend/combined.log',
      maxsize: 10485760,
      maxFiles: 10
    })
  ]
});

module.exports = logger;

Use in application:

const logger = require('./logger');

// Info log
logger.info('User logged in', { userId: 123, email: '[email protected]' });

// Error log
logger.error('Database connection failed', { error: err.message, stack: err.stack });

// Warning
logger.warn('High memory usage', { memoryUsage: process.memoryUsage() });

// Debug (only in development)
logger.debug('API request received', { method: 'GET', path: '/api/users' });

2.2. Request Logging Middleware

const logger = require('./logger');

function requestLogger(req, res, next) {
  const start = Date.now();

  res.on('finish', () => {
    const duration = Date.now() - start;

    logger.info('HTTP Request', {
      method: req.method,
      url: req.url,
      statusCode: res.statusCode,
      duration: `${duration}ms`,
      userAgent: req.get('user-agent'),
      ip: req.ip,
      userId: req.user?.id
    });
  });

  next();
}

app.use(requestLogger);

2.3. Error Logging Middleware

function errorLogger(err, req, res, next) {
  logger.error('Unhandled error', {
    error: err.message,
    stack: err.stack,
    url: req.url,
    method: req.method,
    body: req.body,
    userId: req.user?.id
  });

  res.status(500).json({
    error: 'Internal server error',
    message: process.env.NODE_ENV === 'development' ? err.message : undefined
  });
}

app.use(errorLogger);

3. Log Analysis

3.1. CloudWatch Logs Insights

Query examples:

Find errors in last hour:

fields @timestamp, @message, error
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100

Count requests by status code:

fields statusCode
| stats count(*) as request_count by statusCode
| sort request_count desc

Average response time (note: the request logger above records duration as a string like "123ms"; log a numeric field for these aggregations to work):

fields duration
| stats avg(duration) as avg_duration, max(duration) as max_duration

Find slow queries:

fields @timestamp, url, duration
| filter duration > 1000
| sort duration desc
| limit 50

3.2. ELK Stack (Elasticsearch, Logstash, Kibana)

Install on EC2:

# Install Java
sudo apt install -y openjdk-11-jdk

# Add Elastic repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Install Elasticsearch
sudo apt update
sudo apt install -y elasticsearch

# Configure Elasticsearch
sudo nano /etc/elasticsearch/elasticsearch.yml
# Set: network.host: localhost

# Start Elasticsearch
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch

# Install Kibana
sudo apt install -y kibana

# Configure Kibana
sudo nano /etc/kibana/kibana.yml
# Set: server.host: "0.0.0.0"

# Start Kibana
sudo systemctl start kibana
sudo systemctl enable kibana

# Install Logstash
sudo apt install -y logstash

# Create Logstash config
sudo nano /etc/logstash/conf.d/backend.conf

Logstash config:

input {
  file {
    path => "/var/log/backend/combined.log"
    codec => "json"
  }
}

filter {
  # Parse and enrich logs
  if [level] == "error" {
    mutate {
      add_tag => ["error"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "backend-logs-%{+YYYY.MM.dd}"
  }
}

Start Logstash:

sudo systemctl start logstash
sudo systemctl enable logstash

Access Kibana: http://YOUR_EC2_IP:5601


Backup and Disaster Recovery

1. Database Backups

1.1. RDS Automated Backups

Configure during creation or modify:

aws rds modify-db-instance \
  --db-instance-identifier my-app-db \
  --backup-retention-period 7 \
  --preferred-backup-window "03:00-04:00" \
  --apply-immediately

Settings:

  • Retention: 1-35 days (7-14 recommended for production)
  • Backup window: Low-traffic time
  • Point-in-time recovery: Automatically enabled

1.2. Manual Snapshots

Create snapshot:

aws rds create-db-snapshot \
  --db-instance-identifier my-app-db \
  --db-snapshot-identifier my-app-db-snapshot-$(date +%Y%m%d)

Restore from snapshot:

aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-app-db-restored \
  --db-snapshot-identifier my-app-db-snapshot-20250119

1.3. Export to S3

PostgreSQL:

# Export database
pg_dump -h my-app-db.xxxxx.rds.amazonaws.com \
        -U dbadmin \
        -d myapp_db \
        -F c \
        -f backup-$(date +%Y%m%d).dump

# Upload to S3
aws s3 cp backup-$(date +%Y%m%d).dump s3://my-backups/database/

# Automated script
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_FILE="backup-$DATE.dump"

# Assumes PGPASSWORD is set or ~/.pgpass is configured
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -F c -f "$BACKUP_FILE"
aws s3 cp "$BACKUP_FILE" s3://my-backups/database/
rm "$BACKUP_FILE"

# Delete old backups (keep 30 days)
# (an S3 lifecycle expiration rule is a simpler alternative)
aws s3 ls s3://my-backups/database/ | while read -r line; do
  createDate=$(echo "$line" | awk '{print $1" "$2}')
  createDate=$(date -d "$createDate" +%s)
  olderThan=$(date -d "30 days ago" +%s)
  if [[ $createDate -lt $olderThan ]]; then
    fileName=$(echo "$line" | awk '{print $4}')
    aws s3 rm "s3://my-backups/database/$fileName"
  fi
done

Schedule with cron:

crontab -e

# Daily backup at 2 AM
0 2 * * * /home/ubuntu/scripts/backup-db.sh

2. Application Backups

2.1. EC2 AMI (Amazon Machine Image)

Create AMI:

aws ec2 create-image \
  --instance-id i-1234567890abcdef0 \
  --name "backend-ami-$(date +%Y%m%d)" \
  --description "Backend server backup" \
  --no-reboot

Launch from AMI:

aws ec2 run-instances \
  --image-id ami-xxxxxxxxx \
  --instance-type t3.small \
  --key-name my-backend-key \
  --security-group-ids sg-xxxxx \
  --subnet-id subnet-xxxxx

2.2. EBS Snapshots

Create snapshot:

aws ec2 create-snapshot \
  --volume-id vol-xxxxxxxxx \
  --description "Backend volume backup $(date +%Y%m%d)"

Automated snapshots with Data Lifecycle Manager:

  1. Go to EC2 → Lifecycle Manager → Create policy
  2. Configure:
    • Resource type: Volume
    • Target tags: Environment=production
    • Schedule: Daily at 2:00 AM
    • Retention: 7 days

2.3. Application Code Backup

Git repository (already backed up):

  • GitHub/GitLab/Bitbucket provides automatic backups
  • Ensure code is committed and pushed regularly

Environment files backup:

# Backup .env and configs to S3
aws s3 cp /var/www/backend/.env s3://my-backups/configs/backend-env-$(date +%Y%m%d)
aws s3 cp /etc/nginx/sites-available/backend s3://my-backups/configs/nginx-config-$(date +%Y%m%d)

# Encrypt sensitive files
gpg -c /var/www/backend/.env
aws s3 cp /var/www/backend/.env.gpg s3://my-backups/configs/

3. Disaster Recovery Plan

3.1. Recovery Time Objective (RTO) and Recovery Point Objective (RPO)

Define targets:

  • RTO: Maximum acceptable downtime (e.g., 4 hours)
  • RPO: Maximum acceptable data loss (e.g., 1 hour)
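These targets translate directly into checks you can automate: the newest backup must never be older than the RPO. A small sketch, using the example 1-hour RPO above:

```javascript
// Returns true if the most recent backup satisfies the RPO
function meetsRpo(lastBackupTime, now, rpoHours) {
  const ageHours = (now - lastBackupTime) / (1000 * 60 * 60);
  return ageHours <= rpoHours;
}

// Example: RPO of 1 hour, backup taken 30 minutes ago
const lastBackup = new Date('2025-01-19T02:00:00Z');
const now = new Date('2025-01-19T02:30:00Z');
console.log(meetsRpo(lastBackup, now, 1)); // true: backup is 30 minutes old

module.exports = { meetsRpo };
```

Wiring this to the timestamp of the latest RDS snapshot (or the newest S3 backup object) and alerting on `false` gives a cheap continuous RPO check.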

3.2. Multi-Region Setup (Advanced)

Database replication:

# Create read replica in different region
aws rds create-db-instance-read-replica \
  --db-instance-identifier my-app-db-replica \
  --source-db-instance-identifier my-app-db \
  --source-region us-east-1 \
  --region eu-west-1

S3 cross-region replication:

Enable versioning and replication:

# Enable versioning on source bucket
aws s3api put-bucket-versioning \
  --bucket my-app-uploads \
  --versioning-configuration Status=Enabled

# Create replication configuration
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "Destination": {
        "Bucket": "arn:aws:s3:::my-app-uploads-replica",
        "ReplicationTime": {
          "Status": "Enabled",
          "Time": {
            "Minutes": 15
          }
        }
      }
    }
  ]
}

3.3. Backup Testing

Monthly recovery drill:

  1. Restore database from snapshot to test instance
  2. Verify data integrity
  3. Test application connectivity
  4. Document recovery time
  5. Update procedures if needed

Checklist:

□ Database restored successfully
□ All tables present
□ Data up-to-date (within RPO)
□ Application connects to restored DB
□ All features work correctly
□ Performance acceptable
□ Security configurations correct
□ Recovery time within RTO

Performance Optimization

1. Frontend Optimization

1.1. Code Splitting

React (using React.lazy):

import React, { lazy, Suspense } from 'react';

// Lazy load components
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Profile = lazy(() => import('./pages/Profile'));

function App() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/profile" element={<Profile />} />
      </Routes>
    </Suspense>
  );
}

Next.js (automatic):

Next.js automatically code-splits each page.

Manual optimization:

import dynamic from 'next/dynamic';

const HeavyComponent = dynamic(() => import('../components/HeavyComponent'), {
  loading: () => <p>Loading...</p>,
  ssr: false  // Disable server-side rendering for this component
});

1.2. Image Optimization

Use Next.js Image component:

import Image from 'next/image';

<Image
  src="/hero.jpg"
  alt="Hero image"
  width={1200}
  height={600}
  priority  // Preload important images
  placeholder="blur"  // Show blur while loading
/>

Lazy load images (vanilla JS):

<!-- Native lazy loading: the browser defers the fetch until the image nears the viewport.
     (The data-src/placeholder pattern is only needed for JS-based lazy loaders.) -->
<img
  src="actual-image.jpg"
  loading="lazy"
  alt="Description"
/>

Use WebP format:

# Convert images to WebP
for img in *.jpg; do
  cwebp -q 80 "$img" -o "${img%.jpg}.webp"
done

Serve with picture element:

<picture>
  <source srcset="image.webp" type="image/webp">
  <source srcset="image.jpg" type="image/jpeg">
  <img src="image.jpg" alt="Fallback">
</picture>

1.3. Minification and Compression

Netlify automatic optimization:

Enable in Site configurationBuild & deployPost processing:

  • Bundle CSS
  • Minify CSS
  • Minify JS
  • Compress images

Manual compression (gzip/brotli):

Already configured in Netlify. For custom server:

# Nginx gzip configuration
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/json application/javascript;

# Brotli (requires module)
brotli on;
brotli_types text/plain text/css text/xml text/javascript application/json;

1.4. Caching Strategy

Service Worker (PWA):

// service-worker.js
const CACHE_NAME = 'my-app-v1';
const urlsToCache = [
  '/',
  '/static/css/main.css',
  '/static/js/main.js'
];

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then((cache) => cache.addAll(urlsToCache))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request)
      .then((response) => response || fetch(event.request))
  );
});

Browser caching headers (Netlify):

[[headers]]
  for = "/*.js"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"

[[headers]]
  for = "/*.css"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"

[[headers]]
  for = "/images/*"
  [headers.values]
    Cache-Control = "public, max-age=2592000"  # 30 days

2. Backend Optimization

2.1. Database Query Optimization

Add indexes:

-- Find missing indexes
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE schemaname = 'public';

-- Create indexes on frequently queried columns
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);

-- Composite index for multiple columns
CREATE INDEX idx_posts_user_date ON posts(user_id, created_at DESC);

-- Partial index (only for specific condition)
CREATE INDEX idx_published_posts ON posts(created_at) WHERE published = true;

Use query explain:

EXPLAIN ANALYZE SELECT * FROM posts WHERE user_id = 123 ORDER BY created_at DESC LIMIT 10;

Optimize N+1 queries:

Bad (N+1):

// Fetches users, then for each user fetches posts (N queries)
const users = await User.findAll();
for (const user of users) {
  user.posts = await Post.findAll({ where: { userId: user.id } });
}

Good (eager loading):

// Single query with JOIN
const users = await User.findAll({
  include: [{ model: Post }]
});

2.2. Caching

Redis caching:

Install Redis:

npm install redis

Implement caching:

const redis = require('redis');

// node-redis v4+: connection options live under `socket`, and the client
// must be connected before use
const client = redis.createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT
  },
  password: process.env.REDIS_PASSWORD
});
client.connect().catch(console.error);

// Cache middleware
async function cacheMiddleware(req, res, next) {
  const key = `cache:${req.url}`;

  try {
    const cached = await client.get(key);
    if (cached) {
      return res.json(JSON.parse(cached));
    }
    next();
  } catch (err) {
    next();
  }
}

// Route with caching
app.get('/api/users', cacheMiddleware, async (req, res) => {
  const users = await User.findAll();

  // Cache for 5 minutes (setEx replaces setex in node-redis v4)
  await client.setEx(`cache:${req.url}`, 300, JSON.stringify(users));

  res.json(users);
});

In-memory caching (Node.js):

const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // 5 minutes TTL

function getCachedData(key, fetchFn) {
  const cached = cache.get(key);
  if (cached) {
    return Promise.resolve(cached);
  }

  return fetchFn().then((data) => {
    cache.set(key, data);
    return data;
  });
}

// Usage
app.get('/api/users', async (req, res) => {
  const users = await getCachedData('users', () => User.findAll());
  res.json(users);
});

2.3. Connection Pooling

PostgreSQL (pg):

const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20,  // Maximum connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Use pool instead of creating new clients
pool.query('SELECT * FROM users', (err, res) => {
  console.log(res.rows);
});

2.4. Async Processing

Use job queues for heavy tasks:

Install Bull:

npm install bull

Create queue:

const Queue = require('bull');
const emailQueue = new Queue('email', {
  redis: {
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT
  }
});

// Add job to queue
app.post('/api/send-email', async (req, res) => {
  await emailQueue.add({
    to: req.body.email,
    subject: 'Welcome',
    body: 'Welcome to our app!'
  });

  res.json({ message: 'Email queued' });
});

// Process jobs
emailQueue.process(async (job) => {
  await sendEmail(job.data);
});

3. CDN and Asset Delivery

Netlify CDN (automatic):

Netlify automatically:

  • Serves assets from 200+ edge locations
  • Compresses assets (gzip/brotli)
  • Caches static files
  • Provides instant cache invalidation

CloudFront for API (optional):

  1. Create CloudFront distribution
  2. Origin: API Gateway or Load Balancer
  3. Cache behavior:
    • Cache GET requests
    • Forward headers: Authorization
    • TTL: 0-300 seconds
  4. Use CloudFront URL or custom domain
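The console steps above map roughly onto this cache-behavior fragment of a CloudFront distribution config (a sketch only; the origin ID is a placeholder, and the TTLs mirror the 0-300 second range suggested above):

```json
{
  "TargetOriginId": "api-origin",
  "ViewerProtocolPolicy": "redirect-to-https",
  "AllowedMethods": {
    "Quantity": 7,
    "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
    "CachedMethods": { "Quantity": 2, "Items": ["GET", "HEAD"] }
  },
  "ForwardedValues": {
    "QueryString": true,
    "Cookies": { "Forward": "none" },
    "Headers": { "Quantity": 1, "Items": ["Authorization"] }
  },
  "MinTTL": 0,
  "DefaultTTL": 60,
  "MaxTTL": 300
}
```

Forwarding the Authorization header keeps authenticated responses from being served to the wrong user, at the cost of lower cache hit rates.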

4. Load Balancing and Auto Scaling

Application Load Balancer health checks:

# Configure health check
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:... \
  --health-check-enabled \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --health-check-timeout-seconds 5 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3

Auto Scaling Group:

# Create launch template
aws ec2 create-launch-template \
  --launch-template-name backend-template \
  --version-description "Backend v1" \
  --launch-template-data '{
    "ImageId": "ami-xxxxxxxxx",
    "InstanceType": "t3.small",
    "KeyName": "my-backend-key",
    "SecurityGroupIds": ["sg-xxxxx"],
    "UserData": "base64-encoded-startup-script"
  }'

# Create Auto Scaling Group
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name backend-asg \
  --launch-template LaunchTemplateName=backend-template \
  --min-size 2 \
  --max-size 10 \
  --desired-capacity 2 \
  --target-group-arns arn:aws:elasticloadbalancing:... \
  --health-check-type ELB \
  --health-check-grace-period 300 \
  --vpc-zone-identifier "subnet-xxxxx,subnet-yyyyy"

# Create scaling policy
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name backend-asg \
  --policy-name scale-on-cpu \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 70.0
  }'

Security Best Practices

1. Frontend Security

1.1. Content Security Policy (CSP)

Netlify configuration (netlify.toml):

[[headers]]
  for = "/*"
  [headers.values]
    Content-Security-Policy = """
      default-src 'self';
      script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net;
      style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
      font-src 'self' https://fonts.gstatic.com;
      img-src 'self' data: https: blob:;
      connect-src 'self' https://api.yourdomain.com;
      frame-ancestors 'none';
      base-uri 'self';
      form-action 'self';
    """

React/Next.js meta tag:

<Head>
  <meta httpEquiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' 'unsafe-inline';" />
</Head>

1.2. XSS Protection

Sanitize user input:

npm install dompurify
import DOMPurify from 'dompurify';

function SafeContent({ html }) {
  return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />;
}

Escape output:

function escapeHtml(unsafe) {
  return unsafe
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}

1.3. CSRF Protection

Use CSRF tokens:

// Backend: Generate token
// Note: the csurf package is deprecated and unmaintained; for new projects,
// prefer a maintained alternative such as csrf-csrf
const csrf = require('csurf');
const csrfProtection = csrf({ cookie: true });

app.get('/api/form', csrfProtection, (req, res) => {
  res.json({ csrfToken: req.csrfToken() });
});

app.post('/api/submit', csrfProtection, (req, res) => {
  // Process form
});

// Frontend: Include token
const response = await fetch('/api/form');
const { csrfToken } = await response.json();

await fetch('/api/submit', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'CSRF-Token': csrfToken
  },
  body: JSON.stringify(data)
});

1.4. Prevent Clickjacking

X-Frame-Options header:

[[headers]]
  for = "/*"
  [headers.values]
    X-Frame-Options = "DENY"
    # or "SAMEORIGIN" to allow same-origin framing

2. Backend Security

2.1. Input Validation

Use validation library:

npm install joi
const Joi = require('joi');

const userSchema = Joi.object({
  email: Joi.string().email().required(),
  password: Joi.string().min(8).required(),
  username: Joi.string().alphanum().min(3).max(30).required()
});

app.post('/api/register', async (req, res) => {
  try {
    const value = await userSchema.validateAsync(req.body);
    // Process validated data
  } catch (err) {
    return res.status(400).json({ error: err.details[0].message });
  }
});

2.2. SQL Injection Prevention

Use parameterized queries:

// ✗ BAD - Vulnerable to SQL injection
const query = `SELECT * FROM users WHERE email = '${email}'`;

// ✓ GOOD - Parameterized query
const query = 'SELECT * FROM users WHERE email = $1';
const result = await pool.query(query, [email]);

// ✓ GOOD - ORM (Sequelize)
const user = await User.findOne({ where: { email: email } });

2.3. Password Security

Hash passwords with bcrypt:

npm install bcrypt
const bcrypt = require('bcrypt');

// Hash password on registration
async function hashPassword(password) {
  const saltRounds = 10;
  return await bcrypt.hash(password, saltRounds);
}

// Verify password on login
async function verifyPassword(password, hash) {
  return await bcrypt.compare(password, hash);
}

// Usage
app.post('/api/register', async (req, res) => {
  const { email, password } = req.body;
  const hashedPassword = await hashPassword(password);
  
  await User.create({ email, password: hashedPassword });
  res.json({ message: 'User created' });
});

app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await User.findOne({ where: { email } });
  
  if (!user || !(await verifyPassword(password, user.password))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  
  // Generate JWT token
  const token = jwt.sign({ userId: user.id }, process.env.JWT_SECRET);
  res.json({ token });
});

2.4. JWT Security

Secure JWT implementation:

const jwt = require('jsonwebtoken');

// Generate token
function generateToken(user) {
  return jwt.sign(
    { 
      userId: user.id,
      email: user.email 
    },
    process.env.JWT_SECRET,
    { 
      expiresIn: '7d',
      issuer: 'yourdomain.com',
      audience: 'yourdomain.com'
    }
  );
}

// Verify token middleware
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  jwt.verify(token, process.env.JWT_SECRET, (err, decoded) => {
    if (err) {
      return res.status(403).json({ error: 'Invalid token' });
    }
    req.user = decoded;
    next();
  });
}

// Protected route
app.get('/api/profile', authenticateToken, async (req, res) => {
  const user = await User.findByPk(req.user.userId);
  res.json(user);
});

2.5. Rate Limiting

Express rate limiter:

npm install express-rate-limit
const rateLimit = require('express-rate-limit');

// General rate limiter
const generalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later',
  standardHeaders: true,
  legacyHeaders: false,
});

// Login rate limiter (stricter)
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 5, // 5 login attempts per 15 minutes
  skipSuccessfulRequests: true,
  message: 'Too many login attempts, please try again later'
});

// Apply to routes
app.use('/api/', generalLimiter);
app.post('/api/login', loginLimiter, loginHandler);

2.6. HTTPS Enforcement

Redirect HTTP to HTTPS:

// Express middleware
function requireHTTPS(req, res, next) {
  if (req.secure || req.headers['x-forwarded-proto'] === 'https') {
    return next();
  }
  res.redirect('https://' + req.headers.host + req.url);
}

app.use(requireHTTPS);

Nginx:

server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

2.7. Dependency Security

Regular audits:

# Check for vulnerabilities
npm audit

# Fix vulnerabilities
npm audit fix

# Update dependencies
npm update

# Check outdated packages
npm outdated

Use Snyk for continuous monitoring:

npm install -g snyk
snyk auth
snyk test  # Test for vulnerabilities
snyk monitor  # Continuous monitoring

3. AWS Security

3.1. IAM Best Practices

Principle of least privilege:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
    }
  ]
}

Enable MFA:

  1. Go to IAMUsers → Select user
  2. Security credentialsAssigned MFA deviceManage
  3. Choose Virtual MFA device
  4. Scan QR code with authenticator app
  5. Enter two consecutive codes

Use IAM roles instead of access keys:

// No need to configure credentials
const AWS = require('aws-sdk');
const s3 = new AWS.S3(); // Automatically uses IAM role

3.2. Security Groups

Restrict SSH access:

# Allow SSH only from your IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxx \
  --protocol tcp \
  --port 22 \
  --cidr YOUR_IP/32

Minimum required ports:

Port 22:  SSH (from your IP only)
Port 80:  HTTP (0.0.0.0/0)
Port 443: HTTPS (0.0.0.0/0)
Port 3000-8000: Application ports (from ALB security group only)

3.3. Secrets Management

Use AWS Secrets Manager:

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });

async function getSecret(secretName) {
  const data = await secretsManager.getSecretValue({ 
    SecretId: secretName 
  }).promise();
  
  return JSON.parse(data.SecretString);
}

// Usage
const secrets = await getSecret('prod/backend/db');
const dbPassword = secrets.password;

Rotate secrets regularly:

  1. Go to Secrets Manager → Select secret
  2. Rotation configurationEdit rotation
  3. Enable automatic rotation
  4. Choose rotation Lambda function
  5. Set rotation schedule (30-90 days)

3.4. VPC Security

Private subnets for databases:

# Create private subnet
aws ec2 create-subnet \
  --vpc-id vpc-xxxxx \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a

# Place RDS in private subnet (no internet access)
# Only accessible from EC2/Lambda in same VPC

Network ACLs:

# Create network ACL
aws ec2 create-network-acl --vpc-id vpc-xxxxx

# Add rules
aws ec2 create-network-acl-entry \
  --network-acl-id acl-xxxxx \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 0.0.0.0/0 \
  --egress \
  --rule-action allow

3.5. Enable AWS Config

Track configuration changes:

  1. Go to AWS ConfigGet started
  2. Select resources to record
  3. Choose S3 bucket for storing configurations
  4. Create SNS topic for notifications
  5. Confirm

3.6. Enable AWS GuardDuty

Threat detection:

  1. Go to GuardDutyGet started
  2. Enable GuardDuty
  3. Configure findings export to S3
  4. Set up CloudWatch Events for alerts

Troubleshooting Guide

1. Frontend Issues

1.1. Build Failures

Issue: "Module not found"

Solution:

# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install

# Or use npm ci for clean install
npm ci

Issue: "Out of memory during build"

Solution:

# Increase Node memory
NODE_OPTIONS="--max-old-space-size=4096" npm run build

# Or in package.json
"scripts": {
  "build": "NODE_OPTIONS='--max-old-space-size=4096' react-scripts build"
}

Issue: Environment variables not working

Solution:

  • Ensure variables start with REACT_APP_, NEXT_PUBLIC_, or VUE_APP_
  • Rebuild after adding new variables
  • Check Netlify environment variables are set correctly
  • Restart development server after changes

1.2. Deployment Issues

Issue: 404 on page refresh (SPA)

Solution:

# Add to netlify.toml
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200

Issue: Assets not loading

Solution:

  • Check PUBLIC_URL or publicPath configuration
  • Verify asset paths are relative
  • Check browser console for CORS errors
  • Ensure assets are in public/ or static/ folder

Issue: Deploy preview not updating

Solution:

  • Clear Netlify cache: DeploysTrigger deployClear cache and deploy site
  • Check build logs for errors
  • Verify git branch is correct

1.3. API Connection Issues

Issue: CORS errors

Solution:

// Backend: Add CORS headers
app.use(cors({
  origin: 'https://yourdomain.com',
  credentials: true
}));

// Frontend: Include credentials
fetch('https://api.yourdomain.com/api/users', {
  credentials: 'include'
});

Issue: "Failed to fetch" or "Network error"

Solution:

  • Check API URL in environment variables
  • Verify backend is running and accessible
  • Check browser console for specific error
  • Test API with curl:
    curl -v https://api.yourdomain.com/health
  • Check SSL certificate is valid

2. Backend Issues

2.1. EC2 Connection Issues

Issue: Cannot SSH into EC2

Solution:

# Check security group allows SSH from your IP
aws ec2 describe-security-groups --group-ids sg-xxxxx

# Verify key permissions
chmod 400 ~/.ssh/my-key.pem

# Check instance is running
aws ec2 describe-instances --instance-ids i-xxxxx

# Use EC2 Instance Connect (browser-based)
# AWS Console → EC2 → Instance → Connect

Issue: "Connection timeout"

Solution:

  • Check security group inbound rules
  • Verify instance has public IP
  • Check Network ACLs
  • Verify route table has internet gateway

2.2. Application Errors

Issue: Application not starting

Solution:

# Check application logs
pm2 logs backend-api

# Or systemd logs
sudo journalctl -u backend -n 100 --no-pager

# Check port is not already in use
sudo lsof -i :3000

# Kill process on port
sudo kill -9 $(sudo lsof -t -i:3000)

# Check environment variables
printenv | grep DB_

Issue: High memory usage

Solution:

# Check memory usage
free -h

# Restart application
pm2 restart backend-api

# Increase swap space
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Issue: Database connection errors

Solution:

# Test database connection
psql -h your-db.rds.amazonaws.com -U dbadmin -d myapp_db

# Check security group allows connection
# Verify database is running
# Check credentials in .env file
# Test network connectivity
telnet your-db.rds.amazonaws.com 5432

2.3. Nginx Issues

Issue: "502 Bad Gateway"

Solution:

# Check backend is running
curl http://localhost:3000/health

# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log

# Test Nginx configuration
sudo nginx -t

# Restart Nginx
sudo systemctl restart nginx

# Check Nginx status
sudo systemctl status nginx

Issue: "413 Request Entity Too Large"

Solution:

# Increase client_max_body_size in Nginx
http {
    client_max_body_size 50M;
}

# Or in server block
server {
    client_max_body_size 50M;
}

Issue: SSL certificate errors

Solution:

# Renew certificate
sudo certbot renew

# Check certificate expiration
sudo certbot certificates

# Force renewal
sudo certbot renew --force-renewal

# Restart Nginx after renewal
sudo systemctl reload nginx

2.4. Lambda Issues

Issue: "Task timed out after X seconds"

Solution:

  • Increase timeout in serverless.yml:
    functions:
      api:
        timeout: 30  # Maximum 900 seconds (15 minutes)
  • Optimize code to run faster
  • Use Lambda layers for dependencies
  • Consider switching to EC2 for long-running tasks

Issue: Cold start latency

Solution:

  • Use provisioned concurrency
  • Reduce package size
  • Use Lambda layers
  • Keep Lambda warm with scheduled pings
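A scheduled warm-up ping from the last bullet can be expressed in serverless.yml like this (a sketch; the handler path, rate, and `warmup` flag are assumptions to adapt):

```yaml
functions:
  api:
    handler: src/handler.main
    events:
      - schedule:
          rate: rate(5 minutes)
          enabled: true
          input:
            warmup: true   # handler can short-circuit when it sees this flag
```

The handler should return immediately on warm-up invocations so the pings stay nearly free.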

Issue: "Missing IAM permissions"

Solution:

# Add IAM permissions in serverless.yml
provider:
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:GetObject
          Resource: "arn:aws:s3:::my-bucket/*"

3. Database Issues

3.1. RDS Connection Issues

Issue: "Cannot connect to database"

Solution:

  • Check security group allows connections from your IP/EC2
  • Verify database is publicly accessible (if needed)
  • Check VPC and subnet configuration
  • Test with psql/mysql client
  • Verify credentials

Issue: "Too many connections"

Solution:

-- Check current connections
SELECT count(*) FROM pg_stat_activity;

-- Kill idle connections
SELECT pg_terminate_backend(pid) 
FROM pg_stat_activity 
WHERE state = 'idle' 
AND state_change < current_timestamp - INTERVAL '10 minutes';

-- Increase max_connections (requires restart; note ALTER SYSTEM is not
-- permitted on RDS -- use a parameter group there)
ALTER SYSTEM SET max_connections = 200;

Or modify RDS parameter group:

  1. Go to RDSParameter groups
  2. Edit parameter group
  3. Change max_connections to higher value
  4. Reboot instance

Issue: Slow query performance

Solution:

-- Find slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time / mean_exec_time)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

-- Add missing indexes
CREATE INDEX idx_column ON table(column);

-- Analyze tables
ANALYZE table_name;

-- Vacuum tables
VACUUM ANALYZE;

3.2. DynamoDB Issues

Issue: "ProvisionedThroughputExceededException"

Solution:

  • Switch to on-demand capacity mode
  • Increase provisioned capacity
  • Implement exponential backoff retry logic
  • Use DynamoDB auto scaling
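The exponential-backoff item above can be sketched as a generic wrapper (the error-name check is an assumption; note that the AWS SDKs already retry throttled requests automatically, so this matters mostly for custom clients or tighter control):

```javascript
// Retry an async operation with exponential backoff plus jitter.
// Retries only throttling errors; everything else is rethrown immediately.
async function withBackoff(fn, { retries = 5, baseMs = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const throttled = err.name === 'ProvisionedThroughputExceededException';
      if (attempt >= retries || !throttled) throw err;
      // Delay doubles each attempt, with random jitter to avoid thundering herds
      const delay = baseMs * 2 ** attempt + Math.random() * baseMs;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage might look like `withBackoff(() => docClient.get(params).promise())`, where `docClient` is a hypothetical DynamoDB DocumentClient.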

Issue: High costs

Solution:

  • Use on-demand for unpredictable workloads
  • Use provisioned capacity for predictable workloads
  • Implement DynamoDB auto scaling
  • Archive old data to S3
  • Use DynamoDB Standard-IA for infrequent access

4. Monitoring and Alerts

4.1. Setup CloudWatch Alarms

CPU alarm:

aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --alarm-description "CPU > 80%" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --dimensions Name=InstanceId,Value=i-xxxxx \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:alerts

Disk usage alarm:

# Requires disk metrics published by the CloudWatch agent (or legacy monitoring scripts)
aws cloudwatch put-metric-alarm \
  --alarm-name high-disk-usage \
  --metric-name DiskSpaceUtilization \
  --namespace System/Linux \
  --statistic Average \
  --period 300 \
  --threshold 85 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1

4.2. Setup SNS for Alerts

# Create SNS topic
aws sns create-topic --name server-alerts

# Subscribe email
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:server-alerts \
  --protocol email \
  --notification-endpoint [email protected]

# Confirm subscription via email

Cost Optimization Strategies

1. Frontend Costs (Netlify)

Free tier includes:

  • 100 GB bandwidth/month
  • 300 build minutes/month
  • Unlimited sites
  • Deploy previews

Optimization:

  • Use image optimization
  • Enable asset compression
  • Leverage CDN caching
  • Monitor bandwidth usage

Upgrade when needed:

  • Pro plan: $19/month (more bandwidth)
  • Business plan: $99/month (SSO, analytics)

2. Backend Costs (AWS)

2.1. EC2 Cost Optimization

Use Reserved Instances:

# 30-60% savings for 1-3 year commitment
aws ec2 purchase-reserved-instances-offering \
  --reserved-instances-offering-id offering-id \
  --instance-count 1

Use Spot Instances (for non-critical workloads):

# Up to 90% savings
aws ec2 request-spot-instances \
  --spot-price "0.05" \
  --instance-count 1 \
  --type "one-time" \
  --launch-specification file://specification.json

Right-sizing:

  • Monitor CPU/memory usage
  • Downsize if consistently < 40% utilization
  • Use T3/T4g instances (burstable performance)

Stop instances when not needed:

# Stop instance (dev/test environments)
aws ec2 stop-instances --instance-ids i-xxxxx

# Start instance
aws ec2 start-instances --instance-ids i-xxxxx

2.2. RDS Cost Optimization

Use Reserved Instances:

  • 1-year: ~35% savings
  • 3-year: ~60% savings

Stop dev/test databases (RDS automatically restarts a stopped instance after 7 days):

aws rds stop-db-instance --db-instance-identifier dev-db

Use Aurora Serverless (for variable workloads):

  • Pay per second
  • Auto-scales based on demand
  • Can pause when not in use

Optimize storage:

  • Use gp3 instead of gp2 (20% cheaper)
  • Enable storage auto-scaling
  • Archive old data

2.3. Lambda Cost Optimization

Optimize memory allocation:

  • More memory = more CPU = faster execution
  • Test different memory sizes
  • Use AWS Lambda Power Tuning tool

Reduce package size:

  • Remove unused dependencies
  • Use Lambda layers for common code
  • Tree-shake dependencies

Use reserved concurrency carefully:

  • Only for critical functions
  • Costs $0.000012 per GB-second

2.4. Data Transfer Costs

Minimize inter-region transfer:

  • Keep resources in same region
  • Use CloudFront for global distribution

Use VPC endpoints:

  • Access S3/DynamoDB without internet gateway
  • Avoid data transfer charges

Compress data:

  • Enable gzip/brotli compression
  • Reduce payload sizes

3. Monitoring Costs

Use AWS Cost Explorer:

  1. Go to BillingCost Explorer
  2. View costs by service, region, tag
  3. Set up cost anomaly detection
  4. Create cost budgets

Set up billing alarms:

# Note: AWS/Billing metrics are published only in us-east-1
aws cloudwatch put-metric-alarm \
  --alarm-name billing-alarm \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --statistic Maximum \
  --period 21600 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --dimensions Name=Currency,Value=USD \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts

Use AWS Budgets:

  1. Go to BillingBudgetsCreate budget
  2. Budget type: Cost budget
  3. Set amount: $100/month
  4. Configure alerts at 80% and 100%

Scaling Strategies

1. Horizontal Scaling

Auto Scaling Groups (EC2):

# Create scaling policy based on CPU
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name backend-asg \
  --policy-name scale-on-cpu \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 70.0
  }'

Load Balancer distribution:

  • Round robin
  • Least connections
  • IP hash
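If you front your instances with Nginx instead of an ALB, the same strategies map directly onto upstream directives (a sketch; backend addresses are placeholders):

```nginx
upstream backend {
    least_conn;               # or: ip_hash; default is round robin
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

`least_conn` suits long-lived or uneven requests; `ip_hash` gives sticky sessions when the backend keeps per-client state.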

2. Vertical Scaling

Upgrade instance type:

# Stop instance
aws ec2 stop-instances --instance-ids i-xxxxx

# Modify instance type
aws ec2 modify-instance-attribute \
  --instance-id i-xxxxx \
  --instance-type Value=t3.medium

# Start instance
aws ec2 start-instances --instance-ids i-xxxxx

3. Database Scaling

Read replicas:

aws rds create-db-instance-read-replica \
  --db-instance-identifier myapp-db-replica \
  --source-db-instance-identifier myapp-db \
  --availability-zone us-east-1b

Connection pooling:

  • Use pgBouncer or RDS Proxy
  • Reduce connection overhead
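A minimal pgBouncer setup in transaction-pooling mode might look like this (a sketch; hostnames, sizes, and paths are assumptions to adapt):

```ini
[databases]
myapp_db = host=your-db.rds.amazonaws.com port=5432 dbname=myapp_db

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

The application then connects to port 6432 instead of the database directly; RDS Proxy achieves the same effect as a managed service.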

Caching layer:

  • Redis/ElastiCache for frequently accessed data
  • Reduce database load

4. CDN and Caching

CloudFront for API:

  • Cache GET requests
  • Reduce backend load
  • Global distribution

Application-level caching:

  • Redis/Memcached
  • In-memory caching
  • CDN edge caching

Migration Checklist

Pre-Migration

□ Backup all data (database, files, configurations)
□ Document current architecture
□ Test deployment process in staging
□ Review DNS TTL (set to 300 seconds)
□ Prepare rollback plan
□ Notify users of potential downtime
□ Schedule during low-traffic period

Migration Steps

□ Deploy backend to AWS
□ Test backend endpoints
□ Update DNS records (point to new backend)
□ Deploy frontend to Netlify
□ Update environment variables
□ Test frontend-backend integration
□ Monitor error logs
□ Verify SSL certificates
□ Test all critical features
□ Update documentation

Post-Migration

□ Monitor application performance
□ Check error rates
□ Verify database connections
□ Test backup/restore procedures
□ Update monitoring dashboards
□ Review cost reports
□ Optimize based on metrics
□ Document lessons learned
□ Update runbooks
□ Train team on new infrastructure

Rollback Plan

1. Keep old infrastructure running for 7-14 days
2. DNS can be reverted quickly (5-minute TTL)
3. Database can be restored from backup
4. Have automation scripts ready for rollback
5. Document rollback procedures

Final Checklist

Production Readiness

Security:

✓ HTTPS enabled (frontend and backend)
✓ SSL certificates configured
✓ CORS configured correctly
✓ Rate limiting implemented
✓ Input validation on all endpoints
✓ Passwords hashed (bcrypt)
✓ JWT tokens secured
✓ Security headers configured
✓ Secrets stored securely (AWS Secrets Manager)
✓ IAM roles follow least privilege
✓ Security groups properly configured
✓ Database in private subnet (if applicable)

Performance:

✓ CDN configured (Netlify automatic)
✓ Asset compression enabled
✓ Images optimized
✓ Database indexed properly
✓ Connection pooling configured
✓ Caching implemented (Redis/CloudFront)
✓ Auto-scaling configured
✓ Load balancer health checks working

Monitoring:

✓ CloudWatch alarms configured
✓ Error tracking setup (Sentry)
✓ Uptime monitoring active
✓ Log aggregation configured
✓ Performance monitoring enabled
✓ Cost alerts configured
✓ Backup alerts configured

Reliability:

✓ Automated backups enabled
✓ Backup restoration tested
✓ Multi-AZ deployment (production)
✓ Health checks configured
✓ Graceful shutdown implemented
✓ Error handling comprehensive
✓ Retry logic for transient failures

Operations:

✓ CI/CD pipeline configured
✓ Automated testing in place
✓ Documentation complete
✓ Runbooks created
✓ On-call rotation defined
✓ Incident response plan documented
✓ Rollback procedures tested

Additional Resources

Documentation

  • Netlify documentation (docs.netlify.com)
  • AWS documentation (docs.aws.amazon.com)
  • Your framework's official deployment guides
Community

Forums:

  • Stack Overflow
  • Reddit r/webdev, r/aws
  • Dev.to
  • Netlify Community

Discord/Slack:

  • Reactiflux (React community)
  • Nodeiflux (Node.js community)
  • AWS Community

Learning Resources

Certifications:

  • AWS Certified Solutions Architect
  • AWS Certified Developer
  • AWS Certified SysOps Administrator

Conclusion

This comprehensive guide covers everything needed to deploy a modern web application with:

  • Frontend hosted on Netlify (global CDN, automatic SSL, easy deployment)
  • Backend on AWS (multiple options: EC2, Elastic Beanstalk, Lambda, ECS)
  • Database on AWS (RDS, DynamoDB, DocumentDB)
  • Complete DevOps pipeline with CI/CD, monitoring, logging, and backups
  • Production-ready security, performance, and reliability

Key Takeaways

  1. Start simple: Begin with Netlify + EC2 or Elastic Beanstalk
  2. Automate early: Setup CI/CD from the beginning
  3. Monitor everything: Implement logging and monitoring on day one
  4. Security first: Never compromise on security best practices
  5. Test backups: Regularly test your backup and restore procedures
  6. Optimize costs: Right-size resources and use reserved instances
  7. Document well: Keep runbooks and documentation updated
  8. Plan for scale: Design for growth from the start

Next Steps

  1. Choose your deployment option (EC2, EB, Lambda, or ECS)
  2. Setup development environment
  3. Configure CI/CD pipeline
  4. Deploy to staging environment
  5. Test thoroughly
  6. Deploy to production
  7. Monitor and optimize
  8. Iterate and improve

Good luck with your deployment! 🚀


Document Version: 1.0
Last Updated: November 19, 2025
Maintained by: YourAKShaw Inc.
