Generic guide for deploying any modern frontend application to Netlify and any backend API to AWS
- Overview
- Prerequisites
- Architecture Overview
- Part 1: AWS Backend Deployment
- Part 2: Netlify Frontend Deployment
- Database Setup on AWS
- Domain Configuration
- SSL/HTTPS Setup
- Environment Variables Configuration
- CORS Configuration
- Testing the Deployment
- CI/CD Pipeline Setup
- Monitoring and Logging
- Backup and Disaster Recovery
- Performance Optimization
- Security Best Practices
- Troubleshooting Guide
- Cost Optimization Strategies
- Scaling Strategies
- Migration Checklist
This comprehensive guide covers deploying any modern web application with a separated architecture:
- Frontend: Static or server-rendered application deployed to Netlify
- Backend: RESTful or GraphQL API deployed to AWS
- Database: Optional database services on AWS
- Infrastructure: Production-ready setup with SSL, monitoring, and CI/CD
Frontend (Netlify):
- React (Create React App, Vite)
- Next.js (Static, SSG, SSR with Edge Functions)
- Vue.js (Vue CLI, Nuxt.js)
- Angular
- Svelte/SvelteKit
- Static HTML/CSS/JS
- Gatsby
- Astro
Backend (AWS):
- Node.js (Express, NestJS, Koa, Fastify)
- Python (Flask, Django, FastAPI)
- Go (Gin, Echo)
- Ruby (Rails, Sinatra)
- Java (Spring Boot)
- .NET Core
- PHP (Laravel, Symfony)
Benefits:
- Independent Scaling: Frontend and backend scale separately
- Global CDN: Netlify provides worldwide content delivery
- Cost-Effective: Pay only for what you use
- High Availability: Built-in redundancy and failover
- Developer Experience: Simple deployment workflows
- Security: Isolated services, better security boundaries
Use Cases:
- SaaS applications
- E-commerce platforms
- Portfolio/business websites
- Mobile app backends
- RESTful/GraphQL APIs
- Microservices architectures
- Netlify Account
  - Sign up at app.netlify.com/signup
  - Free tier available
  - Credit card optional (required for Pro features)
- AWS Account
  - Sign up at aws.amazon.com
  - Credit card required
  - Free tier available for 12 months
  - Enable billing alerts immediately
- Git Provider Account
  - GitHub, GitLab, or Bitbucket
  - Repository for your application code
- Domain Name (Optional but Recommended)
  - Purchase from GoDaddy, Namecheap, Google Domains, etc.
  - Can also use AWS Route 53
Basic:
- Git version control
- Command line/terminal usage
- Environment variables concept
- HTTP/HTTPS basics
- DNS basics
Intermediate:
- Your chosen frontend framework
- Your chosen backend framework
- RESTful API concepts
- Database basics (if using)
Advanced (Optional):
- Docker containerization
- Infrastructure as Code
- AWS IAM and security
- CI/CD pipelines
Install these tools on your local machine:
# Git (Version Control)
# Download from: https://git-scm.com/downloads
git --version
# Node.js and npm (Even if not using Node backend)
# Download from: https://nodejs.org/
node --version # v18+ recommended
npm --version
# AWS CLI (Optional but recommended)
# Download from: https://aws.amazon.com/cli/
aws --version
# Docker (Optional, for containerized deployments)
# Download from: https://www.docker.com/
docker --version
# Your framework-specific CLI tools
# Examples:
npm install -g @angular/cli # Angular
npm install -g create-react-app # React
npm install -g @vue/cli # Vue
pip install awsebcli # Elastic Beanstalk
AWS IAM Permissions:
- EC2 (if using Option A or B)
- Elastic Beanstalk (if using Option B)
- Lambda, API Gateway (if using Option C)
- ECS, ECR (if using Option D)
- RDS (if using database)
- S3 (for file storage)
- CloudWatch (for logging)
- IAM (for creating roles)
- VPC (for networking)
Recommendation: Create an IAM user with AdministratorAccess for initial setup, then restrict permissions later.
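If you prefer to do this from the CLI, a minimal sketch of that initial setup (the user name deploy-admin is a placeholder; AdministratorAccess is deliberately broad, so scope it down once the deployment works):
# Hypothetical bootstrap user - rename to match your conventions
aws iam create-user --user-name deploy-admin
aws iam attach-user-policy \
  --user-name deploy-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Prints the access key ID and secret used later by `aws configure`
aws iam create-access-key --user-name deploy-admin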
┌─────────────────────────────────────────────────────────────────┐
│ End Users │
└──────────────────────┬──────────────────────────────────────────┘
│
▼
┌──────────────────────────────────┐
│ DNS (Route 53/GoDaddy) │
└──────────────┬───────────────────┘
│
├────────────────────┬──────────────────────┐
│ │ │
▼ ▼ ▼
┏━━━━━━━━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━┓
┃ Netlify CDN ┃ ┃ AWS Backend ┃ ┃ Database ┃
┃ (Frontend) ┃ ┃ (API Server) ┃ ┃ (AWS RDS) ┃
┗━━━━━━━━━━━━━━━━━━━━┛ ┗━━━━━━━━━━━━━━━━━━━┛ ┗━━━━━━━━━━━━━┛
│ │ │
│ HTTPS API │ Database │
│ Requests │ Connection │
└────────────────────┴──────────────────────┘
┌────────────────────────────────────────────────────────────────────┐
│ USER BROWSER │
└───────────────────────────┬────────────────────────────────────────┘
│ HTTPS
▼
┌────────────────────────────────────────────────────────────────────┐
│ NETLIFY CDN (Global) │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ Frontend Application │ │
│ │ • Static Assets (HTML, CSS, JS, Images) │ │
│ │ • Build Output (Webpack/Vite/etc.) │ │
│ │ • Edge Functions (Optional) │ │
│ │ • Form Handlers (Optional) │ │
│ │ • Serverless Functions (Optional) │ │
│ └──────────────────────────────────────────────────────────────┘ │
│ │
│ Features: │
│ • Automatic SSL/TLS │
│ • Global CDN (200+ locations) │
│ • Instant rollback │
│ • Deploy previews │
│ • Branch deploys │
└───────────────────────────┬────────────────────────────────────────┘
│ HTTPS API Calls
▼
┌────────────────────────────────────────────────────────────────────┐
│ AWS CLOUD (Region) │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Application Load Balancer (Optional) │ │
│ │ • SSL Termination │ │
│ │ • Health Checks │ │
│ │ • Traffic Distribution │ │
│ └────────────────────┬────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────┴────────────────────────────────────────┐ │
│ │ Backend Application │ │
│ │ │ │
│ │ Option A: EC2 Instances │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • Ubuntu/Amazon Linux Server │ │ │
│ │ │ • PM2/Systemd Process Manager │ │ │
│ │ │ • Nginx Reverse Proxy │ │ │
│ │ │ • Application Code │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Option B: Elastic Beanstalk │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • Managed EC2 Instances │ │ │
│ │ │ • Auto Scaling Groups │ │ │
│ │ │ • Load Balancer │ │ │
│ │ │ • Monitoring & Health Checks │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Option C: Lambda Functions │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • Serverless Functions │ │ │
│ │ │ • API Gateway Integration │ │ │
│ │ │ • Auto Scaling │ │ │
│ │ │ • Pay-per-Request │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Option D: ECS Fargate │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • Containerized Application │ │ │
│ │ │ • Docker Images (ECR) │ │ │
│ │ │ • Task Definitions │ │ │
│ │ │ • Service Auto Scaling │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ └──────────────────────┬──────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────┴──────────────────────────────────────┐ │
│ │ Data Layer │ │
│ │ │ │
│ │ Database Options: │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • RDS (PostgreSQL, MySQL, etc.) │ │ │
│ │ │ • DynamoDB (NoSQL) │ │ │
│ │ │ • DocumentDB (MongoDB compatible) │ │ │
│ │ │ • ElastiCache (Redis/Memcached) │ │ │
│ │ │ • Aurora (Serverless SQL) │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ Storage Options: │ │
│ │ ┌────────────────────────────────────────────┐ │ │
│ │ │ • S3 (Object Storage) │ │ │
│ │ │ • EFS (File System) │ │ │
│ │ │ • EBS (Block Storage) │ │ │
│ │ └────────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ Supporting Services │ │
│ │ • CloudWatch (Logs & Metrics) │ │
│ │ • CloudFront (Optional CDN for API) │ │
│ │ • SES (Email Service) │ │
│ │ • SNS/SQS (Messaging) │ │
│ │ • Secrets Manager (Credentials) │ │
│ │ • WAF (Web Application Firewall) │ │
│ └──────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────┘
- User Request
  - User accesses https://yourdomain.com
  - DNS resolves to Netlify CDN
- Frontend Delivery
  - Netlify serves static assets from the nearest edge location
  - Assets cached globally for fast delivery
  - User interacts with the frontend application
- API Communication
  - Frontend makes HTTPS requests to https://api.yourdomain.com
  - Requests routed to the AWS backend
  - Backend processes the request
- Data Operations
  - Backend queries the database (if needed)
  - Backend processes business logic
  - Response sent back to the frontend
- Response to User
  - Frontend receives data
  - UI updates
  - User sees results
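To make the API communication steps concrete, a minimal sketch of a frontend call (the /api/users route is hypothetical; in practice the base URL comes from a build-time environment variable, covered later):
// Hypothetical route - substitute your real endpoints
const API_URL = 'https://api.yourdomain.com';

async function loadUsers() {
  // HTTPS request from the Netlify-hosted frontend to the AWS backend
  const res = await fetch(`${API_URL}/api/users`, {
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  // The parsed response drives the UI update
  return res.json();
}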
Choose one of the following deployment options based on your requirements:
| Criteria | EC2 | Elastic Beanstalk | Lambda | ECS Fargate |
|---|---|---|---|---|
| Best For | Full control | Easy management | Event-driven | Containers |
| Complexity | High | Medium | Low-Medium | High |
| Scalability | Manual/Auto | Auto | Automatic | Auto |
| Cost (Low Traffic) | $$$ | $$$ | $ | $$$ |
| Cost (High Traffic) | $$ | $$ | $$$ | $$ |
| Setup Time | 2-4 hours | 1-2 hours | 1-2 hours | 3-5 hours |
| Maintenance | High | Low | Minimal | Medium |
| Cold Starts | None | None | Yes | Minimal |
| Request Timeout | Unlimited | Unlimited | 15 min | Unlimited |
| Language Support | All | Most | All | All |
| Learning Curve | Medium | Low | Medium | High |
Recommendations:
- Choose EC2 if:
  - You need full control over the server
  - Running long-running processes
  - Need custom software/configurations
  - Budget allows a dedicated server
  - Want to minimize costs for high traffic
- Choose Elastic Beanstalk if:
  - You want easy deployment
  - Need auto-scaling without complexity
  - Using supported platforms (Node, Python, etc.)
  - Want AWS to manage infrastructure
  - Team lacks DevOps expertise
- Choose Lambda if:
  - Building an API with sporadic traffic
  - Want pay-per-request pricing
  - Need automatic scaling
  - Requests complete in < 15 minutes
  - Building microservices
- Choose ECS Fargate if:
  - Already using Docker
  - Need container orchestration
  - Want serverless containers
  - Running microservices
  - Need complex deployment requirements
Best for: Full control, predictable traffic, custom configurations
- Log in to AWS Console
- Select your preferred region (top-right corner)
- Recommendation: Use region closest to your users
- Popular: us-east-1 (N. Virginia), eu-west-1 (Ireland), ap-south-1 (Mumbai)
- Navigate to EC2 service (search bar or Services menu)
Click "Launch Instance" button and configure:
1. Name and Tags:
Name: my-backend-server
Tags (Optional):
Environment: production
Application: my-app-backend
ManagedBy: manual
2. Application and OS Images (AMI):
Choose operating system:
- Ubuntu Server 22.04 LTS (Recommended for most)
  - Free tier eligible
  - Large community support
  - Easy package management
- Amazon Linux 2023 (AWS-optimized)
  - Optimized for AWS
  - Pre-installed AWS tools
  - Long-term support
- Other Options:
  - Debian 11/12
  - CentOS Stream
  - Red Hat Enterprise Linux
  - Windows Server (for .NET apps)
3. Instance Type:
| Type | vCPUs | RAM | Use Case | Monthly Cost* |
|---|---|---|---|---|
| t2.micro | 1 | 1 GB | Free tier, dev/test | $0 (first year) then ~$8 |
| t3.micro | 2 | 1 GB | Small apps, low traffic | ~$7 |
| t3.small | 2 | 2 GB | Production (small) | ~$15 |
| t3.medium | 2 | 4 GB | Production (medium) | ~$30 |
| t3.large | 2 | 8 GB | Production (high traffic) | ~$60 |
| c5.large | 2 | 4 GB | CPU-intensive | ~$62 |
| r5.large | 2 | 16 GB | Memory-intensive | ~$96 |
*Prices approximate for us-east-1 region
Recommendation: Start with t3.small for production, can upgrade later
4. Key Pair (Login):
- Click "Create new key pair"
- Name: my-backend-key
- Key pair type: RSA
- Private key file format: .pem (for Linux/Mac) or .ppk (for Windows PuTTY)
- Click "Create key pair" - downloads automatically
⚠️ CRITICAL: Save this file securely! You cannot download it again.
On Linux/Mac, secure the key:
chmod 400 ~/Downloads/my-backend-key.pem
mv ~/Downloads/my-backend-key.pem ~/.ssh/
5. Network Settings:
Click "Edit" to customize:
VPC: Default (or create new)
Subnet: No preference (or choose specific)
Auto-assign public IP: Enable
Security Group:
Name: my-backend-sg
Description: Security group for my backend server
Inbound Rules:
1. SSH
- Type: SSH
- Protocol: TCP
- Port: 22
- Source: My IP (your current IP)
- Description: SSH access from my location
2. HTTP
- Type: HTTP
- Protocol: TCP
- Port: 80
- Source: 0.0.0.0/0, ::/0
- Description: Public HTTP access
3. HTTPS
- Type: HTTPS
- Protocol: TCP
- Port: 443
- Source: 0.0.0.0/0, ::/0
- Description: Public HTTPS access
4. Custom Application Port (if needed)
- Type: Custom TCP
- Protocol: TCP
- Port: 3000 (or your app port)
- Source: 0.0.0.0/0 or My IP (for testing)
- Description: Application port
Security notes:
- Restrict SSH to your IP only
- Change SSH port from 22 (optional security measure)
- Never use 0.0.0.0/0 for SSH in production
- Use VPN for SSH access in production environments
6. Configure Storage:
Volume 1 (Root):
Size: 20-30 GB (minimum)
Volume Type: gp3 (General Purpose SSD)
IOPS: 3000 (default)
Throughput: 125 MB/s (default)
Delete on Termination: Yes (for dev), No (for production)
Encrypted: Yes (recommended for production)
Storage Guidelines:
- Development: 20 GB sufficient
- Production: 30-50 GB recommended
- Database on same server: Add 100+ GB
- File uploads: Consider separate EBS volume
7. Advanced Details (Optional but Recommended):
User Data (Bootstrap script - runs on first launch):
For Ubuntu with Node.js backend:
#!/bin/bash
# Update system
apt update && apt upgrade -y
# Install Node.js 20.x
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
# Install essential tools
apt install -y git nginx certbot python3-certbot-nginx
# Install PM2 globally
npm install -g pm2
# Create application directory
mkdir -p /var/www/backend
chown ubuntu:ubuntu /var/www/backend
# Configure firewall
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw --force enable
echo "Setup complete!" > /var/log/user-data.logFor Python backend:
#!/bin/bash
apt update && apt upgrade -y
apt install -y python3 python3-pip python3-venv git nginx
pip3 install gunicorn
mkdir -p /var/www/backend
chown ubuntu:ubuntu /var/www/backend
IAM Instance Profile (for AWS service access):
- Create IAM role with policies:
  - AmazonS3ReadOnlyAccess (if accessing S3)
  - CloudWatchAgentServerPolicy (for monitoring)
  - AmazonSSMManagedInstanceCore (for Systems Manager)
8. Summary:
Review all settings, then click "Launch Instance"
Wait 2-5 minutes for the instance to start. Status should show "Running" with 2/2 status checks passed.
Elastic IP ensures your backend URL doesn't change when instance stops/restarts.
- In EC2 Console, go to "Elastic IPs" (left sidebar under Network & Security)
- Click "Allocate Elastic IP address"
- Settings:
Network Border Group: Use default
Public IPv4 address pool: Amazon's pool of IPv4 addresses
Tags (Optional): Name: my-backend-eip, Environment: production
- Click "Allocate"
- Select the new Elastic IP
- Actions → "Associate Elastic IP address"
- Settings:
Resource type: Instance
Instance: Select your instance (my-backend-server)
Private IP address: (Auto-selected)
- Click "Associate"
Note: Elastic IPs are free while associated with a running instance. If the instance is stopped, you're charged ~$0.005/hour.
Record your Elastic IP (e.g., 54.123.45.67) - this is your backend URL.
Method 1: SSH (Linux/Mac)
# Connect using your key file
ssh -i ~/.ssh/my-backend-key.pem ubuntu@YOUR_ELASTIC_IP
# Example:
ssh -i ~/.ssh/my-backend-key.pem ubuntu@54.123.45.67
Method 2: SSH (Windows - PowerShell)
# Connect using key file
ssh -i C:\Users\YourName\.ssh\my-backend-key.pem ubuntu@YOUR_ELASTIC_IP
Method 3: PuTTY (Windows)
- Convert .pem to .ppk using PuTTYgen
- Open PuTTY
- Enter host: ubuntu@YOUR_ELASTIC_IP
- Connection → SSH → Auth → Browse for .ppk file
- Click "Open"
Method 4: EC2 Instance Connect (Browser-based)
- In EC2 Console, select your instance
- Click "Connect" button
- Choose "EC2 Instance Connect" tab
- Click "Connect"
- Browser terminal opens
Default Usernames by AMI:
- Ubuntu: ubuntu
- Amazon Linux: ec2-user
- CentOS: centos
- Debian: admin
- RHEL: ec2-user
Upon first connection:
# You'll see warning about host authenticity
The authenticity of host 'X.X.X.X (X.X.X.X)' can't be established.
ECDSA key fingerprint is SHA256:xxxxx.
Are you sure you want to continue connecting (yes/no)?
# Type: yes
# You're now connected!
ubuntu@ip-172-31-XX-XX:~$
# Update package lists
sudo apt update
# Upgrade installed packages
sudo apt upgrade -y
# Install essential build tools
sudo apt install -y build-essential curl wget git unzip
For Node.js Backend:
# Install Node.js 20.x (LTS)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify installation
node --version # Should show v20.x.x
npm --version # Should show 10.x.x
# Install Yarn (optional)
sudo npm install -g yarn
# Install PM2 (Process Manager)
sudo npm install -g pm2
pm2 --versionFor Python Backend:
# Install Python 3.11
sudo apt install -y python3.11 python3.11-venv python3-pip
# Verify installation
python3 --version
pip3 --version
# Install virtualenv
sudo pip3 install virtualenv
# Install production server
sudo pip3 install gunicorn
gunicorn --version
For Go Backend:
# Download and install Go
cd /tmp
wget https://go.dev/dl/go1.21.5.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz
# Add to PATH
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc
# Verify
go version
For Java Backend:
# Install Java JDK 17
sudo apt install -y openjdk-17-jdk
# Verify
java -version
javac -version
For PHP Backend:
# Install PHP 8.2
sudo apt install -y php8.2 php8.2-fpm php8.2-cli php8.2-common php8.2-mysql php8.2-zip php8.2-gd php8.2-mbstring php8.2-curl php8.2-xml php8.2-bcmath
# Verify
php -v
# Install Composer
curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
composer --version
# Install Nginx
sudo apt install -y nginx
# Verify installation
nginx -v
# Start Nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
Test: Visit http://YOUR_ELASTIC_IP in a browser - you should see the Nginx welcome page.
# Install Certbot (Let's Encrypt)
sudo apt install -y certbot python3-certbot-nginx
# Verify installation
certbot --version
# Check firewall status
sudo ufw status
# Allow essential services
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
# Enable firewall
sudo ufw enable
# Verify rules
sudo ufw status verbose
Output should show:
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
# Create directory structure
sudo mkdir -p /var/www/backend
sudo chown -R $USER:$USER /var/www/backend
cd /var/www/backend
# Clone your repository
git clone https://github.com/your-username/your-backend-repo.git .
# Or if using private repository, set up SSH key first:
ssh-keygen -t ed25519 -C "[email protected]"
cat ~/.ssh/id_ed25519.pub # Add this to GitHub SSH keys
# Then clone
git clone git@github.com:your-username/your-backend-repo.git .
From your local machine:
# Upload using SCP
scp -i ~/.ssh/my-backend-key.pem -r /path/to/backend ubuntu@YOUR_ELASTIC_IP:/var/www/backend
# Or use SFTP
sftp -i ~/.ssh/my-backend-key.pem ubuntu@YOUR_ELASTIC_IP
sftp> cd /var/www/backend
sftp> put -r /path/to/backend/*
sftp> exit
Node.js:
cd /var/www/backend
# Install production dependencies
npm install --production
# Or if you need dev dependencies for build:
npm install
npm run build # If using TypeScript or build step
npm prune --production # Remove dev dependencies after build
# For TypeScript projects:
npm install -g typescript
tsc # Compile TypeScript
Python:
cd /var/www/backend
# Create virtual environment
python3 -m venv venv
# Activate virtual environment
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Deactivate (when done)
deactivate
Go:
cd /var/www/backend
# Install dependencies
go mod download
# Build binary
go build -o app main.go
# Or build optimized:
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
# Navigate to app directory
cd /var/www/backend
# Create .env file
nano .env
Add your environment variables (example):
# Application
NODE_ENV=production
PORT=3000
APP_NAME=MyBackendAPI
APP_VERSION=1.0.0
# Database
DB_HOST=your-rds-endpoint.amazonaws.com
DB_PORT=5432
DB_NAME=myapp_prod
DB_USERNAME=dbadmin
DB_PASSWORD=your_secure_password_here
# Redis/Cache
REDIS_HOST=your-redis-endpoint.amazonaws.com
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
# Authentication
JWT_SECRET=your_super_secret_jwt_key_here_min_32_chars
JWT_EXPIRES_IN=7d
SESSION_SECRET=your_session_secret_here
# Email Service
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your_email@gmail.com
SMTP_PASSWORD=your_app_password
EMAIL_FROM=noreply@yourdomain.com
# AWS Services
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET=my-app-uploads
# Frontend URL (for CORS)
FRONTEND_URL=https://yourdomain.com
CORS_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
# API Configuration
API_RATE_LIMIT=100
API_TIMEOUT=30000
# Logging
LOG_LEVEL=info
LOG_FILE=/var/log/backend/app.log
# Third-party APIs
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
GOOGLE_CLIENT_ID=your_google_client_id
GOOGLE_CLIENT_SECRET=your_google_client_secret
Save file (Ctrl+X, Y, Enter)
# Set proper permissions
chmod 600 .env
# Ensure only your user can read
ls -la .env
# Should show: -rw------- 1 ubuntu ubuntu
For multiple environments:
# Create environment-specific files
.env.production
.env.staging
.env.development
# Load appropriate file in your app
# Node.js example using dotenv:
require('dotenv').config({ path: `.env.${process.env.NODE_ENV}` });
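The FRONTEND_URL and CORS_ORIGINS variables above only take effect if the backend actually reads them; a minimal sketch for Express with the cors package (an assumption - adapt to your framework):
// npm install cors
const express = require('express');
const cors = require('cors');

const app = express();

// Split the comma-separated CORS_ORIGINS value from .env into an array
const allowedOrigins = (process.env.CORS_ORIGINS || process.env.FRONTEND_URL || '')
  .split(',')
  .filter(Boolean);

app.use(cors({
  origin: allowedOrigins, // only these origins may call the API from a browser
  credentials: true,      // allow cookies and Authorization headers
}));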
Create PM2 Ecosystem File:
cd /var/www/backend
nano ecosystem.config.js
Add configuration:
module.exports = {
apps: [{
name: 'backend-api',
script: 'dist/main.js', // Or 'server.js', 'app.js', etc.
instances: 2, // Number of instances (or 'max' for all CPU cores)
exec_mode: 'cluster', // 'cluster' or 'fork'
max_memory_restart: '500M',
env: {
NODE_ENV: 'production',
PORT: 3000
},
error_file: '/var/log/backend/error.log',
out_file: '/var/log/backend/out.log',
log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
merge_logs: true,
autorestart: true,
watch: false,
max_restarts: 10,
min_uptime: '10s'
}]
};
Create log directory:
sudo mkdir -p /var/log/backend
sudo chown ubuntu:ubuntu /var/log/backend
Start application with PM2:
# Start using ecosystem file
pm2 start ecosystem.config.js
# Or start directly
pm2 start dist/main.js --name backend-api -i 2
# View status
pm2 status
# View logs
pm2 logs backend-api
# Monitor
pm2 monit
Setup PM2 Startup:
# Generate startup script
pm2 startup systemd
# This outputs a command - copy and run it
# Example: sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u ubuntu --hp /home/ubuntu
# Save current PM2 process list
pm2 save
# Verify auto-start
sudo systemctl status pm2-ubuntu
Useful PM2 Commands:
# Restart app
pm2 restart backend-api
# Stop app
pm2 stop backend-api
# Delete app from PM2
pm2 delete backend-api
# Reload (zero-downtime restart)
pm2 reload backend-api
# View detailed info
pm2 show backend-api
# Clear logs
pm2 flush
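Note that pm2 reload is only truly zero-downtime if the app exits cleanly; a sketch of the usual signal handler (assumes an Express-style server object):
const server = app.listen(process.env.PORT || 3000);

// PM2 signals the old worker on reload; drain in-flight requests, then exit
process.on('SIGINT', () => {
  server.close(() => process.exit(0));
  // Safety net: force-exit if connections refuse to drain
  setTimeout(() => process.exit(1), 10000).unref();
});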
Create systemd service file:
sudo nano /etc/systemd/system/backend.service
For Python (Gunicorn):
[Unit]
Description=Backend API Server
After=network.target
[Service]
Type=notify
User=ubuntu
Group=ubuntu
WorkingDirectory=/var/www/backend
Environment="PATH=/var/www/backend/venv/bin"
EnvironmentFile=/var/www/backend/.env
ExecStart=/var/www/backend/venv/bin/gunicorn \
--workers 4 \
--bind 127.0.0.1:3000 \
--timeout 120 \
--access-logfile /var/log/backend/access.log \
--error-logfile /var/log/backend/error.log \
wsgi:app
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
For Go:
[Unit]
Description=Backend API Server
After=network.target
[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/var/www/backend
EnvironmentFile=/var/www/backend/.env
ExecStart=/var/www/backend/app
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start service:
# Reload systemd
sudo systemctl daemon-reload
# Enable service (start on boot)
sudo systemctl enable backend
# Start service
sudo systemctl start backend
# Check status
sudo systemctl status backend
# View logs
sudo journalctl -u backend -f
Useful systemd commands:
# Restart service
sudo systemctl restart backend
# Stop service
sudo systemctl stop backend
# View logs (last 100 lines)
sudo journalctl -u backend -n 100
# View logs (follow)
sudo journalctl -u backend -f
# Clear old logs
sudo journalctl --vacuum-time=7d
# Create configuration file
sudo nano /etc/nginx/sites-available/backend
Basic Configuration:
# Upstream backend servers
upstream backend_servers {
least_conn; # Load balancing method
server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
# Add more servers if running multiple instances:
# server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
keepalive 32;
}
# HTTP Server (Port 80)
server {
listen 80;
listen [::]:80;
server_name api.yourdomain.com; # Replace with your domain
# Redirect all HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
# HTTPS Server (Port 443) - Will be configured after SSL setup
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name api.yourdomain.com; # Replace with your domain
# SSL certificates (will be added by Certbot)
# ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;
# SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# Client body size limit (for file uploads)
client_max_body_size 10M;
# Logging
access_log /var/log/nginx/backend_access.log;
error_log /var/log/nginx/backend_error.log;
# Root location - proxy to backend
location / {
# Proxy headers
proxy_pass http://backend_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# Cache bypass
proxy_cache_bypass $http_upgrade;
}
# Health check endpoint (no logging)
location /health {
proxy_pass http://backend_servers/health;
access_log off;
}
# WebSocket support (if needed)
location /ws {
proxy_pass http://backend_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 86400; # 24 hours
}
# Static files (if serving from backend)
location /static/ {
alias /var/www/backend/static/;
expires 30d;
add_header Cache-Control "public, immutable";
}
# Deny access to sensitive files
location ~ /\.env {
deny all;
return 404;
}
}
For IP-based access (development/testing):
server {
listen 80;
server_name YOUR_ELASTIC_IP; # Or underscore _ for any IP
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
# Create symbolic link to enable site
sudo ln -s /etc/nginx/sites-available/backend /etc/nginx/sites-enabled/
# Remove default site (optional)
sudo rm /etc/nginx/sites-enabled/default
# Test Nginx configuration
sudo nginx -t
# If test passes, reload Nginx
sudo systemctl reload nginx
# Check Nginx status
sudo systemctl status nginx
Expected output from nginx -t:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Edit main Nginx config:
sudo nano /etc/nginx/nginx.conf
Optimize these settings:
user www-data;
worker_processes auto; # Auto-detect CPU cores
worker_rlimit_nofile 65535;
events {
worker_connections 2048;
use epoll;
multi_accept on;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off; # Hide Nginx version
# Buffer sizes
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 10M;
large_client_header_buffers 2 1k;
# Timeouts
client_body_timeout 12;
client_header_timeout 12;
send_timeout 10;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/rss+xml font/truetype font/opentype
application/vnd.ms-fontobject image/svg+xml;
gzip_disable "msie6";
# Rate limiting (optional)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_status 429;
# Include site configurations
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
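Note that limit_req_zone only defines the zone; to enforce it, reference the zone inside a location block in your site configuration, for example:
location /api/ {
    # Allow short bursts above 10 r/s; excess requests get HTTP 429
    limit_req zone=api_limit burst=20 nodelay;
    proxy_pass http://backend_servers;
}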
Test and reload:
sudo nginx -t
sudo systemctl reload nginx
Before obtaining SSL certificate:
- Domain DNS must be configured (see Domain Configuration section)
- Nginx must be running and serving site on port 80
- Firewall must allow ports 80 and 443
Verify domain resolves:
nslookup api.yourdomain.com
# Should return your Elastic IP
ping api.yourdomain.com
# Should ping your server
Automatic method (recommended):
# Run Certbot with Nginx plugin
sudo certbot --nginx -d api.yourdomain.com
# For multiple domains/subdomains:
sudo certbot --nginx -d api.yourdomain.com -d www.api.yourdomain.com
Follow the prompts:
Enter email address (for urgent renewal and security notices): you@example.com
Agree to Terms of Service: Yes (A)
Share email with EFF: No (N)
Redirect HTTP to HTTPS: Yes (2) # Recommended
Certbot will:
- Obtain certificate from Let's Encrypt
- Modify your Nginx configuration
- Enable HTTPS
- Set up automatic renewal
Manual method:
# Obtain certificate only (no auto-configuration)
sudo certbot certonly --nginx -d api.yourdomain.com
# Certificate files will be saved to:
# /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem
# /etc/letsencrypt/live/api.yourdomain.com/privkey.pem
Then manually update Nginx config with SSL settings (already in template above).
# Check certificate details
sudo certbot certificates
# Test SSL configuration
curl https://api.yourdomain.com/health
# Test from browser or SSL checker
# https://www.ssllabs.com/ssltest/Certbot installs a systemd timer for automatic renewal:
# Check renewal timer status
sudo systemctl status certbot.timer
# Test renewal process (dry run)
sudo certbot renew --dry-run
# Manual renewal (if needed)
sudo certbot renew
# View renewal logs
sudo cat /var/log/letsencrypt/letsencrypt.log
Certificates auto-renew when they have 30 days or less before expiration.
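Renewed certificates are only picked up once Nginx reloads; one common approach is a deploy hook, which runs only after a successful renewal:
sudo certbot renew --deploy-hook "systemctl reload nginx"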
# Test from server
curl http://localhost:3000/health
curl http://127.0.0.1:3000/health
# Test through Nginx
curl http://YOUR_ELASTIC_IP/health
# Test with HTTPS (if SSL configured)
curl https://api.yourdomain.com/health
From your local machine:
# HTTP (should redirect to HTTPS)
curl http://api.yourdomain.com/health
# HTTPS
curl https://api.yourdomain.com/health
# Test specific endpoint
curl https://api.yourdomain.com/api/users
# POST request
curl -X POST https://api.yourdomain.com/api/login \
-H "Content-Type: application/json" \
-d '{"email":"[email protected]","password":"password"}'# PM2 logs (Node.js)
pm2 logs backend-api
# Systemd logs (Python/Go)
sudo journalctl -u backend -f
# Nginx access logs
sudo tail -f /var/log/nginx/backend_access.log
# Nginx error logs
sudo tail -f /var/log/nginx/backend_error.log
# Application logs (if using file logging)
tail -f /var/log/backend/app.log
# CPU and Memory
htop # or top
# Disk usage
df -h
# Network connections
sudo netstat -tuln
# Process info (Node.js)
pm2 status
pm2 monit
# Process info (systemd)
sudo systemctl status backend
Best for: Easy deployment, automatic scaling, managed infrastructure
On Linux/Mac:
# Using pip
pip3 install awsebcli --upgrade --user
# Add to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH=$PATH:~/.local/bin
# Reload shell
source ~/.bashrc # or source ~/.zshrc
# Verify installation
eb --version
On Windows:
# Using pip
pip install awsebcli --upgrade --user
# Add to PATH (via System Environment Variables)
# or use full path when running eb commands
# Verify installation
eb --version
# Configure AWS CLI
aws configure
# Enter:
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region: us-east-1
# Default output format: json
# Verify configuration
aws sts get-caller-identity
cd /path/to/your/backend
For Node.js:
Create .ebextensions/01-node-settings.config:
option_settings:
aws:elasticbeanstalk:container:nodejs:
NodeCommand: "npm start"
NodeVersion: 20.10.0
aws:elasticbeanstalk:application:environment:
NODE_ENV: production
NPM_USE_PRODUCTION: true
For Python:
Create .ebextensions/01-python-settings.config:
option_settings:
aws:elasticbeanstalk:container:python:
WSGIPath: application:app
aws:elasticbeanstalk:application:environment:
PYTHONPATH: "/var/app/current:$PYTHONPATH"For Docker:
Create Dockerfile and Dockerrun.aws.json
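For reference, a minimal single-container Dockerrun.aws.json might look like this (the image URI and port are placeholders):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 3000 }
  ]
}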
Create .ebextensions/02-autoscaling.config:
option_settings:
aws:autoscaling:asg:
MinSize: 1
MaxSize: 4
Cooldown: 360
aws:autoscaling:trigger:
MeasureName: CPUUtilization
Statistic: Average
Unit: Percent
UpperThreshold: 70
UpperBreachScaleIncrement: 1
LowerThreshold: 30
LowerBreachScaleIncrement: -1
BreachDuration: 5
Period: 5
Create .ebextensions/03-loadbalancer.config:
option_settings:
aws:elasticbeanstalk:environment:
EnvironmentType: LoadBalanced
LoadBalancerType: application
aws:elbv2:listener:default:
ListenerEnabled: true
Protocol: HTTP
aws:elbv2:listener:443:
ListenerEnabled: true
Protocol: HTTPS
SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/xxxxx
SSLPolicy: ELBSecurityPolicy-2016-08
# Initialize EB in your project directory
eb init
# Follow prompts:
# Select region: Choose your preferred region (e.g., us-east-1)
# Select application: Create new Application
# Application name: my-backend-api
# Platform: Choose your platform (Node.js, Python, etc.)
# Platform version: Latest recommended version
# SSH: Yes
# Key pair: Select existing or create new
This creates .elasticbeanstalk/config.yml:
branch-defaults:
main:
environment: my-backend-prod
global:
application_name: my-backend-api
default_ec2_keyname: my-backend-key
default_platform: Node.js 20 running on 64bit Amazon Linux 2023
default_region: us-east-1
sc: git
# Set environment variables for EB environment
eb setenv \
NODE_ENV=production \
DB_HOST=your-db-host \
DB_PORT=5432 \
DB_NAME=myapp \
DB_USERNAME=dbuser \
DB_PASSWORD=your_password \
JWT_SECRET=your_jwt_secret \
FRONTEND_URL=https://yourdomain.com \
AWS_REGION=us-east-1
Or create .ebextensions/04-environment.config:
option_settings:
aws:elasticbeanstalk:application:environment:
NODE_ENV: production
DB_HOST: your-db-host
DB_PORT: 5432
DB_NAME: myapp
FRONTEND_URL: https://yourdomain.com
# Create production environment
eb create my-backend-prod \
--instance-type t3.small \
--min-instances 1 \
--max-instances 4 \
--envvars NODE_ENV=production
# This will:
# - Create EC2 instances
# - Setup load balancer
# - Configure security groups
# - Deploy your application
# - Provide a URL: my-backend-prod.eba-xxxxx.us-east-1.elasticbeanstalk.com
Wait 5-10 minutes for environment creation. Monitor progress:
# Check environment status
eb status
# View events
eb events -f
# View logs
eb logs
# Deploy current code
eb deploy
# Deploy specific environment
eb deploy my-backend-prod
# Deploy and open in browser
eb deploy && eb open
# Request certificate via AWS Console or CLI
aws acm request-certificate \
--domain-name api.yourdomain.com \
--validation-method DNS \
--region us-east-1
# Note the CertificateArn from output
- Go to AWS Console → Certificate Manager
- Click on your certificate
- Click "Create records in Route 53" (or manually add DNS records)
- Wait for validation (5-30 minutes)
# Add HTTPS listener with SSL certificate
eb setenv SSL_CERTIFICATE_ARN=arn:aws:acm:us-east-1:123456789012:certificate/xxxxx
Or update .ebextensions/03-loadbalancer.config with the certificate ARN.
Redeploy:
eb deploy
- Get load balancer DNS name:
eb status
# Or via AWS Console: Elastic Beanstalk → Environment → Configuration → Load balancer
- Add CNAME record in your DNS:
Type: CNAME
Name: api
Value: my-backend-prod.eba-xxxxx.us-east-1.elasticbeanstalk.com
TTL: 300
Or use Route 53 Alias record (recommended).
# Environment management
eb list # List all environments
eb status # Show environment status
eb health # Show environment health
eb open # Open environment in browser
# Deployment
eb deploy # Deploy application
eb deploy --staged # Deploy only staged changes
# Logs and monitoring
eb logs # Fetch logs
eb logs --stream # Stream logs in real-time
eb ssh # SSH into instance
eb events # View recent events
# Configuration
eb config # Edit environment configuration
eb setenv KEY=VALUE # Set environment variable
eb printenv # Print environment variables
# Scaling
eb scale 3 # Set number of instances to 3
# Termination
eb terminate # Terminate environment
Best for: Event-driven APIs, sporadic traffic, pay-per-request pricing
# Install Serverless Framework globally
npm install -g serverless
# Verify installation
serverless --version
# Alternative: Use npx (no global install)
npx serverless --version
# Configure Serverless with AWS credentials
serverless config credentials \
--provider aws \
--key AKIA... \
--secret YOUR_SECRET_KEY \
--profile serverless
# Or export environment variables
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
For Node.js (Express/NestJS):
Your project should adapt to Lambda handler format:
backend/
├── src/
│ ├── handlers/ # Lambda handler functions
│ │ └── api.js
│ ├── app.js # Express/NestJS app
│ └── ...
├── serverless.yml # Serverless configuration
├── package.json
└── .env
cd /path/to/backend
# Install Serverless plugins
npm install --save-dev serverless-offline serverless-dotenv-plugin
# Install AWS Lambda adapter
npm install aws-serverless-express # For Express
# Or
npm install @vendia/serverless-express # Newer fork
Create src/handlers/api.js:
const serverless = require('@vendia/serverless-express');
const app = require('../app'); // Your Express app
// Configure handler
let serverlessHandler;
async function setup() {
if (!serverlessHandler) {
serverlessHandler = serverless({ app });
}
return serverlessHandler;
}
// Lambda handler
exports.handler = async (event, context) => {
const handler = await setup();
return handler(event, context);
};
Modify src/app.js to export the app without listening:
const express = require('express');
const app = express();
// Your middleware and routes
app.use(express.json());
app.get('/health', (req, res) => res.json({ status: 'ok' }));
// ... other routes
// Export app without listening
module.exports = app;
// Only listen if not in Lambda
if (require.main === module) {
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
}
For NestJS, create src/lambda.ts:
import { NestFactory } from '@nestjs/core';
import { ExpressAdapter } from '@nestjs/platform-express';
import { AppModule } from './app.module';
import * as express from 'express';
import { Handler, Context } from 'aws-lambda';
import * as serverlessExpress from '@vendia/serverless-express';
let cachedServer: Handler;
async function bootstrap() {
if (!cachedServer) {
const expressApp = express();
const app = await NestFactory.create(
AppModule,
new ExpressAdapter(expressApp),
);
// Enable CORS
app.enableCors({
origin: process.env.FRONTEND_URL || '*',
credentials: true,
});
await app.init();
cachedServer = serverlessExpress({ app: expressApp });
}
return cachedServer;
}
export const handler: Handler = async (
event: any,
context: Context,
) => {
const server = await bootstrap();
return server(event, context);
};
Update tsconfig.json to output to the dist/ folder.
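A minimal tsconfig.json for that layout might look like this (a sketch - merge with your project's existing compiler options):
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "ES2021",
    "outDir": "./dist",
    "rootDir": "./src",
    "esModuleInterop": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "include": ["src/**/*"]
}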
Create serverless.yml in project root:
service: my-backend-api
frameworkVersion: '3'
provider:
name: aws
runtime: nodejs20.x
region: us-east-1
stage: ${opt:stage, 'prod'}
# Memory and timeout
memorySize: 512
timeout: 30
# Environment variables
environment:
NODE_ENV: production
STAGE: ${self:provider.stage}
DB_HOST: ${env:DB_HOST}
DB_PORT: ${env:DB_PORT}
DB_NAME: ${env:DB_NAME}
DB_USERNAME: ${env:DB_USERNAME}
DB_PASSWORD: ${env:DB_PASSWORD}
JWT_SECRET: ${env:JWT_SECRET}
FRONTEND_URL: ${env:FRONTEND_URL}
# IAM permissions
iam:
role:
statements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource: "arn:aws:dynamodb:${self:provider.region}:*:table/*"
- Effect: Allow
Action:
- s3:GetObject
- s3:PutObject
- s3:DeleteObject
Resource: "arn:aws:s3:::my-bucket/*"
- Effect: Allow
Action:
- ses:SendEmail
- ses:SendRawEmail
Resource: "*"
# VPC configuration (if accessing RDS in VPC)
# vpc:
# securityGroupIds:
# - sg-xxxxx
# subnetIds:
# - subnet-xxxxx
# - subnet-yyyyy
functions:
api:
handler: dist/handlers/api.handler # For JavaScript
# handler: dist/lambda.handler # For TypeScript NestJS
events:
- http:
path: /{proxy+}
method: ANY
cors:
origin: '*'
headers:
- Content-Type
- Authorization
allowCredentials: true
- http:
path: /
method: ANY
cors:
origin: '*'
headers:
- Content-Type
- Authorization
# Separate functions (alternative approach)
# getUsers:
# handler: dist/handlers/users.getAll
# events:
# - http:
# path: /users
# method: GET
# createUser:
# handler: dist/handlers/users.create
# events:
# - http:
# path: /users
# method: POST
plugins:
- serverless-offline
- serverless-dotenv-plugin
# Package configuration
package:
individually: false
exclude:
- .git/**
- .github/**
- .vscode/**
- test/**
- coverage/**
- '*.md'
- .env*
include:
- dist/**
- node_modules/**
# Custom configuration
custom:
serverless-offline:
httpPort: 3000
noPrependStageInUrl: true
# Warm-up plugin (prevent cold starts)
# warmup:
# default:
# enabled: true
# events:
# - schedule: rate(5 minutes)
JavaScript:
# No build needed for JavaScript
TypeScript:
# Build TypeScript
npm run build
# Verify dist/ folder exists
ls dist/
# Deploy to AWS
serverless deploy
# Deploy to specific stage
serverless deploy --stage prod
# Deploy only function code (faster)
serverless deploy function -f api
# Deploy with verbose output
serverless deploy --verbose
Output will show:
Service Information
service: my-backend-api
stage: prod
region: us-east-1
stack: my-backend-api-prod
endpoints:
ANY - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/{proxy+}
ANY - https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/
functions:
api: my-backend-api-prod-api
Your API URL: https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod
# Test health endpoint
curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/health
# Test specific endpoint
curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/api/users
# View logs
serverless logs -f api -t
# Request certificate in us-east-1 (required for API Gateway)
aws acm request-certificate \
--domain-name api.yourdomain.com \
--validation-method DNS \
--region us-east-1
Via AWS Console:
- Go to API Gateway → Custom domain names
- Click Create
- Settings:
Domain name: api.yourdomain.com
Certificate: Select your ACM certificate
Endpoint type: Regional
- Click Create domain name
- Under API mappings, map your API and stage to the domain
- Note the API Gateway domain name (e.g., d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com)
Via Serverless Plugin:
Install plugin:
npm install --save-dev serverless-domain-manager
Add to serverless.yml:
plugins:
- serverless-domain-manager
custom:
customDomain:
domainName: api.yourdomain.com
certificateName: api.yourdomain.com
basePath: ''
stage: ${self:provider.stage}
createRoute53Record: true
endpointType: regional
Create domain:
serverless create_domain
Deploy:
serverless deploy
Add CNAME record:
Type: CNAME
Name: api
Value: d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com
TTL: 300
Or use Route 53 Alias record.
# Deployment
serverless deploy # Deploy entire service
serverless deploy -f api # Deploy single function
serverless deploy --stage prod # Deploy to specific stage
# Information
serverless info # Show service info
serverless info --verbose # Show detailed info
# Logs
serverless logs -f api # Fetch logs
serverless logs -f api -t # Stream logs in real-time
serverless logs -f api --startTime 1h # Logs from last hour
# Invocation
serverless invoke -f api # Invoke function
serverless invoke -f api -l # Invoke and show logs
serverless invoke local -f api # Invoke locally
# Local development
serverless offline # Run locally
serverless offline --port 3000 # Run on specific port
# Environment
serverless print # Print resolved serverless.yml
# Removal
serverless remove # Remove service from AWS
serverless remove --stage dev # Remove specific stage
Use Lambda Layers for dependencies:
Create layers/nodejs/package.json with heavy dependencies:
{
"dependencies": {
"aws-sdk": "^2.1400.0"
}
}
Update serverless.yml:
layers:
dependencies:
path: layers
name: ${self:provider.stage}-dependencies
description: Shared dependencies
compatibleRuntimes:
- nodejs20.x
functions:
api:
handler: dist/handlers/api.handler
layers:
- { Ref: DependenciesLambdaLayer }
Enable Provisioned Concurrency:
functions:
api:
handler: dist/handlers/api.handler
provisionedConcurrency: 2 # Keep 2 instances warm
Use Warm-up Plugin:
npm install --save-dev serverless-plugin-warmup
plugins:
- serverless-plugin-warmup
custom:
warmup:
default:
enabled: true
events:
- schedule: rate(5 minutes)
concurrency: 1
functions:
api:
memorySize: 1024 # More memory = more CPU = faster
timeout: 29 # API Gateway max is 29 seconds
Note: Test different memory sizes for cost/performance balance.
provider:
environment:
CACHE_TTL: 3600
LOG_LEVEL: info
functions:
api:
environment:
SPECIFIC_CONFIG: value
Best for: Docker applications, microservices, complex deployments
# Install Docker
# Download from: https://www.docker.com/
# Verify installation
docker --version
docker-compose --version
# Install AWS CLI (if not already installed)
aws --version
Create Dockerfile in your backend root:
Node.js Example:
# Multi-stage build
FROM node:20-alpine AS builder
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Build application (if TypeScript)
RUN npm run build
# Remove dev dependencies so only production modules are copied below
RUN npm prune --production
# Production image
FROM node:20-alpine
# Install dumb-init (proper signal handling)
RUN apk add --no-cache dumb-init
# Create non-root user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
# Set working directory
WORKDIR /app
# Copy node_modules from builder
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
# Copy built application from builder
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./
# Switch to non-root user
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
CMD node -e "require('http').get('http://localhost:3000/health', (r) => { process.exit(r.statusCode === 200 ? 0 : 1) })"
# Start application
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/main.js"]Python Example:
FROM python:3.11-slim
# Set environment variables
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=1
# Create non-root user
RUN useradd -m -u 1001 appuser
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
# Install Python dependencies
RUN pip install --upgrade pip && \
pip install -r requirements.txt
# Copy application code
COPY --chown=appuser:appuser . .
# Switch to non-root user
USER appuser
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"
# Start application
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]Create .dockerignore:
node_modules
npm-debug.log
dist
.git
.gitignore
.env
.env.*
README.md
.vscode
.idea
coverage
test
*.test.js
*.spec.ts
.DS_Store
# Build image
docker build -t my-backend:latest .
# Run container
docker run -p 3000:3000 \
-e NODE_ENV=production \
-e DB_HOST=localhost \
--name backend-test \
my-backend:latest
# Test endpoint
curl http://localhost:3000/health
# View logs
docker logs -f backend-test
# Stop container
docker stop backend-test
docker rm backend-test
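If you prefer docker-compose for local testing, a minimal sketch (the postgres service and its credentials are illustrative only):
version: "3.8"
services:
  backend:
    build: .
    ports:
      - "3000:3000"
    env_file: .env          # local-only values; never bake secrets into the image
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: localdev   # placeholder for local development only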
# Create ECR repository
aws ecr create-repository \
--repository-name my-backend \
--region us-east-1
# Output will include repository URI:
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend
# Get ECR login password
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin \
123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag image
docker tag my-backend:latest \
123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest
# Push image
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest
Via AWS Console:
- Go to ECS → Clusters
- Click Create Cluster
- Settings:
Cluster name: my-backend-cluster
Infrastructure: AWS Fargate (serverless)
Namespace: my-backend (optional)
Tags: (optional)
- Click Create
Via AWS CLI:
aws ecs create-cluster \
--cluster-name my-backend-cluster \
--capacity-providers FARGATE FARGATE_SPOT \
--default-capacity-provider-strategy \
capacityProvider=FARGATE,weight=1 \
capacityProvider=FARGATE_SPOT,weight=1 \
--region us-east-1
Create task-definition.json:
{
"family": "my-backend-task",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "512",
"memory": "1024",
"executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
"taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
"containerDefinitions": [
{
"name": "backend",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:latest",
"cpu": 512,
"memory": 1024,
"portMappings": [
{
"containerPort": 3000,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "NODE_ENV",
"value": "production"
},
{
"name": "PORT",
"value": "3000"
}
],
"secrets": [
{
"name": "DB_PASSWORD",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password"
},
{
"name": "JWT_SECRET",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:jwt-secret"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/my-backend",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "backend"
}
},
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
}
}
]
}
Create IAM Roles:
Task Execution Role (allows ECS to pull images, write logs):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents",
"secretsmanager:GetSecretValue"
],
"Resource": "*"
}
]
}
Task Role (allows container to access AWS services):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:us-east-1:*:table/*"
}
]
}
Register task definition:
# Create CloudWatch log group
aws logs create-log-group --log-group-name /ecs/my-backend
# Register task definition
aws ecs register-task-definition \
--cli-input-json file://task-definition.json
Via AWS Console:
- Go to EC2 → Load Balancers
- Click Create Load Balancer → Application Load Balancer
- Settings:
Name: my-backend-alb
Scheme: Internet-facing
IP address type: IPv4
VPC: Default (or your VPC)
Availability Zones: Select at least 2
Security Group: Create new (allow HTTP 80, HTTPS 443)
- Listeners:
- HTTP:80 → Redirect to HTTPS
- HTTPS:443 → Forward to target group
- SSL Certificate: Select from ACM
- Click Create
Create Target Group:
Name: my-backend-tg
Target type: IP
Protocol: HTTP
Port: 3000
VPC: (your VPC)
Health check path: /health
# Create service
aws ecs create-service \
--cluster my-backend-cluster \
--service-name my-backend-service \
--task-definition my-backend-task:1 \
--desired-count 2 \
--launch-type FARGATE \
--platform-version LATEST \
--network-configuration "awsvpcConfiguration={subnets=[subnet-xxx,subnet-yyy],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
--load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-backend-tg/xxx,containerName=backend,containerPort=3000" \
--health-check-grace-period-seconds 60
# Register scalable target
aws application-autoscaling register-scalable-target \
--service-namespace ecs \
--resource-id service/my-backend-cluster/my-backend-service \
--scalable-dimension ecs:service:DesiredCount \
--min-capacity 2 \
--max-capacity 10
# Create scaling policy (target tracking)
aws application-autoscaling put-scaling-policy \
--service-namespace ecs \
--resource-id service/my-backend-cluster/my-backend-service \
--scalable-dimension ecs:service:DesiredCount \
--policy-name cpu-scaling \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration file://scaling-policy.json
scaling-policy.json:
{
"TargetValue": 70.0,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ECSServiceAverageCPUUtilization"
},
"ScaleInCooldown": 300,
"ScaleOutCooldown": 60
}
# Build new image
docker build -t my-backend:v2 .
# Tag and push
docker tag my-backend:v2 \
123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:v2
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-backend:v2
# Update task definition with new image tag
# (Modify task-definition.json, change image tag to :v2)
# Register new task definition revision
aws ecs register-task-definition \
--cli-input-json file://task-definition.json
# Update service to use new task definition
aws ecs update-service \
--cluster my-backend-cluster \
--service my-backend-service \
--task-definition my-backend-task:2 \
--force-new-deployment
# List clusters
aws ecs list-clusters
# List services
aws ecs list-services --cluster my-backend-cluster
# Describe service
aws ecs describe-services \
--cluster my-backend-cluster \
--services my-backend-service
# List tasks
aws ecs list-tasks \
--cluster my-backend-cluster \
--service-name my-backend-service
# Describe task
aws ecs describe-tasks \
--cluster my-backend-cluster \
--tasks arn:aws:ecs:us-east-1:123456789012:task/xxx
# View logs
aws logs tail /ecs/my-backend --follow
# Scale service
aws ecs update-service \
--cluster my-backend-cluster \
--service my-backend-service \
--desired-count 5
# Stop task (force redeployment)
aws ecs stop-task \
--cluster my-backend-cluster \
--task arn:aws:ecs:us-east-1:123456789012:task/xxx
Different frameworks have different build requirements:
React (Create React App):
package.json:
{
"scripts": {
"build": "react-scripts build"
}
}
Build output: build/ directory
React (Vite):
package.json:
{
"scripts": {
"build": "vite build"
}
}
Build output: dist/ directory
Next.js (Static Export):
next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'export', // Enable static export
images: {
unoptimized: true, // Required for static export
},
trailingSlash: true,
reactStrictMode: true,
}
module.exports = nextConfig
package.json:
{
"scripts": {
"build": "next build"
}
}
Build output: out/ directory
Next.js (SSR with Netlify):
Install Netlify plugin:
npm install -D @netlify/plugin-nextjs
netlify.toml:
[build]
command = "npm run build"
publish = ".next"
[[plugins]]
package = "@netlify/plugin-nextjs"Vue.js (Vue CLI):
package.json:
{
"scripts": {
"build": "vue-cli-service build"
}
}
Build output: dist/ directory
Nuxt.js:
nuxt.config.js:
export default {
target: 'static', // For static generation
generate: {
fallback: true
}
}
package.json:
{
"scripts": {
"generate": "nuxt generate"
}
}Build command: npm run generate
Build output: dist/ directory
Angular:
package.json:
{
"scripts": {
"build": "ng build --configuration production"
}
}
Build output: dist/project-name/ directory
Svelte/SvelteKit:
svelte.config.js:
import adapter from '@sveltejs/adapter-static';
export default {
kit: {
adapter: adapter({
pages: 'build',
assets: 'build',
fallback: null
})
}
};
Build output: build/ directory
Gatsby:
Automatically configured for Netlify.
Build output: public/ directory
Environment Variable Approach (Recommended):
Create .env.production:
# API Backend URL
REACT_APP_API_URL=https://api.yourdomain.com
# Or for Next.js:
NEXT_PUBLIC_API_URL=https://api.yourdomain.com
# Or for Vue:
VUE_APP_API_URL=https://api.yourdomain.com
# Or for Angular (environment.prod.ts):
# apiUrl: 'https://api.yourdomain.com'
Framework-specific prefixes:
- React (CRA): REACT_APP_
- Next.js: NEXT_PUBLIC_
- Vue: VUE_APP_
- Vite: VITE_
Usage in Code:
React/Next.js:
const API_URL = process.env.REACT_APP_API_URL || 'http://localhost:3000';
// or
const API_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:3000';
// Make API calls
fetch(`${API_URL}/api/users`)
.then(res => res.json())
.then(data => console.log(data));
Vue:
const API_URL = process.env.VUE_APP_API_URL || 'http://localhost:3000';
Angular (environment.prod.ts):
export const environment = {
production: true,
apiUrl: 'https://api.yourdomain.com'
};
# Install dependencies
npm install
# Run build
npm run build
# Verify output directory
ls build/ # or dist/ or out/ depending on framework
# Test built site locally (optional)
npx serve build -s
# or
npx serve dist -s
# or
npx serve out -s
# Visit http://localhost:3000 and test
Ensure:
- Build completes without errors
- No broken links or missing assets
- API calls work (if backend is running)
Create netlify.toml in project root:
React (CRA) / Vue / Angular:
[build]
command = "npm run build"
publish = "build" # or "dist" for Vue/Angular
[build.environment]
NODE_VERSION = "20"
NPM_VERSION = "10"
# Redirect rules for SPA
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
# Security headers
[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "DENY"
X-Content-Type-Options = "nosniff"
X-XSS-Protection = "1; mode=block"
Referrer-Policy = "strict-origin-when-cross-origin"
Permissions-Policy = "camera=(), microphone=(), geolocation=()"
Strict-Transport-Security = "max-age=31536000; includeSubDomains; preload"
# Cache static assets
[[headers]]
for = "/static/*"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
[[headers]]
for = "/*.js"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
[[headers]]
for = "/*.css"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
Next.js (Static):
[build]
command = "npm run build"
publish = "out"
[build.environment]
NODE_VERSION = "20"
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "DENY"
X-Content-Type-Options = "nosniff"
X-XSS-Protection = "1; mode=block"
[[headers]]
for = "/_next/static/*"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
Next.js (SSR with plugin):
[build]
command = "npm run build"
publish = ".next"
[[plugins]]
package = "@netlify/plugin-nextjs"
Gatsby:
[build]
command = "gatsby build"
publish = "public"
[build.environment]
NODE_VERSION = "20"
Ensure build outputs are NOT committed:
# dependencies
node_modules/
# production build
build/
dist/
out/
.next/
public/ # For Gatsby/SvelteKit
# environment variables
.env
.env.local
.env.production.local
.env.development.local
.env.test.local
# logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# misc
.DS_Store
.vscode/
.idea/
# Netlify
.netlify/
# Initialize git
git init
# Add remote repository
git remote add origin https://github.com/your-username/your-frontend-repo.git
# Or for SSH:
git remote add origin git@github.com:your-username/your-frontend-repo.git
# Add all files
git add .
# Commit
git commit -m "feat: prepare for Netlify deployment"
# Push to main branch
git push -u origin main
# Or if using master branch:
git push -u origin master
- Go to app.netlify.com
- Sign in with your Git provider (GitHub/GitLab/Bitbucket)
- Authorize Netlify to access your repositories
- Click "Add new site" → "Import an existing project"
- Select your Git provider
- Authorize Netlify (if first time)
- Select your repository from the list
- If repository not visible:
- Click "Configure Netlify on GitHub"
- Grant access to specific repository or all repositories
Site settings:
Branch to deploy: main (or master)
Build settings:
| Framework | Build Command | Publish Directory |
|---|---|---|
| React (CRA) | npm run build | build |
| React (Vite) | npm run build | dist |
| Next.js (Static) | npm run build | out |
| Next.js (SSR) | npm run build | .next |
| Vue CLI | npm run build | dist |
| Nuxt.js | npm run generate | dist |
| Angular | npm run build | dist/project-name |
| Svelte | npm run build | build |
| SvelteKit | npm run build | build |
| Gatsby | gatsby build | public |
| Astro | npm run build | dist |
Advanced build settings:
Click "Show advanced":
Base directory: (leave empty unless monorepo)
Functions directory: (leave empty unless using Netlify Functions)
Environment variables:
Add environment variables (click "Add environment variable"):
REACT_APP_API_URL = https://api.yourdomain.com
# or
NEXT_PUBLIC_API_URL = https://api.yourdomain.com
# or
VUE_APP_API_URL = https://api.yourdomain.com
Build settings (alternative):
If using netlify.toml, build settings are read from the file; values in netlify.toml take precedence over settings entered in the Netlify UI.
- Click "Deploy site"
- Netlify will:
- Clone your repository
- Install dependencies (npm install)
- Run build command
- Deploy to CDN
- Assign temporary URL
Build process (example):
9:00:00 AM: Build ready to start
9:00:02 AM: build-image version: 12345abcde
9:00:02 AM: Fetching cached dependencies
9:00:05 AM: Installing dependencies
9:00:05 AM: Installing npm packages
9:02:30 AM: npm packages installed
9:02:31 AM: Started building the site
9:02:31 AM: Running build command: npm run build
9:04:45 AM: Build complete
9:04:46 AM: Deploying to production
9:04:58 AM: Site is live!
Wait 2-10 minutes (depending on project size).
- Check Build Logs:
- Go to Deploys tab
- Click on latest deployment
- View build logs for any errors
- Visit Your Site:
- Netlify assigns a random URL: https://random-name-12345.netlify.app
- Click "Open production deploy"
- Test all functionality
- Check for Errors:
- Open browser DevTools (F12)
- Check Console for JavaScript errors
- Check Network tab for failed API calls
- Verify environment variables loaded correctly
- In Netlify dashboard, go to Site configuration → Environment variables
- Or from sidebar: Site settings → Environment variables
Click "Add a variable" → "Add a single variable"
Common environment variables:
Key: REACT_APP_API_URL
Value: https://api.yourdomain.com
Scopes: Production (or All scopes)
Key: REACT_APP_ENVIRONMENT
Value: production
Scopes: Production
Key: REACT_APP_VERSION
Value: 1.0.0
Scopes: All scopes
For different deploy contexts:
Production: Used for production deployments
Deploy previews: Used for pull request previews
Branch deploys: Used for branch deployments
Example:
# Production
REACT_APP_API_URL = https://api.yourdomain.com
# Deploy previews (PRs)
REACT_APP_API_URL = https://staging-api.yourdomain.com
# Branch deploys (dev branch)
REACT_APP_API_URL = https://dev-api.yourdomain.com
Environment variables are only available after redeployment:
- Go to Deploys tab
- Click "Trigger deploy" → "Clear cache and deploy site"
- Wait for deployment to complete
Verify variables:
Add a debug endpoint or console log (temporarily):
console.log('API URL:', process.env.REACT_APP_API_URL);
Check browser console after deployment.
Site configuration → Build & deploy → Deploy contexts
Production branch:
Branch: main
Branch deploys:
- All branches: Deploy all branches (useful for testing)
- Only production branch: Deploy only main/master
- Let me add individual branches: Select specific branches
Deploy previews:
- Any pull request against your production branch: Recommended
- None: Disable PR previews
Create webhooks to trigger deployments:
- Go to Build hooks
- Click "Add build hook"
- Settings:
Build hook name: Deploy from external source
Branch to build: main
- Click "Save"
- Copy webhook URL:
https://api.netlify.com/build_hooks/xxxxx
Use cases:
- Trigger deploy from CI/CD pipeline
- Deploy when CMS content changes
- Scheduled deployments
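For the scheduled-deployments case, a minimal Node sketch (assuming the webhook URL copied above is stored in a NETLIFY_BUILD_HOOK environment variable) can be run from cron or any CI scheduler; it can also be triggered with plain curl, as shown next:
// trigger-deploy.js — POSTs to the Netlify build hook (Node 18+, global fetch)
const BUILD_HOOK_URL = process.env.NETLIFY_BUILD_HOOK; // e.g. https://api.netlify.com/build_hooks/xxxxx

async function triggerDeploy() {
  const res = await fetch(BUILD_HOOK_URL, { method: 'POST' });
  if (!res.ok) throw new Error(`Build hook returned ${res.status}`);
  console.log('Deploy triggered at', new Date().toISOString());
}

triggerDeploy().catch((err) => {
  console.error('Failed to trigger deploy:', err);
  process.exit(1);
});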
Trigger via curl:
curl -X POST -d {} https://api.netlify.com/build_hooks/xxxxx
Asset optimization:
Site configuration → Build & deploy → Post processing
✓ Bundle CSS: Combine CSS files
✓ Minify CSS: Minimize CSS file sizes
✓ Minify JS: Minimize JavaScript file sizes
✓ Compress images: Lossless image compression
✓ Pretty URLs: Strip .html extension from URLs
For React, Vue, Angular apps that use client-side routing:
Method 1: netlify.toml (Recommended)
Already configured in Step 1.4.
Method 2: _redirects file
Create public/_redirects (or in publish directory):
/* /index.html 200
Method 3: Netlify UI
Site configuration → Redirects and rewrites → Add rule
From: /*
To: /index.html
Status: 200
Proxy API requests through Netlify:
netlify.toml:
[[redirects]]
from = "/api/*"
to = "https://api.yourdomain.com/:splat"
status = 200
force = true
headers = {X-From = "Netlify"}
Then in frontend code:
// Instead of:
fetch('https://api.yourdomain.com/users')
// Use:
fetch('/api/users')
Benefits:
- Avoid CORS issues during development
- Hide actual API URL from client
- Easier URL management
Redirect HTTP to HTTPS:
[[redirects]]
from = "http://yourdomain.com/*"
to = "https://yourdomain.com/:splat"
status = 301
force = true
Redirect www to non-www:
[[redirects]]
from = "https://www.yourdomain.com/*"
to = "https://yourdomain.com/:splat"
status = 301
force = true
Redirect old paths:
[[redirects]]
from = "/old-page"
to = "/new-page"
status = 301
[[redirects]]
from = "/blog/*"
to = "/articles/:splat"
status = 301
Language redirects based on geolocation:
[[redirects]]
from = "/*"
to = "/en/:splat"
status = 302
conditions = {Country = ["US", "CA", "GB"]}
[[redirects]]
from = "/*"
to = "/es/:splat"
status = 302
conditions = {Country = ["ES", "MX", "AR"]}
Netlify provides built-in form handling:
Add netlify attribute to form:
<form name="contact" method="POST" data-netlify="true">
<input type="hidden" name="form-name" value="contact">
<input type="text" name="name" required>
<input type="email" name="email" required>
<textarea name="message" required></textarea>
<button type="submit">Send</button>
</form>
For React, submit the form data with fetch:
function ContactForm() {
const handleSubmit = (e) => {
e.preventDefault();
const form = e.target;
fetch('/', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams(new FormData(form)).toString()
})
.then(() => alert('Form submitted!'))
.catch((error) => alert(error));
};
return (
<form name="contact" method="POST" onSubmit={handleSubmit} data-netlify="true">
<input type="hidden" name="form-name" value="contact" />
<input type="text" name="name" required />
<input type="email" name="email" required />
<textarea name="message" required></textarea>
<button type="submit">Send</button>
</form>
);
}
Site configuration → Forms → Form notifications
Configure email notifications or webhooks when form is submitted.
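If the notification target is your own backend, a minimal Express receiver could look like the sketch below. The payload field names (form_name, data) are assumptions — log the raw body on the first submission to confirm the actual shape.
// form-webhook.js — endpoint for Netlify's outgoing form notification webhook
const express = require('express');
const app = express();

app.use(express.json());

app.post('/netlify-forms', (req, res) => {
  console.log('Raw payload:', JSON.stringify(req.body)); // inspect once, then remove
  const { form_name, data } = req.body; // assumed payload shape
  console.log(`Submission on form "${form_name}" from ${data && data.email}`);
  res.sendStatus(200); // acknowledge quickly; do any heavy work asynchronously
});

app.listen(4000, () => console.log('Webhook receiver listening on :4000'));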
Enable reCAPTCHA or honeypot:
<form name="contact" method="POST" data-netlify="true" data-netlify-recaptcha="true">
<!-- form fields -->
<div data-netlify-recaptcha="true"></div>
<button type="submit">Send</button>
</form>
Test all major features:
✓ Homepage loads
✓ Navigation works
✓ All pages accessible
✓ Images load correctly
✓ Forms submit (if applicable)
✓ API calls work
✓ Authentication works (if applicable)
✓ Responsive design (mobile/tablet)
Use Chrome DevTools Lighthouse or GTmetrix:
Check:
- Performance score
- First Contentful Paint
- Largest Contentful Paint
- Time to Interactive
- Total Blocking Time
- Cumulative Layout Shift
Test API endpoints from deployed frontend:
// Test health endpoint
fetch('https://api.yourdomain.com/health')
.then(res => res.json())
.then(data => console.log('Backend healthy:', data))
.catch(err => console.error('Backend error:', err));
// Test actual endpoint
fetch('https://api.yourdomain.com/api/users')
.then(res => res.json())
.then(data => console.log('Users:', data))
.catch(err => console.error('Error:', err));
Check browser console for:
- CORS errors
- API errors
- Network failures
- Authentication issues
Test on multiple browsers:
- Chrome
- Firefox
- Safari
- Edge
- Mobile browsers (iOS Safari, Android Chrome)
Use tools such as BrowserStack or LambdaTest for cross-browser testing.
- Go to Domain management → Domains
- Click "Add a domain"
- Enter your domain: yourdomain.com
- Click "Verify"
Netlify will check if you own the domain.
You have two options:
Option A: Use Netlify DNS (Recommended)
- Netlify will show nameservers:
dns1.p01.nsone.net
dns2.p01.nsone.net
dns3.p01.nsone.net
dns4.p01.nsone.net
- Update nameservers at your domain registrar (GoDaddy, Namecheap, etc.):
- Log in to your domain registrar
- Find DNS/Nameserver settings
- Replace existing nameservers with Netlify's nameservers
- Save changes
- Wait for DNS propagation (2-48 hours, usually < 1 hour)
- Verify:
nslookup yourdomain.com
dig yourdomain.com
Option B: Use External DNS
Keep your current DNS provider and add CNAME record:
Type: CNAME
Name: www (or @)
Value: random-name-12345.netlify.app
TTL: 300
For apex/root domain (@):
Type: A
Name: @
Value: 75.2.60.5
TTL: 300
Or use ALIAS/ANAME record (if provider supports):
Type: ALIAS
Name: @
Value: random-name-12345.netlify.app
TTL: 300
Add www subdomain:
- In Netlify, go to Domain management → Domains
- Click "Add domain alias"
- Enter: www.yourdomain.com
- Netlify automatically redirects www to non-www (or vice versa)
Configure redirect preference:
- Primary domain: yourdomain.com
- Redirect: www.yourdomain.com → yourdomain.com (or opposite)
Netlify automatically provisions SSL certificate:
- Domain management → HTTPS
- Wait for certificate provisioning (can take up to 24 hours)
- Status should show: "Netlify provides HTTPS for your site"
Force HTTPS redirect:
✓ Force HTTPS on all pages
HSTS (HTTP Strict Transport Security):
✓ Enable HSTS
Netlify Functions allow serverless backend functionality.
mkdir netlify/functions
Create netlify/functions/hello.js:
exports.handler = async (event, context) => {
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
},
body: JSON.stringify({
message: 'Hello from Netlify Function!',
timestamp: new Date().toISOString()
})
};
};
[build]
functions = "netlify/functions"
After deployment, function available at:
https://yourdomain.com/.netlify/functions/hello
Call from frontend:
fetch('/.netlify/functions/hello')
.then(res => res.json())
.then(data => console.log(data));
Create netlify/functions/get-users.js:
const { Client } = require('pg');
exports.handler = async (event, context) => {
// Database connection
const client = new Client({
host: process.env.DB_HOST,
port: process.env.DB_PORT,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
ssl: { rejectUnauthorized: false }
});
try {
await client.connect();
const result = await client.query('SELECT * FROM users LIMIT 10');
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
},
body: JSON.stringify(result.rows)
};
} catch (error) {
console.error('Database error:', error);
return {
statusCode: 500,
body: JSON.stringify({ error: 'Database query failed' })
};
} finally {
await client.end();
}
};
Install dependencies:
cd netlify/functions
npm init -y
npm install pg
- PostgreSQL - Most popular, feature-rich, open-source
- MySQL - Widely used, good performance
- MariaDB - MySQL fork with additional features
- Amazon Aurora - AWS-managed, MySQL/PostgreSQL compatible (more expensive but better performance)
- SQL Server - Microsoft SQL Server
- Oracle - Enterprise database
Recommendation: PostgreSQL for most applications
Via AWS Console:
- Go to RDS → Databases → Create database
- Choose creation method:
- Standard create (more options)
- Easy create (simplified)
- Engine options:
Engine type: PostgreSQL
Version: PostgreSQL 15.4 (or latest)
- Templates:
- Production (Multi-AZ, enhanced monitoring)
- Dev/Test (Single-AZ)
- Free tier (t3.micro, 20GB, single-AZ)
- Settings:
DB instance identifier: my-app-db
Master username: dbadmin
Master password: YourSecurePassword123!
Confirm password: YourSecurePassword123!
- Instance configuration:
DB instance class: db.t3.micro (Free tier) or db.t3.small
Storage type: General Purpose SSD (gp3)
Allocated storage: 20 GB
Storage autoscaling: Enable (max: 100 GB)
- Availability & durability:
Multi-AZ deployment: No (dev/test) or Yes (production - automatic failover)
- Connectivity:
VPC: Default VPC (or custom)
Subnet group: default
Public access: Yes (if accessing from outside AWS) or No (if only from EC2/Lambda in same VPC)
VPC security group: Create new, name: rds-security-group
Availability Zone: No preference
- Database authentication:
Password authentication (standard)
Password and IAM database authentication (for IAM roles)
Password and Kerberos authentication (for enterprise)
- Additional configuration:
Initial database name: myapp_db
DB parameter group: default
Option group: default
Backup: enable automatic backups, 7-day retention, backup window no preference
Encryption: enable (recommended for production)
Monitoring: enable Enhanced Monitoring, 60-second granularity
Maintenance: enable auto minor version upgrade
Deletion protection: Enable (for production)
- Click Create database
Wait 5-15 minutes for database creation.
- Go to RDS → Databases → Select your database
- Click on VPC security groups link
- Edit inbound rules:
For public access (development):
Type: PostgreSQL (or MySQL)
Port: 5432 (PostgreSQL) or 3306 (MySQL)
Source: My IP (your current IP)
Description: Allow from my IP
For EC2 access:
Type: PostgreSQL
Port: 5432
Source: sg-xxxxx (EC2 security group ID)
Description: Allow from EC2 instances
For Lambda access (same VPC):
Type: PostgreSQL
Port: 5432
Source: sg-xxxxx (Lambda security group ID)
Description: Allow from Lambda functions
- Go to RDS → Databases → Select your database
- Copy Endpoint & port:
Endpoint: my-app-db.xxxxx.us-east-1.rds.amazonaws.com
Port: 5432
From local machine (psql):
# Install PostgreSQL client
sudo apt install postgresql-client # Ubuntu
brew install postgresql # macOS
# Connect to database
psql -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
-U dbadmin \
-d myapp_db \
-p 5432
# Enter password when prompted
From application (Node.js):
const { Pool } = require('pg');
const pool = new Pool({
host: 'my-app-db.xxxxx.us-east-1.rds.amazonaws.com',
port: 5432,
database: 'myapp_db',
user: 'dbadmin',
password: 'YourSecurePassword123!',
ssl: {
rejectUnauthorized: false
},
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
// Test connection
pool.query('SELECT NOW()', (err, res) => {
if (err) {
console.error('Database connection error:', err);
} else {
console.log('Database connected:', res.rows[0].now);
}
});
From application (Python):
import psycopg2
conn = psycopg2.connect(
host="my-app-db.xxxxx.us-east-1.rds.amazonaws.com",
port=5432,
database="myapp_db",
user="dbadmin",
password="YourSecurePassword123!",
sslmode="require"
)
# Test connection
cur = conn.cursor()
cur.execute('SELECT version()')
version = cur.fetchone()
print(f'PostgreSQL version: {version[0]}')
cur.close()
conn.close()
-- Connect to database
\c myapp_db
-- Create users table
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
username VARCHAR(100) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
first_name VARCHAR(100),
last_name VARCHAR(100),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Create index
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_username ON users(username);
-- Create posts table
CREATE TABLE posts (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
title VARCHAR(255) NOT NULL,
content TEXT,
published BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Create index
CREATE INDEX idx_posts_user_id ON posts(user_id);
-- Verify tables created
\dt
Install TypeORM:
npm install typeorm pg reflect-metadata
Create ormconfig.json:
{
"type": "postgres",
"host": "my-app-db.xxxxx.us-east-1.rds.amazonaws.com",
"port": 5432,
"username": "dbadmin",
"password": "YourSecurePassword123!",
"database": "myapp_db",
"synchronize": false,
"logging": true,
"entities": ["src/entities/**/*.ts"],
"migrations": ["src/migrations/**/*.ts"],
"subscribers": ["src/subscribers/**/*.ts"],
"cli": {
"entitiesDir": "src/entities",
"migrationsDir": "src/migrations",
"subscribersDir": "src/subscribers"
},
"ssl": {
"rejectUnauthorized": false
}
}
Create migration:
npx typeorm migration:create -n InitialSchema
Run migration:
npx typeorm migration:run
Automated backups (configured during creation):
- RDS automatically backs up database
- Retention: 1-35 days
- Point-in-time recovery available
Manual snapshot:
- Go to RDS → Databases → Select database
- Actions → Take snapshot
- Enter snapshot name
- Click Take snapshot
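The same manual snapshot can be taken programmatically — a sketch using the AWS SDK for JavaScript (aws-sdk v2, the same library used in the DynamoDB examples below), with the instance identifier created earlier:
// snapshot.js — create a manual RDS snapshot
const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

const params = {
  DBInstanceIdentifier: 'my-app-db',
  DBSnapshotIdentifier: `my-app-db-manual-${Date.now()}`, // must be unique
};

rds.createDBSnapshot(params, (err, data) => {
  if (err) console.error('Snapshot failed:', err);
  else console.log('Snapshot started:', data.DBSnapshot.DBSnapshotIdentifier);
});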
Restore from snapshot:
- Go to RDS → Snapshots
- Select snapshot
- Actions → Restore snapshot
- Configure new instance settings
- Click Restore DB instance
Export database (pg_dump):
# Export to a custom-format dump file
pg_dump -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
-U dbadmin \
-d myapp_db \
-F c \
-f backup.dump
# Restore from the dump file
pg_restore -h my-app-db.xxxxx.us-east-1.rds.amazonaws.com \
-U dbadmin \
-d myapp_db \
-F c \
backup.dump
Via AWS Console:
- Go to DynamoDB → Tables → Create table
- Table details:
Table name: Users
Partition key: userId (String)
Sort key: (optional) timestamp (Number)
- Table settings:
- Customize settings
- Default settings (easier)
- Table class:
- DynamoDB Standard (frequent access)
- DynamoDB Standard-IA (infrequent access, cheaper)
- Capacity mode:
- On-demand: Pay per request (good for unpredictable traffic)
- Provisioned: Set read/write capacity units (cheaper for predictable traffic)
- Read capacity: 5 units
- Write capacity: 5 units
- Auto scaling: Enable
- Encryption:
- Owned by Amazon DynamoDB (free)
- AWS managed key (KMS - additional cost)
- Customer managed key (KMS - most control)
- Click Create table
- Select your table
- Indexes tab → Create index
- Settings:
Partition key: email (String)
Sort key: (optional)
Index name: EmailIndex
Projected attributes: All
- Click Create index
const AWS = require('aws-sdk');
// Configure AWS SDK
AWS.config.update({
region: 'us-east-1',
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});
const dynamodb = new AWS.DynamoDB.DocumentClient();
// Put item
const putItem = async (userId, data) => {
const params = {
TableName: 'Users',
Item: {
userId: userId,
email: data.email,
username: data.username,
createdAt: Date.now()
}
};
try {
await dynamodb.put(params).promise();
console.log('Item added successfully');
} catch (error) {
console.error('Error adding item:', error);
}
};
// Get item
const getItem = async (userId) => {
const params = {
TableName: 'Users',
Key: {
userId: userId
}
};
try {
const result = await dynamodb.get(params).promise();
return result.Item;
} catch (error) {
console.error('Error getting item:', error);
}
};
// Query by GSI
const getUserByEmail = async (email) => {
const params = {
TableName: 'Users',
IndexName: 'EmailIndex',
KeyConditionExpression: 'email = :email',
ExpressionAttributeValues: {
':email': email
}
};
try {
const result = await dynamodb.query(params).promise();
return result.Items[0];
} catch (error) {
console.error('Error querying by email:', error);
}
};
// Update item
const updateItem = async (userId, updates) => {
const params = {
TableName: 'Users',
Key: {
userId: userId
},
UpdateExpression: 'set username = :username, updatedAt = :updatedAt',
ExpressionAttributeValues: {
':username': updates.username,
':updatedAt': Date.now()
},
ReturnValues: 'ALL_NEW'
};
try {
const result = await dynamodb.update(params).promise();
return result.Attributes;
} catch (error) {
console.error('Error updating item:', error);
}
};
// Delete item
const deleteItem = async (userId) => {
const params = {
TableName: 'Users',
Key: {
userId: userId
}
};
try {
await dynamodb.delete(params).promise();
console.log('Item deleted successfully');
} catch (error) {
console.error('Error deleting item:', error);
}
};
Create Cluster:
- Go to Amazon DocumentDB → Clusters → Create
- Configuration:
Cluster identifier: my-mongodb-cluster
Engine version: 5.0
Instance class: db.t3.medium
Number of instances: 1 (or 3 for production)
Authentication: Username and password
Username: dbadmin
Password: YourSecurePassword123!
VPC: Default
Subnet group: default
- Click Create cluster
Connection string:
mongodb://dbadmin:YourSecurePassword123!@my-mongodb-cluster.cluster-xxxxx.us-east-1.docdb.amazonaws.com:27017/?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred
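A minimal Node connection sketch using the official mongodb driver. DocumentDB requires TLS with Amazon's CA bundle; the bundle filename below is an assumption (AWS has distributed it as rds-combined-ca-bundle.pem and, more recently, global-bundle.pem):
// docdb-connect.js — connect to DocumentDB and run a test query
const { MongoClient } = require('mongodb');

const uri =
  'mongodb://dbadmin:YourSecurePassword123!@my-mongodb-cluster.cluster-xxxxx.us-east-1.docdb.amazonaws.com:27017/?replicaSet=rs0&readPreference=secondaryPreferred';

async function main() {
  const client = new MongoClient(uri, {
    tls: true,
    tlsCAFile: 'global-bundle.pem', // CA bundle downloaded from AWS; path is an assumption
  });
  await client.connect();
  const users = await client.db('myapp_db').collection('users').find().limit(5).toArray();
  console.log(users);
  await client.close();
}

main().catch(console.error);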
# SSH into EC2
ssh -i ~/.ssh/my-key.pem ubuntu@YOUR_EC2_IP
# Import MongoDB public GPG key
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
# Add MongoDB repository
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
# Update package database
sudo apt update
# Install MongoDB
sudo apt install -y mongodb-org
# Start MongoDB
sudo systemctl start mongod
sudo systemctl enable mongod
# Check status
sudo systemctl status mongod
# Secure MongoDB
mongosh
> use admin
> db.createUser({
user: "admin",
pwd: "YourSecurePassword123!",
roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
> exit
# Edit MongoDB config
sudo nano /etc/mongod.conf
# Enable authentication
security:
authorization: enabled
# Allow remote connections
net:
bindIp: 0.0.0.0
port: 27017
# Restart MongoDB
sudo systemctl restart mongod
Connection string:
mongodb://admin:YourSecurePassword123!@YOUR_EC2_IP:27017/myapp_db?authSource=admin
Choose a domain registrar:
- Namecheap - Affordable, good support
- GoDaddy - Popular, easy to use
- Google Domains - Simple interface
- AWS Route 53 - Integrated with AWS
- Cloudflare - Free DNS, good performance
If using Netlify DNS:
Netlify automatically configures:
A @ 75.2.60.5
CNAME www yourdomain.netlify.app
If using external DNS:
Add these records:
# Apex domain (yourdomain.com)
Type: A
Name: @
Value: 75.2.60.5
TTL: 300
# www subdomain
Type: CNAME
Name: www
Value: random-name-12345.netlify.app
TTL: 300
Or use ALIAS/ANAME for apex:
Type: ALIAS
Name: @
Value: random-name-12345.netlify.app
TTL: 300
If using Elastic IP:
Type: A
Name: api
Value: 54.123.45.67 (your Elastic IP)
TTL: 300
If using Load Balancer:
Type: CNAME
Name: api
Value: my-backend-alb-xxxxx.us-east-1.elb.amazonaws.com
TTL: 300
Or use ALIAS (Route 53 only):
Type: A (Alias)
Name: api
Value: ALB DNS name
TTL: 300
If using API Gateway:
Type: CNAME
Name: api
Value: d-xxxxxxxxxx.execute-api.us-east-1.amazonaws.com
TTL: 300
For sending emails from your domain:
MX Records (for receiving):
Type: MX
Name: @
Priority: 10
Value: mail.yourdomain.com
TTL: 3600
SPF Record (prevent spoofing):
Type: TXT
Name: @
Value: v=spf1 include:_spf.google.com ~all
TTL: 3600
DKIM Record (email authentication):
Type: TXT
Name: default._domainkey
Value: v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA...
TTL: 3600
DMARC Record (email policy):
Type: TXT
Name: _dmarc
Value: v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com
TTL: 3600
# Check A record
dig yourdomain.com A
# Check CNAME record
dig www.yourdomain.com CNAME
dig api.yourdomain.com CNAME
# Check from different locations
nslookup yourdomain.com 8.8.8.8 # Google DNS
nslookup yourdomain.com 1.1.1.1 # Cloudflare DNS
# Online tools
# https://dnschecker.org
# https://www.whatsmydns.net
DNS propagation typically takes:
- Minutes to hours for most changes
- Up to 48 hours for nameserver changes
- Faster with lower TTL values
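To script the same check, a small Node sketch that mirrors the nslookup commands above, querying Google's and Cloudflare's resolvers directly:
// dns-check.js — resolve the A record against specific DNS servers
const { Resolver } = require('dns').promises;

async function checkDns(domain, serverIp) {
  const resolver = new Resolver();
  resolver.setServers([serverIp]);
  try {
    const addresses = await resolver.resolve4(domain);
    console.log(`${serverIp} → ${domain}: ${addresses.join(', ')}`);
  } catch (err) {
    console.log(`${serverIp} → ${domain}: not resolving yet (${err.code})`);
  }
}

(async () => {
  for (const server of ['8.8.8.8', '1.1.1.1']) {
    await checkDns('yourdomain.com', server);
  }
})();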
Netlify automatically provisions SSL certificates via Let's Encrypt.
Enable HTTPS:
- Go to Domain management → HTTPS
- Netlify provisions certificate automatically (5-24 hours)
- Enable "Force HTTPS"
- Enable "HSTS" (recommended)
Verify SSL:
curl -I https://yourdomain.com
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com
Via AWS Console:
- Go to ACM (Certificate Manager)
- Ensure you're in us-east-1 region (required for CloudFront and API Gateway)
- Click "Request a certificate"
- Certificate type:
- Public certificate (for public domains)
- Private certificate (for internal use)
- Domain names:
api.yourdomain.com
*.yourdomain.com (wildcard - optional)
- Validation method:
- DNS validation (recommended, automatic renewal)
- Email validation (manual)
- Click "Request"
DNS Validation:
- ACM provides CNAME records for validation
- Add CNAME record to your DNS:
Type: CNAME
Name: _xxxxx.api.yourdomain.com
Value: _xxxxx.acm-validations.aws
TTL: 300
- Or click "Create records in Route 53" (if using Route 53)
- Wait for validation (5-30 minutes)
- Status changes to "Issued"
Email Validation:
- ACM sends emails to domain administrators
- Click validation link in email
- Certificate status changes to "Issued"
For EC2 with Nginx:
Export certificate from ACM is not possible. Use Let's Encrypt instead (see EC2 section).
For Load Balancer:
- Go to EC2 → Load Balancers → Select your ALB
- Listeners → Add listener
- Protocol: HTTPS
- Port: 443
- Default SSL certificate: Select from ACM
- Security policy: ELBSecurityPolicy-2016-08
- Default action: Forward to target group
- Save
For API Gateway:
- Go to API Gateway → Custom domain names
- Select your domain
- Configurations:
- ACM certificate: Select your certificate
- Endpoint type: Regional or Edge
- Save
For CloudFront:
- Go to CloudFront → Distributions → Select distribution
- General → Edit
- SSL Certificate:
- Custom SSL Certificate
- Select certificate from ACM (must be in us-east-1)
- Save
Nginx:
server {
listen 80;
server_name api.yourdomain.com;
return 301 https://$server_name$request_uri;
}
Load Balancer:
- Listener HTTP:80 → Redirect to HTTPS:443
Netlify:
- Enable "Force HTTPS" in settings
Nginx:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
Netlify:
- Enable HSTS in Domain settings
Load Balancer:
- Add custom header in target group attributes
Let's Encrypt (Nginx/EC2):
- Auto-renews via Certbot systemd timer
- Verify:
sudo systemctl status certbot.timer
ACM:
- Auto-renews if using DNS validation
- No action required
Manual renewal (Let's Encrypt):
sudo certbot renew
sudo systemctl reload nginx
Use SSL testing tools:
SSL Labs:
https://www.ssllabs.com/ssltest/analyze.html?d=api.yourdomain.com
Target: A or A+ rating
Common issues:
- Weak cipher suites
- Missing intermediate certificates
- Incorrect certificate chain
- No HSTS header
- Vulnerable to known attacks
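A quick scripted alternative to the openssl checks — a small Node sketch using the built-in tls module to print the certificate's expiry:
// cert-check.js — print certificate expiry for a host
const tls = require('tls');

const host = 'api.yourdomain.com';
const socket = tls.connect(443, host, { servername: host }, () => {
  const cert = socket.getPeerCertificate();
  console.log(`Subject CN: ${cert.subject && cert.subject.CN}`);
  console.log(`Valid until: ${cert.valid_to}`);
  const daysLeft = Math.floor((new Date(cert.valid_to) - Date.now()) / 86400000);
  console.log(`Days remaining: ${daysLeft}`);
  socket.end();
});
socket.on('error', console.error);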
- Site configuration → Environment variables
- Click "Add a variable"
- Add key-value pairs:
Common variables:
REACT_APP_API_URL = https://api.yourdomain.com
REACT_APP_ENV = production
REACT_APP_GA_TRACKING_ID = UA-XXXXXXXXX-X
REACT_APP_STRIPE_PUBLIC_KEY = pk_live_xxxxx
REACT_APP_SENTRY_DSN = https://xxxxx@sentry.io/xxxxx
For Next.js:
NEXT_PUBLIC_API_URL = https://api.yourdomain.com
NEXT_PUBLIC_ENV = production
NEXT_PUBLIC_ANALYTICS_ID = G-XXXXXXXXXX
For Vue:
VUE_APP_API_URL = https://api.yourdomain.com
VUE_APP_ENV = production
Set different values for different contexts:
Production:
REACT_APP_API_URL = https://api.yourdomain.com
Deploy Previews:
REACT_APP_API_URL = https://staging-api.yourdomain.com
Branch Deploys (dev):
REACT_APP_API_URL = https://dev-api.yourdomain.com
- Database credentials
- API secret keys (only public keys)
- AWS access keys
- Private API keys
Use backend for sensitive operations.
Method 1: .env file
# Create .env file
nano /var/www/backend/.env
# Add variables
NODE_ENV=production
PORT=3000
DB_HOST=my-db.xxxxx.rds.amazonaws.com
DB_PASSWORD=SecurePassword123!
JWT_SECRET=your-secret-key-min-32-chars
Secure file:
chmod 600 /var/www/backend/.env
Method 2: Export in shell
# Add to .bashrc or .profile
export NODE_ENV=production
export DB_HOST=my-db.xxxxx.rds.amazonaws.com
Method 3: systemd environment file
Create /etc/environment.d/backend.conf:
NODE_ENV=production
DB_HOST=my-db.xxxxx.rds.amazonaws.com
Or in systemd service file:
[Service]
EnvironmentFile=/var/www/backend/.env
Via EB CLI:
eb setenv \
NODE_ENV=production \
DB_HOST=my-db.xxxxx.rds.amazonaws.com \
DB_PASSWORD=SecurePassword123! \
JWT_SECRET=your-secret-key
Via AWS Console:
- Elastic Beanstalk → Environments → Select environment
- Configuration → Software → Edit
- Environment properties → Add variables
- Apply
In serverless.yml:
provider:
environment:
NODE_ENV: production
DB_HOST: ${env:DB_HOST}
JWT_SECRET: ${env:JWT_SECRET}
Via AWS Console:
- Lambda → Functions → Select function
- Configuration → Environment variables → Edit
- Add key-value pairs
- Save
In task definition JSON:
{
"containerDefinitions": [
{
"environment": [
{
"name": "NODE_ENV",
"value": "production"
},
{
"name": "PORT",
"value": "3000"
}
],
"secrets": [
{
"name": "DB_PASSWORD",
"valueFrom": "arn:aws:secretsmanager:region:account:secret:name"
}
]
}
]
}
Via AWS Console:
- Go to Secrets Manager → Secrets → Store a new secret
- Secret type:
- Credentials for RDS database
- Other type of secret (custom)
- Key/value pairs:
DB_PASSWORD: SecurePassword123!
JWT_SECRET: your-jwt-secret-key
API_KEY: your-api-key
- Secret name: prod/backend/config
- Rotation: Enable (optional)
- Store
Node.js:
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });
async function getSecret(secretName) {
try {
const data = await secretsManager.getSecretValue({
SecretId: secretName
}).promise();
return JSON.parse(data.SecretString);
} catch (error) {
console.error('Error retrieving secret:', error);
throw error;
}
}
// Use in app initialization
(async () => {
const secrets = await getSecret('prod/backend/config');
process.env.DB_PASSWORD = secrets.DB_PASSWORD;
process.env.JWT_SECRET = secrets.JWT_SECRET;
// Start application
startApp();
})();
Python:
import boto3
import json
def get_secret(secret_name, region='us-east-1'):
client = boto3.client('secretsmanager', region_name=region)
try:
response = client.get_secret_value(SecretId=secret_name)
return json.loads(response['SecretString'])
except Exception as e:
print(f'Error retrieving secret: {e}')
raise
# Use in app
secrets = get_secret('prod/backend/config')
db_password = secrets['DB_PASSWORD']
jwt_secret = secrets['JWT_SECRET']
Add to EC2/Lambda IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/backend/*"
}
]
}
# Use UPPER_CASE with underscores
DATABASE_URL
API_KEY
JWT_SECRET
# Prefix by framework (frontend)
REACT_APP_API_URL
NEXT_PUBLIC_API_URL
VUE_APP_API_URL
VITE_API_URL
# Organize by category
DB_HOST
DB_PORT
DB_NAME
DB_USERNAME
DB_PASSWORD
AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
SMTP_HOST
SMTP_PORT
SMTP_USER
SMTP_PASSWORD
✓ Never commit .env files to git
✓ Use Secrets Manager for sensitive data
✓ Rotate secrets regularly
✓ Use different secrets for each environment
✓ Limit IAM permissions to minimum required
✓ Never log secret values
✓ Use encrypted connections for secret retrieval
✗ Don't hardcode secrets in code
✗ Don't expose secrets in frontend
✗ Don't use same secrets across environments
Validate environment variables on app startup:
const requiredEnvVars = [
'NODE_ENV',
'PORT',
'DB_HOST',
'DB_PASSWORD',
'JWT_SECRET',
'FRONTEND_URL'
];
for (const envVar of requiredEnvVars) {
if (!process.env[envVar]) {
console.error(`Missing required environment variable: ${envVar}`);
process.exit(1);
}
}
console.log('✓ All required environment variables are set');
Cross-Origin Resource Sharing (CORS) allows the frontend (on Netlify) to make requests to the backend (on AWS) even though they live on different domains.
Same-Origin Policy blocks requests like:
Frontend: https://yourdomain.com
Backend: https://api.yourdomain.com ← Blocked without CORS
Install cors package:
npm install cors
Basic configuration:
const express = require('express');
const cors = require('cors');
const app = express();
// Enable CORS for all origins (development only)
app.use(cors());
// Start server
app.listen(3000);
Production configuration:
const express = require('express');
const cors = require('cors');
const app = express();
// Configure CORS
const corsOptions = {
origin: [
'https://yourdomain.com',
'https://www.yourdomain.com'
],
credentials: true,
optionsSuccessStatus: 200,
methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'OPTIONS'],
allowedHeaders: [
'Content-Type',
'Authorization',
'X-Requested-With',
'Accept',
'Origin'
],
exposedHeaders: ['Content-Range', 'X-Content-Range'],
maxAge: 86400 // 24 hours
};
app.use(cors(corsOptions));
// Handle preflight requests
app.options('*', cors(corsOptions));
app.listen(3000);Dynamic origin (environment-based):
const allowedOrigins = process.env.CORS_ORIGINS
? process.env.CORS_ORIGINS.split(',')
: ['http://localhost:3000'];
const corsOptions = {
origin: function (origin, callback) {
// Allow requests with no origin (mobile apps, curl, etc.)
if (!origin) return callback(null, true);
if (allowedOrigins.indexOf(origin) !== -1) {
callback(null, true);
} else {
callback(new Error('Not allowed by CORS'));
}
},
credentials: true
};
app.use(cors(corsOptions));
main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
// Enable CORS
app.enableCors({
origin: [
'https://yourdomain.com',
'https://www.yourdomain.com',
/\.yourdomain\.com$/ // Regex for all subdomains
],
credentials: true,
methods: ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'OPTIONS'],
allowedHeaders: [
'Content-Type',
'Authorization',
'X-Requested-With'
]
});
await app.listen(3000);
}
bootstrap();
Install flask-cors:
pip install flask-cors
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
# Enable CORS
CORS(app, origins=[
'https://yourdomain.com',
'https://www.yourdomain.com'
], supports_credentials=True)
# Or with more options
CORS(app, resources={
r"/api/*": {
"origins": ["https://yourdomain.com"],
"methods": ["GET", "POST", "PUT", "DELETE"],
"allow_headers": ["Content-Type", "Authorization"],
"expose_headers": ["Content-Range"],
"max_age": 86400
}
})
@app.route('/api/users')
def get_users():
return {'users': []}
if __name__ == '__main__':
app.run()
Install django-cors-headers:
pip install django-cors-headers
settings.py:
INSTALLED_APPS = [
...
'corsheaders',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware', # Must be before CommonMiddleware
'django.middleware.common.CommonMiddleware',
...
]
# CORS settings
CORS_ALLOWED_ORIGINS = [
'https://yourdomain.com',
'https://www.yourdomain.com',
]
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOW_METHODS = [
'GET',
'POST',
'PUT',
'PATCH',
'DELETE',
'OPTIONS',
]
CORS_ALLOW_HEADERS = [
'accept',
'accept-encoding',
'authorization',
'content-type',
'dnt',
'origin',
'user-agent',
'x-csrftoken',
'x-requested-with',
]
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
app = FastAPI()
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=[
"https://yourdomain.com",
"https://www.yourdomain.com"
],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
expose_headers=["Content-Range"],
max_age=86400
)
@app.get("/api/users")
async def get_users():
return {"users": []}
package main
import (
"github.com/gin-gonic/gin"
"github.com/gin-contrib/cors"
)
func main() {
router := gin.Default()
// Configure CORS
config := cors.Config{
AllowOrigins: []string{
"https://yourdomain.com",
"https://www.yourdomain.com",
},
AllowMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
AllowHeaders: []string{"Origin", "Content-Type", "Authorization"},
ExposeHeaders: []string{"Content-Length"},
AllowCredentials: true,
MaxAge: 86400,
}
router.Use(cors.New(config))
router.GET("/api/users", func(c *gin.Context) {
c.JSON(200, gin.H{"users": []string{}})
})
router.Run(":3000")
}
If using Nginx as a reverse proxy, you can handle CORS there:
server {
listen 443 ssl http2;
server_name api.yourdomain.com;
location / {
# Proxy to backend
proxy_pass http://localhost:3000;
# CORS headers
add_header 'Access-Control-Allow-Origin' 'https://yourdomain.com' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
# Handle preflight requests
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' 'https://yourdomain.com' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
add_header 'Access-Control-Max-Age' 86400;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Enable CORS in API Gateway:
- Go to API Gateway → Select your API
- Select resource/method
- Actions → Enable CORS
- Configure:
Access-Control-Allow-Origins: https://yourdomain.com
Access-Control-Allow-Headers: Content-Type,Authorization
Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS
Access-Control-Allow-Credentials: true
- Enable CORS and replace existing headers
- Deploy API
Manual CORS configuration:
Add OPTIONS method:
exports.handler = async (event) => {
return {
statusCode: 200,
headers: {
'Access-Control-Allow-Origin': 'https://yourdomain.com',
'Access-Control-Allow-Headers': 'Content-Type,Authorization',
'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS',
'Access-Control-Allow-Credentials': 'true'
},
body: ''
};
};
// Test from browser console (on yourdomain.com)
fetch('https://api.yourdomain.com/api/users', {
method: 'GET',
headers: {
'Content-Type': 'application/json'
},
credentials: 'include'
})
.then(res => res.json())
.then(data => console.log(data))
.catch(err => console.error('CORS error:', err));
Check Network tab for:
- Preflight OPTIONS request
- Response headers (Access-Control-Allow-Origin, etc.)
- CORS errors in console
# Test simple request
curl -H "Origin: https://yourdomain.com" \
-H "Access-Control-Request-Method: GET" \
-H "Access-Control-Request-Headers: Content-Type" \
-X OPTIONS \
https://api.yourdomain.com/api/users \
-v
# Check for headers in response:
# Access-Control-Allow-Origin: https://yourdomain.com
# Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Issue 1: "No 'Access-Control-Allow-Origin' header"
Solution: Add CORS middleware to backend
Issue 2: "CORS policy: Credentials flag is 'true', but Access-Control-Allow-Credentials is missing"
Solution: Add credentials: true to CORS config
Issue 3: "CORS preflight request returns 401 Unauthorized"
Solution: Allow OPTIONS requests without authentication
Issue 4: "Wildcard '*' cannot be used when credentials are true"
Solution: Specify exact origin instead of '*'
Issue 5: Custom headers blocked
Solution: Add headers to Access-Control-Allow-Headers
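For Issue 3 in particular, middleware order is usually the fix: mount the CORS middleware before authentication so preflight OPTIONS requests are answered without credentials. A minimal Express sketch:
// Preflight-safe ordering: CORS first, auth second
const express = require('express');
const cors = require('cors');
const app = express();

app.use(cors({ origin: 'https://yourdomain.com', credentials: true })); // answers OPTIONS first

// Hypothetical auth middleware — skips preflights explicitly as a safety net
app.use((req, res, next) => {
  if (req.method === 'OPTIONS') return next();
  if (!req.headers.authorization) return res.status(401).json({ error: 'Unauthorized' });
  next();
});

app.get('/api/users', (req, res) => res.json({ users: [] }));
app.listen(3000);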
✓ Homepage loads correctly
✓ All pages accessible (no 404 errors)
✓ Navigation works (all links)
✓ Images load properly
✓ CSS styles applied correctly
✓ JavaScript bundles load
✓ Forms validate and submit
✓ Client-side routing works (SPA)
✓ 404 page displays for invalid routes
✓ Meta tags correct (SEO)
✓ Favicon displays
✓ Mobile responsive design
✓ Different browsers (Chrome, Firefox, Safari, Edge)
✓ Different devices (desktop, tablet, mobile)
✓ Health endpoint responds: /health or /api/health
✓ All API endpoints respond correctly
✓ Authentication works (login, signup, logout)
✓ Authorization works (protected routes)
✓ Database connections established
✓ Database queries work correctly
✓ File uploads work (if applicable)
✓ Email sending works (if applicable)
✓ Third-party API integrations work
✓ Error handling works correctly
✓ Rate limiting works (if implemented)
✓ CORS headers present
✓ HTTPS redirect works
✓ SSL certificate valid
Test frontend-backend integration:
// Test API connectivity
const testAPI = async () => {
try {
// Health check
const healthRes = await fetch('https://api.yourdomain.com/health');
console.log('Health:', await healthRes.json());
// GET request
const usersRes = await fetch('https://api.yourdomain.com/api/users');
console.log('Users:', await usersRes.json());
// POST request
const loginRes = await fetch('https://api.yourdomain.com/api/auth/login', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
email: 'test@example.com',
password: 'password123'
})
});
console.log('Login:', await loginRes.json());
console.log('✓ All tests passed');
} catch (error) {
console.error('✗ Test failed:', error);
}
};
testAPI();
Use Lighthouse (Chrome DevTools):
- Open Chrome DevTools (F12)
- Go to Lighthouse tab
- Select categories: Performance, Accessibility, Best Practices, SEO
- Click Analyze page load
Target scores:
- Performance: > 90
- Accessibility: > 90
- Best Practices: > 90
- SEO: > 90
Key metrics:
- First Contentful Paint (FCP): < 1.8s
- Largest Contentful Paint (LCP): < 2.5s
- Time to Interactive (TTI): < 3.8s
- Total Blocking Time (TBT): < 200ms
- Cumulative Layout Shift (CLS): < 0.1
Optimization tips:
- Minimize JavaScript bundles
- Lazy load images
- Use code splitting
- Enable compression
- Leverage browser caching
- Use CDN (Netlify provides this)
- Optimize images (WebP format)
- Preload critical resources
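As a code-splitting example, a minimal React sketch using React.lazy (Dashboard and Settings are hypothetical page components): each lazy import is emitted as its own bundle chunk and fetched only when first rendered.
// App.js — route-level code splitting
import React, { Suspense, lazy } from 'react';

const Dashboard = lazy(() => import('./Dashboard')); // hypothetical component
const Settings = lazy(() => import('./Settings'));   // hypothetical component

function App() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      {/* render Dashboard or Settings based on your router */}
      <Dashboard />
    </Suspense>
  );
}

export default App;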
GTmetrix:
Test at: https://gtmetrix.com/
Provides:
- Performance scores
- Page load time
- Total page size
- Number of requests
- Waterfall chart
- Recommendations
WebPageTest:
Test at: https://www.webpagetest.org/
Provides:
- Multi-location testing
- Connection speed simulation
- Filmstrip view
- Video capture
- Detailed metrics
Load Testing with Apache Bench:
# Install Apache Bench
sudo apt install apache2-utils # Ubuntu
brew install httpd # macOS
# Simple load test (100 requests, 10 concurrent)
ab -n 100 -c 10 https://api.yourdomain.com/api/users
# With authentication header
ab -n 1000 -c 50 -H "Authorization: Bearer YOUR_TOKEN" \
https://api.yourdomain.com/api/users
# POST request with JSON
ab -n 100 -c 10 -p data.json -T application/json \
https://api.yourdomain.com/api/login
Load Testing with Artillery:
Install:
npm install -g artillery
Create load-test.yml:
config:
target: 'https://api.yourdomain.com'
phases:
- duration: 60
arrivalRate: 10
name: "Warm up"
- duration: 120
arrivalRate: 50
name: "Ramp up"
- duration: 60
arrivalRate: 100
name: "Sustained load"
defaults:
headers:
Content-Type: 'application/json'
scenarios:
- name: "Get users"
flow:
- get:
url: "/api/users"
- name: "Login and get profile"
flow:
- post:
url: "/api/auth/login"
json:
email: "test@example.com"
password: "password123"
capture:
- json: "$.token"
as: "token"
- get:
url: "/api/profile"
headers:
Authorization: "Bearer {{ token }}"
Run test:
artillery run load-test.yml
Load Testing with k6:
Install:
# macOS
brew install k6
# Ubuntu
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
Create load-test.js:
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
stages: [
{ duration: '30s', target: 20 }, // Ramp up
{ duration: '1m', target: 50 }, // Stay at 50
{ duration: '30s', target: 0 }, // Ramp down
],
thresholds: {
http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
http_req_failed: ['rate<0.01'], // Error rate under 1%
},
};
export default function () {
// Test GET endpoint
let res = http.get('https://api.yourdomain.com/api/users');
check(res, {
'status is 200': (r) => r.status === 200,
'response time < 500ms': (r) => r.timings.duration < 500,
});
sleep(1);
}
Run test:
k6 run load-test.js
Monitor query performance:
PostgreSQL:
-- Enable query logging
ALTER SYSTEM SET log_min_duration_statement = 100; -- Log queries > 100ms
SELECT pg_reload_conf();
-- View slow queries (requires the pg_stat_statements extension)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;
-- Analyze table
ANALYZE users;
-- Explain query plan
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';
MySQL:
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL log_output = 'TABLE'; -- the mysql.slow_log table is only populated when log_output includes TABLE
SET GLOBAL long_query_time = 0.1;
-- View slow queries
SELECT * FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;
-- Explain query
EXPLAIN SELECT * FROM users WHERE email = 'user@example.com';
SSL Labs:
https://www.ssllabs.com/ssltest/analyze.html?d=yourdomain.com
https://www.ssllabs.com/ssltest/analyze.html?d=api.yourdomain.com
Target: A or A+ rating
Check certificate:
# Check certificate expiration
echo | openssl s_client -servername api.yourdomain.com \
-connect api.yourdomain.com:443 2>/dev/null | \
openssl x509 -noout -dates
# Check certificate details
echo | openssl s_client -servername api.yourdomain.com \
-connect api.yourdomain.com:443 2>/dev/null | \
openssl x509 -noout -text
Test security headers:
Use: https://securityheaders.com/
Check for:
- Content-Security-Policy
- X-Frame-Options
- X-Content-Type-Options
- Strict-Transport-Security
- Referrer-Policy
- Permissions-Policy
Add headers in Nginx:
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';" always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
Add headers in Netlify (netlify.toml):
[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "SAMEORIGIN"
X-Content-Type-Options = "nosniff"
X-XSS-Protection = "1; mode=block"
Referrer-Policy = "strict-origin-when-cross-origin"
Content-Security-Policy = "default-src 'self'; script-src 'self' 'unsafe-inline';"
OWASP ZAP:
Download: https://www.zaproxy.org/
- Open ZAP
- Automated Scan → Enter URL
- Attack → Start scan
- Review findings and fix issues
npm audit (Node.js):
# Scan for vulnerabilities
npm audit
# Fix vulnerabilities
npm audit fix
# Force fix (may break)
npm audit fix --force
Snyk:
# Install Snyk CLI
npm install -g snyk
# Authenticate
snyk auth
# Test for vulnerabilities
snyk test
# Monitor project
snyk monitor
Common tests:
# SQL Injection
curl "https://api.yourdomain.com/api/users?id=1' OR '1'='1"
# XSS
curl "https://api.yourdomain.com/api/search?q=<script>alert('XSS')</script>"
# Path Traversal
curl "https://api.yourdomain.com/api/files?path=../../etc/passwd"
# Authentication bypass
curl "https://api.yourdomain.com/api/admin" -H "Authorization: Bearer fake_token"
UptimeRobot (Free):
- Sign up: https://uptimerobot.com/
- Add New Monitor:
Monitor Type: HTTP(s)
Friendly Name: My Website
URL: https://yourdomain.com
Monitoring Interval: 5 minutes
- Add alert contacts (email, SMS, Slack)
Pingdom:
Similar to UptimeRobot, with more features.
AWS CloudWatch Synthetics:
Create canary to monitor:
# Via AWS Console
# CloudWatch → Synthetics → Create canary
# Choose blueprint: Heartbeat monitoring
# Enter URL: https://yourdomain.com
# Schedule: Every 5 minutes
Sentry:
Install:
npm install @sentry/node # Backend
npm install @sentry/react # Frontend
Backend (Node.js):
const Sentry = require('@sentry/node');
Sentry.init({
dsn: 'https://xxxxx@sentry.io/xxxxx',
environment: process.env.NODE_ENV,
tracesSampleRate: 1.0,
});
// Capture errors
app.use(Sentry.Handlers.errorHandler());
Frontend (React):
import * as Sentry from '@sentry/react';
Sentry.init({
dsn: 'https://xxxxx@sentry.io/xxxxx',
environment: process.env.NODE_ENV,
integrations: [new Sentry.BrowserTracing()],
tracesSampleRate: 1.0,
});
AWS CloudWatch:
Automatically monitors:
- EC2 metrics (CPU, memory, disk, network)
- RDS metrics (connections, queries, storage)
- Lambda metrics (invocations, duration, errors)
- Load balancer metrics (requests, latency, errors)
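To turn one of these metrics into an alert, a sketch using aws-sdk v2 that creates a CPU alarm on an EC2 instance (the instance ID and SNS topic ARN are placeholders):
// alarm.js — alert when average CPU stays above 80% for two 5-minute periods
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

const params = {
  AlarmName: 'backend-high-cpu',
  Namespace: 'AWS/EC2',
  MetricName: 'CPUUtilization',
  Dimensions: [{ Name: 'InstanceId', Value: 'i-0123456789abcdef0' }], // placeholder
  Statistic: 'Average',
  Period: 300,          // seconds per evaluation period
  EvaluationPeriods: 2, // two consecutive breaching periods
  Threshold: 80,        // percent
  ComparisonOperator: 'GreaterThanThreshold',
  // AlarmActions: ['arn:aws:sns:us-east-1:123456789012:alerts'], // optional notification
};

cloudwatch.putMetricAlarm(params, (err) => {
  if (err) console.error('Failed to create alarm:', err);
  else console.log('Alarm created:', params.AlarmName);
});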
New Relic:
Install agent:
npm install newrelic
Configure newrelic.js:
exports.config = {
app_name: ['My Backend API'],
license_key: 'YOUR_LICENSE_KEY',
logging: {
level: 'info'
}
};
Require in app:
require('newrelic');
const express = require('express');
// ... rest of app
AWS CloudWatch Logs:
Already configured for:
- Lambda functions (automatic)
- ECS tasks (via awslogs driver)
- EC2 (via CloudWatch agent)
View logs:
# Install CloudWatch Logs CLI
pip install awslogs
# View logs
awslogs get /aws/lambda/my-function --start='1h ago'
awslogs get /ecs/my-app --watch
Logtail (formerly Timber.io):
Install:
npm install @logtail/node
const { Logtail } = require('@logtail/node');
const logtail = new Logtail('YOUR_SOURCE_TOKEN');
logtail.info('Application started');
logtail.error('Error occurred', { error: err });
(Document continues in DEPLOYMENT_NETLIFY_AWS_PART3.md due to length...)
This is the final continuation of the deployment manual
Create .github/workflows/deploy.yml:
name: Deploy to Production
on:
push:
branches:
- main
pull_request:
branches:
- main
env:
NODE_VERSION: '20'
AWS_REGION: us-east-1
jobs:
# Frontend deployment (Netlify)
deploy-frontend:
name: Deploy Frontend to Netlify
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test
- name: Build application
env:
REACT_APP_API_URL: ${{ secrets.REACT_APP_API_URL }}
run: npm run build
- name: Deploy to Netlify
uses: netlify/actions/cli@master
with:
args: deploy --prod --dir=build
env:
NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
# Backend deployment (EC2)
deploy-backend-ec2:
name: Deploy Backend to EC2
runs-on: ubuntu-latest
needs: deploy-frontend
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
- name: Run tests
run: |
cd backend
npm ci
npm test
- name: Deploy to EC2
uses: appleboy/ssh-action@master
with:
host: ${{ secrets.EC2_HOST }}
username: ubuntu
key: ${{ secrets.EC2_SSH_KEY }}
script: |
cd /var/www/backend
git pull origin main
npm install --production
npm run build
pm2 restart backend-api
# Backend deployment (Lambda)
deploy-backend-lambda:
name: Deploy Backend to Lambda
runs-on: ubuntu-latest
needs: deploy-frontend
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
- name: Install dependencies
run: |
cd backend
npm ci
- name: Deploy to Lambda
run: |
cd backend
npm install -g serverless
serverless deploy --stage prod
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# Database migration
migrate-database:
name: Run Database Migrations
runs-on: ubuntu-latest
needs: deploy-backend-ec2
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
- name: Run migrations
run: |
cd backend
npm ci
npm run migrate
env:
DB_HOST: ${{ secrets.DB_HOST }}
DB_PORT: ${{ secrets.DB_PORT }}
DB_NAME: ${{ secrets.DB_NAME }}
DB_USERNAME: ${{ secrets.DB_USERNAME }}
DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
# Smoke tests
smoke-tests:
name: Run Smoke Tests
runs-on: ubuntu-latest
needs: [deploy-frontend, deploy-backend-ec2]
steps:
- name: Test frontend
run: |
curl -f https://yourdomain.com || exit 1
- name: Test backend health
run: |
curl -f https://api.yourdomain.com/health || exit 1
- name: Test API endpoint
run: |
response=$(curl -sf https://api.yourdomain.com/api/users) || exit 1
echo "$response"
Add GitHub Actions secrets:
- Go to GitHub repository → Settings → Secrets and variables → Actions
- Click "New repository secret"
- Add secrets:
NETLIFY_AUTH_TOKEN = your_netlify_personal_access_token
NETLIFY_SITE_ID = your_netlify_site_id
EC2_HOST = 54.123.45.67
EC2_SSH_KEY = (paste your SSH private key)
AWS_ACCESS_KEY_ID = AKIA...
AWS_SECRET_ACCESS_KEY = your_secret_key
DB_HOST = your-db.xxxxx.rds.amazonaws.com
DB_PORT = 5432
DB_NAME = myapp_db
DB_USERNAME = dbadmin
DB_PASSWORD = your_db_password
REACT_APP_API_URL = https://api.yourdomain.com
Get Netlify tokens:
# Get Netlify personal access token
# Visit: https://app.netlify.com/user/applications#personal-access-tokens
# Get site ID
netlify sites:list
# Or from Netlify dashboard: Site settings → General → Site details → API ID
Create .gitlab-ci.yml:
image: node:20
stages:
- test
- build
- deploy
variables:
NODE_ENV: production
# Cache node modules
cache:
paths:
- frontend/node_modules/
- backend/node_modules/
# Test frontend
test-frontend:
stage: test
script:
- cd frontend
- npm ci
- npm test
only:
- main
- merge_requests
# Test backend
test-backend:
stage: test
script:
- cd backend
- npm ci
- npm test
only:
- main
- merge_requests
# Build frontend
build-frontend:
stage: build
script:
- cd frontend
- npm ci
- npm run build
artifacts:
paths:
- frontend/build/
only:
- main
# Deploy frontend to Netlify
deploy-frontend:
stage: deploy
image: node:20
before_script:
- npm install -g netlify-cli
script:
- cd frontend
- netlify deploy --prod --dir=build --auth=$NETLIFY_AUTH_TOKEN --site=$NETLIFY_SITE_ID
only:
- main
environment:
name: production
url: https://yourdomain.com
# Deploy backend to EC2
deploy-backend:
stage: deploy
image: alpine:latest
before_script:
- apk add --no-cache openssh-client
- eval $(ssh-agent -s)
- echo "$EC2_SSH_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- ssh-keyscan -H $EC2_HOST >> ~/.ssh/known_hosts
script:
- ssh ubuntu@$EC2_HOST "
cd /var/www/backend &&
git pull origin main &&
npm install --production &&
npm run build &&
pm2 restart backend-api
"
only:
- main
environment:
name: production
url: https://api.yourdomain.com
Add GitLab CI/CD Variables:
- Go to Project → Settings → CI/CD → Variables
- Add variables (same as GitHub secrets)
Create bitbucket-pipelines.yml:
image: node:20
definitions:
caches:
npm: ~/.npm
pipelines:
default:
- step:
name: Test Frontend
caches:
- npm
script:
- cd frontend
- npm ci
- npm test
- step:
name: Test Backend
caches:
- npm
script:
- cd backend
- npm ci
- npm test
branches:
main:
- step:
name: Build Frontend
caches:
- npm
script:
- cd frontend
- npm ci
- npm run build
artifacts:
- frontend/build/**
- step:
name: Deploy to Netlify
script:
- npm install -g netlify-cli
- cd frontend
- netlify deploy --prod --dir=build --auth=$NETLIFY_AUTH_TOKEN --site=$NETLIFY_SITE_ID
- step:
name: Deploy Backend to EC2
script:
- pipe: atlassian/ssh-run:0.4.1
variables:
SSH_USER: 'ubuntu'
SERVER: $EC2_HOST
SSH_KEY: $EC2_SSH_KEY
COMMAND: >
cd /var/www/backend &&
git pull origin main &&
npm install --production &&
npm run build &&
pm2 restart backend-api
Frontend (Jest + React Testing Library):
// src/components/Button.test.js
import { render, screen, fireEvent } from '@testing-library/react';
import Button from './Button';
test('renders button with text', () => {
render(<Button>Click me</Button>);
const button = screen.getByText(/click me/i);
expect(button).toBeInTheDocument();
});
test('calls onClick when clicked', () => {
const handleClick = jest.fn();
render(<Button onClick={handleClick}>Click me</Button>);
fireEvent.click(screen.getByText(/click me/i));
expect(handleClick).toHaveBeenCalledTimes(1);
});
Backend (Jest):
// tests/api/users.test.js
const request = require('supertest');
const app = require('../app');
describe('GET /api/users', () => {
it('should return list of users', async () => {
const res = await request(app)
.get('/api/users')
.expect('Content-Type', /json/)
.expect(200);
expect(res.body).toHaveProperty('users');
expect(Array.isArray(res.body.users)).toBe(true);
});
});
describe('POST /api/auth/login', () => {
it('should login with valid credentials', async () => {
const res = await request(app)
.post('/api/auth/login')
.send({
email: 'test@example.com',
password: 'password123'
})
.expect(200);
expect(res.body).toHaveProperty('token');
});
it('should reject invalid credentials', async () => {
const res = await request(app)
.post('/api/auth/login')
.send({
email: 'test@example.com',
password: 'wrong'
})
.expect(401);
expect(res.body).toHaveProperty('error');
});
});
Integration tests (Supertest):
// tests/integration/user-flow.test.js
const request = require('supertest');
const app = require('../app');
describe('User registration and login flow', () => {
let authToken;
it('should register new user', async () => {
const res = await request(app)
.post('/api/auth/register')
.send({
email: 'newuser@example.com',
password: 'password123',
username: 'newuser'
})
.expect(201);
expect(res.body).toHaveProperty('user');
});
it('should login user', async () => {
const res = await request(app)
.post('/api/auth/login')
.send({
email: 'newuser@example.com',
password: 'password123'
})
.expect(200);
authToken = res.body.token;
expect(authToken).toBeDefined();
});
it('should access protected route with token', async () => {
const res = await request(app)
.get('/api/profile')
.set('Authorization', `Bearer ${authToken}`)
.expect(200);
expect(res.body).toHaveProperty('email', 'newuser@example.com');
});
});
Install Playwright:
npm install -D @playwright/test
npx playwright install
Create tests/e2e/login.spec.js:
const { test, expect } = require('@playwright/test');
test.describe('Login flow', () => {
test('should login successfully', async ({ page }) => {
// Navigate to login page
await page.goto('https://yourdomain.com/login');
// Fill form
await page.fill('input[name="email"]', '[email protected]');
await page.fill('input[name="password"]', 'password123');
// Click login button
await page.click('button[type="submit"]');
// Wait for navigation
await page.waitForURL('https://yourdomain.com/dashboard');
// Verify logged in
await expect(page.locator('h1')).toContainText('Dashboard');
});
test('should show error for invalid credentials', async ({ page }) => {
await page.goto('https://yourdomain.com/login');
await page.fill('input[name="email"]', '[email protected]');
await page.fill('input[name="password"]', 'wrong');
await page.click('button[type="submit"]');
await expect(page.locator('.error')).toContainText('Invalid credentials');
});
});
Run in CI:
- name: Run E2E tests
run: npx playwright test
Default metrics (5-minute intervals):
- CPUUtilization
- DiskReadOps, DiskWriteOps
- NetworkIn, NetworkOut
- StatusCheckFailed
Enable detailed monitoring (1-minute intervals):
# Via AWS CLI
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0
Install CloudWatch Agent for custom metrics:
# Download CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
# Install
sudo dpkg -i amazon-cloudwatch-agent.deb
# Configure
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# Start agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a fetch-config \
-m ec2 \
-s \
-c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
Create config manually (/opt/aws/amazon-cloudwatch-agent/bin/config.json):
{
"agent": {
"metrics_collection_interval": 60,
"run_as_user": "root"
},
"metrics": {
"namespace": "MyApp",
"metrics_collected": {
"mem": {
"measurement": [
{
"name": "mem_used_percent",
"rename": "MemoryUtilization",
"unit": "Percent"
}
],
"metrics_collection_interval": 60
},
"disk": {
"measurement": [
{
"name": "used_percent",
"rename": "DiskUtilization",
"unit": "Percent"
}
],
"metrics_collection_interval": 60,
"resources": ["*"]
}
}
},
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/var/log/backend/app.log",
"log_group_name": "/backend/app",
"log_stream_name": "{instance_id}"
},
{
"file_path": "/var/log/nginx/error.log",
"log_group_name": "/nginx/error",
"log_stream_name": "{instance_id}"
}
]
}
}
}
}
Enhanced Monitoring:
Enabled during RDS creation or:
aws rds modify-db-instance \
--db-instance-identifier my-app-db \
--monitoring-interval 60 \
--monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role
Performance Insights:
- Go to RDS → Databases → Select database
- Configuration → Monitoring → Enable Performance Insights
- Retention period: 7 days (free) or longer (paid)
View metrics:
- CPU utilization
- Database connections
- Read/Write IOPS
- Freeable memory
- Disk queue depth
Automatic metrics:
- Invocations
- Duration
- Errors
- Throttles
- Concurrent executions
Add custom metrics:
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();
async function putMetric(metricName, value) {
await cloudwatch.putMetricData({
Namespace: 'MyApp/Lambda',
MetricData: [
{
MetricName: metricName,
Value: value,
Unit: 'Count',
Timestamp: new Date()
}
]
}).promise();
}
// Usage in Lambda
exports.handler = async (event) => {
await putMetric('CustomMetric', 1);
// ... rest of handler
};
CPU Alarm (EC2):
aws cloudwatch put-metric-alarm \
--alarm-name high-cpu-utilization \
--alarm-description "Alarm when CPU exceeds 80%" \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts
Database Connections Alarm (RDS):
aws cloudwatch put-metric-alarm \
--alarm-name high-db-connections \
--metric-name DatabaseConnections \
--namespace AWS/RDS \
--statistic Average \
--period 60 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--dimensions Name=DBInstanceIdentifier,Value=my-app-db \
--alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts
Lambda Errors Alarm:
aws cloudwatch put-metric-alarm \
--alarm-name lambda-errors \
--metric-name Errors \
--namespace AWS/Lambda \
--statistic Sum \
--period 60 \
--threshold 5 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1 \
--dimensions Name=FunctionName,Value=my-function \
--alarm-actions arn:aws:sns:us-east-1:123456789012:admin-alerts
Create custom dashboard:
- Go to CloudWatch → Dashboards → Create dashboard
- Add widgets:
- Line graph (CPU, Memory, Network)
- Number (Current connections)
- Log insights query results
Via AWS CLI:
aws cloudwatch put-dashboard \
--dashboard-name MyAppDashboard \
--dashboard-body file://dashboard.json
dashboard.json:
{
"widgets": [
{
"type": "metric",
"properties": {
"metrics": [
["AWS/EC2", "CPUUtilization", {"stat": "Average"}]
],
"period": 300,
"stat": "Average",
"region": "us-east-1",
"title": "EC2 CPU Utilization"
}
}
]
}
Install Winston:
npm install winston
Create logger.js:
const winston = require('winston');
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: {
service: 'backend-api',
environment: process.env.NODE_ENV
},
transports: [
// Console output
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
// File output - errors
new winston.transports.File({
filename: '/var/log/backend/error.log',
level: 'error',
maxsize: 10485760, // 10MB
maxFiles: 5
}),
// File output - all logs
new winston.transports.File({
filename: '/var/log/backend/combined.log',
maxsize: 10485760,
maxFiles: 10
})
]
});
module.exports = logger;
Use in application:
const logger = require('./logger');
// Info log
logger.info('User logged in', { userId: 123, email: 'user@example.com' });
// Error log
logger.error('Database connection failed', { error: err.message, stack: err.stack });
// Warning
logger.warn('High memory usage', { memoryUsage: process.memoryUsage() });
// Debug (only in development)
logger.debug('API request received', { method: 'GET', path: '/api/users' });
Request logging middleware:
const logger = require('./logger');
function requestLogger(req, res, next) {
const start = Date.now();
res.on('finish', () => {
const duration = Date.now() - start;
logger.info('HTTP Request', {
method: req.method,
url: req.url,
statusCode: res.statusCode,
duration: `${duration}ms`,
userAgent: req.get('user-agent'),
ip: req.ip,
userId: req.user?.id
});
});
next();
}
app.use(requestLogger);
Error logging middleware:
function errorLogger(err, req, res, next) {
logger.error('Unhandled error', {
error: err.message,
stack: err.stack,
url: req.url,
method: req.method,
body: req.body,
userId: req.user?.id
});
res.status(500).json({
error: 'Internal server error',
message: process.env.NODE_ENV === 'development' ? err.message : undefined
});
}
app.use(errorLogger);
Query examples (CloudWatch Logs Insights):
Find errors in last hour:
fields @timestamp, @message, error
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100
Count requests by status code:
fields statusCode
| stats count() by statusCode
| sort count desc
Average response time:
fields duration
| stats avg(duration) as avg_duration, max(duration) as max_duration
| sort avg_duration desc
Find slow queries:
fields @timestamp, url, duration
| filter duration > 1000
| sort duration desc
| limit 50
Self-hosted ELK stack (Elasticsearch, Logstash, Kibana). Install on EC2:
# Install Java
sudo apt install -y openjdk-11-jdk
# Add Elastic repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
# Install Elasticsearch
sudo apt update
sudo apt install -y elasticsearch
# Configure Elasticsearch
sudo nano /etc/elasticsearch/elasticsearch.yml
# Set: network.host: localhost
# Start Elasticsearch
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
# Install Kibana
sudo apt install -y kibana
# Configure Kibana
sudo nano /etc/kibana/kibana.yml
# Set: server.host: "0.0.0.0"
# Start Kibana
sudo systemctl start kibana
sudo systemctl enable kibana
# Install Logstash
sudo apt install -y logstash
# Create Logstash config
sudo nano /etc/logstash/conf.d/backend.conf
Logstash config:
input {
file {
path => "/var/log/backend/combined.log"
codec => "json"
}
}
filter {
# Parse and enrich logs
if [level] == "error" {
mutate {
add_tag => ["error"]
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "backend-logs-%{+YYYY.MM.dd}"
}
}
Start Logstash:
sudo systemctl start logstash
sudo systemctl enable logstash
Access Kibana: http://YOUR_EC2_IP:5601
Configure during creation or modify:
aws rds modify-db-instance \
--db-instance-identifier my-app-db \
--backup-retention-period 7 \
--preferred-backup-window "03:00-04:00" \
--apply-immediately
Settings:
- Retention: 1-35 days (7-14 recommended for production)
- Backup window: Low-traffic time
- Point-in-time recovery: Automatically enabled
Create snapshot:
aws rds create-db-snapshot \
--db-instance-identifier my-app-db \
--db-snapshot-identifier my-app-db-snapshot-$(date +%Y%m%d)
Restore from snapshot:
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier my-app-db-restored \
--db-snapshot-identifier my-app-db-snapshot-20250119
PostgreSQL:
# Export database
pg_dump -h my-app-db.xxxxx.rds.amazonaws.com \
-U dbadmin \
-d myapp_db \
-F c \
-f backup-$(date +%Y%m%d).dump
# Upload to S3
aws s3 cp backup-$(date +%Y%m%d).dump s3://my-backups/database/
# Automated script
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_FILE="backup-$DATE.dump"
pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME -F c -f $BACKUP_FILE
aws s3 cp $BACKUP_FILE s3://my-backups/database/
rm $BACKUP_FILE
# Delete old backups (keep 30 days)
aws s3 ls s3://my-backups/database/ | while read -r line; do
createDate=$(echo $line | awk {'print $1" "$2'})
createDate=$(date -d "$createDate" +%s)
olderThan=$(date -d "30 days ago" +%s)
if [[ $createDate -lt $olderThan ]]; then
fileName=$(echo $line | awk {'print $4'})
aws s3 rm s3://my-backups/database/$fileName
fi
done
Schedule with cron:
crontab -e
# Daily backup at 2 AM
0 2 * * * /home/ubuntu/scripts/backup-db.sh
Create AMI:
aws ec2 create-image \
--instance-id i-1234567890abcdef0 \
--name "backend-ami-$(date +%Y%m%d)" \
--description "Backend server backup" \
--no-reboot
Launch from AMI:
aws ec2 run-instances \
--image-id ami-xxxxxxxxx \
--instance-type t3.small \
--key-name my-backend-key \
--security-group-ids sg-xxxxx \
--subnet-id subnet-xxxxx
Create snapshot:
aws ec2 create-snapshot \
--volume-id vol-xxxxxxxxx \
--description "Backend volume backup $(date +%Y%m%d)"Automated snapshots with Data Lifecycle Manager:
- Go to EC2 → Lifecycle Manager → Create policy
- Configure:
- Resource type: Volume
- Target tags: Environment=production
- Schedule: Daily at 2:00 AM
- Retention: 7 days
Git repository (already backed up):
- GitHub/GitLab/Bitbucket provides automatic backups
- Ensure code is committed and pushed regularly
Environment files backup:
# Backup .env and configs to S3
aws s3 cp /var/www/backend/.env s3://my-backups/configs/backend-env-$(date +%Y%m%d)
aws s3 cp /etc/nginx/sites-available/backend s3://my-backups/configs/nginx-config-$(date +%Y%m%d)
# Encrypt sensitive files
gpg -c /var/www/backend/.env
aws s3 cp /var/www/backend/.env.gpg s3://my-backups/configs/
Define targets:
- RTO: Maximum acceptable downtime (e.g., 4 hours)
- RPO: Maximum acceptable data loss (e.g., 1 hour)
Database replication:
# Create read replica in different region
aws rds create-db-instance-read-replica \
--db-instance-identifier my-app-db-replica \
--source-db-instance-identifier my-app-db \
--source-region us-east-1 \
--region eu-west-1
S3 cross-region replication:
Enable versioning and replication:
# Enable versioning on source bucket
aws s3api put-bucket-versioning \
--bucket my-app-uploads \
--versioning-configuration Status=Enabled
# Create replication configuration
{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [
{
"Status": "Enabled",
"Priority": 1,
"Filter": {},
"Destination": {
"Bucket": "arn:aws:s3:::my-app-uploads-replica",
"ReplicationTime": {
"Status": "Enabled",
"Time": {
"Minutes": 15
}
}
}
}
]
}
Monthly recovery drill:
- Restore database from snapshot to test instance
- Verify data integrity
- Test application connectivity
- Document recovery time
- Update procedures if needed
Checklist:
□ Database restored successfully
□ All tables present
□ Data up-to-date (within RPO)
□ Application connects to restored DB
□ All features work correctly
□ Performance acceptable
□ Security configurations correct
□ Recovery time within RTO
React (using React.lazy):
import React, { lazy, Suspense } from 'react';
// Lazy load components
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Profile = lazy(() => import('./pages/Profile'));
function App() {
return (
<Suspense fallback={<div>Loading...</div>}>
<Routes>
<Route path="/dashboard" element={<Dashboard />} />
<Route path="/profile" element={<Profile />} />
</Routes>
</Suspense>
);
}
Next.js (automatic):
Next.js automatically code-splits each page.
Manual optimization:
import dynamic from 'next/dynamic';
const HeavyComponent = dynamic(() => import('../components/HeavyComponent'), {
loading: () => <p>Loading...</p>,
ssr: false // Disable server-side rendering for this component
});
Use Next.js Image component:
import Image from 'next/image';
<Image
src="/hero.jpg"
alt="Hero image"
width={1200}
height={600}
priority // Preload important images
placeholder="blur" // Show blur while loading
/>
Lazy load images (vanilla JS):
<img
src="placeholder.jpg"
data-src="actual-image.jpg"
loading="lazy"
alt="Description"
/>
Use WebP format:
# Convert images to WebP
for img in *.jpg; do
cwebp -q 80 "$img" -o "${img%.jpg}.webp"
done
Serve with picture element:
<picture>
<source srcset="image.webp" type="image/webp">
<source srcset="image.jpg" type="image/jpeg">
<img src="image.jpg" alt="Fallback">
</picture>
Netlify automatic optimization:
Enable in Site configuration → Build & deploy → Post processing:
- Bundle CSS
- Minify CSS
- Minify JS
- Compress images
Manual compression (gzip/brotli):
Already configured in Netlify. For custom server:
# Nginx gzip configuration
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/json application/javascript;
# Brotli (requires module)
brotli on;
brotli_types text/plain text/css text/xml text/javascript application/json;
Service Worker (PWA):
// service-worker.js
const CACHE_NAME = 'my-app-v1';
const urlsToCache = [
'/',
'/static/css/main.css',
'/static/js/main.js'
];
self.addEventListener('install', (event) => {
event.waitUntil(
caches.open(CACHE_NAME)
.then((cache) => cache.addAll(urlsToCache))
);
});
self.addEventListener('fetch', (event) => {
event.respondWith(
caches.match(event.request)
.then((response) => response || fetch(event.request))
);
});
Browser caching headers (Netlify):
[[headers]]
for = "/*.js"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
[[headers]]
for = "/*.css"
[headers.values]
Cache-Control = "public, max-age=31536000, immutable"
[[headers]]
for = "/images/*"
[headers.values]
Cache-Control = "public, max-age=2592000" # 30 daysAdd indexes:
-- Find missing indexes
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE schemaname = 'public';
-- Create indexes on frequently queried columns
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);
-- Composite index for multiple columns
CREATE INDEX idx_posts_user_date ON posts(user_id, created_at DESC);
-- Partial index (only for specific condition)
CREATE INDEX idx_published_posts ON posts(created_at) WHERE published = true;
Use query explain:
EXPLAIN ANALYZE SELECT * FROM posts WHERE user_id = 123 ORDER BY created_at DESC LIMIT 10;
Optimize N+1 queries:
Bad (N+1):
// Fetches users, then for each user fetches posts (N queries)
const users = await User.findAll();
for (const user of users) {
user.posts = await Post.findAll({ where: { userId: user.id } });
}
Good (eager loading):
// Single query with JOIN
const users = await User.findAll({
include: [{ model: Post }]
});
Redis caching:
Install the Redis client:
npm install redis
Implement caching (node-redis v4 promise API):
const redis = require('redis');
const client = redis.createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  },
  password: process.env.REDIS_PASSWORD
});
client.connect();
// Cache middleware
async function cacheMiddleware(req, res, next) {
const key = `cache:${req.url}`;
try {
const cached = await client.get(key);
if (cached) {
return res.json(JSON.parse(cached));
}
next();
} catch (err) {
next();
}
}
// Route with caching
app.get('/api/users', cacheMiddleware, async (req, res) => {
const users = await User.findAll();
// Cache for 5 minutes
await client.setEx(`cache:${req.url}`, 300, JSON.stringify(users));
res.json(users);
});
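The cached response above goes stale as soon as the underlying data changes; a common companion is explicit invalidation on writes. A minimal sketch reusing the same client (the route and model names are illustrative):
app.post('/api/users', async (req, res) => {
  const user = await User.create(req.body);
  await client.del('cache:/api/users'); // drop the stale list so the next GET refills it
  res.status(201).json(user);
});
In-memory caching (Node.js):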
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // 5 minutes TTL
function getCachedData(key, fetchFn) {
const cached = cache.get(key);
if (cached) {
return Promise.resolve(cached);
}
return fetchFn().then((data) => {
cache.set(key, data);
return data;
});
}
// Usage
app.get('/api/users', async (req, res) => {
const users = await getCachedData('users', () => User.findAll());
res.json(users);
});
Connection pooling (PostgreSQL pg):
const { Pool } = require('pg');
const pool = new Pool({
host: process.env.DB_HOST,
port: 5432,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
max: 20, // Maximum connections
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
// Use pool instead of creating new clients
pool.query('SELECT * FROM users', (err, res) => {
console.log(res.rows);
});
Use job queues for heavy tasks:
Install Bull:
npm install bull
Create queue:
const Queue = require('bull');
const emailQueue = new Queue('email', {
redis: {
host: process.env.REDIS_HOST,
port: process.env.REDIS_PORT
}
});
// Add job to queue
app.post('/api/send-email', async (req, res) => {
await emailQueue.add({
to: req.body.email,
subject: 'Welcome',
body: 'Welcome to our app!'
});
res.json({ message: 'Email queued' });
});
// Process jobs
emailQueue.process(async (job) => {
await sendEmail(job.data);
});
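Bull can also retry failed jobs automatically; a short sketch of the relevant job options (the attempt count and delay are arbitrary assumptions):
await emailQueue.add(
  { to: req.body.email, subject: 'Welcome', body: 'Welcome to our app!' },
  { attempts: 3, backoff: { type: 'exponential', delay: 1000 } } // retry up to 3 times with exponential backoff
);
Netlify CDN (automatic):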
Netlify automatically:
- Serves assets from 200+ edge locations
- Compresses assets (gzip/brotli)
- Caches static files
- Provides instant cache invalidation
CloudFront for API (optional):
- Create CloudFront distribution
- Origin: API Gateway or Load Balancer
- Cache behavior:
- Cache GET requests
- Forward headers: Authorization
- TTL: 0-300 seconds
- Use CloudFront URL or custom domain
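The load balancer health checks below (and the smoke tests in the CI/CD section) assume the backend exposes a /health endpoint. A minimal Express sketch; the commented-out DB ping is a placeholder for whatever dependency you want to verify:
const express = require('express');
const app = express();

app.get('/health', async (req, res) => {
  try {
    // Optionally verify critical dependencies, e.g.: await pool.query('SELECT 1');
    res.status(200).json({ status: 'ok', uptime: process.uptime() });
  } catch (err) {
    res.status(503).json({ status: 'unhealthy', error: err.message });
  }
});

app.listen(3000);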
Application Load Balancer health checks:
# Configure health check
aws elbv2 modify-target-group \
--target-group-arn arn:aws:elasticloadbalancing:... \
--health-check-enabled \
--health-check-path /health \
--health-check-interval-seconds 30 \
--health-check-timeout-seconds 5 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 3
Auto Scaling Group:
# Create launch template
aws ec2 create-launch-template \
--launch-template-name backend-template \
--version-description "Backend v1" \
--launch-template-data '{
"ImageId": "ami-xxxxxxxxx",
"InstanceType": "t3.small",
"KeyName": "my-backend-key",
"SecurityGroupIds": ["sg-xxxxx"],
"UserData": "base64-encoded-startup-script"
}'
# Create Auto Scaling Group
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name backend-asg \
--launch-template LaunchTemplateName=backend-template \
--min-size 2 \
--max-size 10 \
--desired-capacity 2 \
--target-group-arns arn:aws:elasticloadbalancing:... \
--health-check-type ELB \
--health-check-grace-period 300 \
--vpc-zone-identifier "subnet-xxxxx,subnet-yyyyy"
# Create scaling policy
aws autoscaling put-scaling-policy \
--auto-scaling-group-name backend-asg \
--policy-name scale-on-cpu \
--policy-type TargetTrackingScaling \
--target-tracking-configuration '{
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ASGAverageCPUUtilization"
},
"TargetValue": 70.0
}'
Netlify configuration (netlify.toml):
[[headers]]
for = "/*"
[headers.values]
Content-Security-Policy = """
default-src 'self';
script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net;
style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
font-src 'self' https://fonts.gstatic.com;
img-src 'self' data: https: blob:;
connect-src 'self' https://api.yourdomain.com;
frame-ancestors 'none';
base-uri 'self';
form-action 'self';
"""React/Next.js meta tag:
<Head>
<meta httpEquiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' 'unsafe-inline';" />
</Head>
Sanitize user input:
npm install dompurify
import DOMPurify from 'dompurify';
function SafeContent({ html }) {
return <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />;
}
Escape output:
function escapeHtml(unsafe) {
  return unsafe
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}
Use CSRF tokens:
// Backend: Generate token
const csrf = require('csurf');
const csrfProtection = csrf({ cookie: true });
app.get('/api/form', csrfProtection, (req, res) => {
res.json({ csrfToken: req.csrfToken() });
});
app.post('/api/submit', csrfProtection, (req, res) => {
// Process form
});
// Frontend: Include token
const response = await fetch('/api/form');
const { csrfToken } = await response.json();
await fetch('/api/submit', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'CSRF-Token': csrfToken
},
body: JSON.stringify(data)
});
X-Frame-Options header:
[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "DENY"
# or "SAMEORIGIN" to allow same-origin framingUse validation library:
npm install joi
const Joi = require('joi');
const userSchema = Joi.object({
email: Joi.string().email().required(),
password: Joi.string().min(8).required(),
username: Joi.string().alphanum().min(3).max(30).required()
});
app.post('/api/register', async (req, res) => {
try {
const value = await userSchema.validateAsync(req.body);
// Process validated data
} catch (err) {
return res.status(400).json({ error: err.details[0].message });
}
});
Use parameterized queries:
// ✗ BAD - Vulnerable to SQL injection
const query = `SELECT * FROM users WHERE email = '${email}'`;
// ✓ GOOD - Parameterized query
const query = 'SELECT * FROM users WHERE email = $1';
const result = await pool.query(query, [email]);
// ✓ GOOD - ORM (Sequelize)
const user = await User.findOne({ where: { email: email } });
Hash passwords with bcrypt:
npm install bcrypt
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken'); // used below to sign the login token
// Hash password on registration
async function hashPassword(password) {
const saltRounds = 10;
return await bcrypt.hash(password, saltRounds);
}
// Verify password on login
async function verifyPassword(password, hash) {
return await bcrypt.compare(password, hash);
}
// Usage
app.post('/api/register', async (req, res) => {
const { email, password } = req.body;
const hashedPassword = await hashPassword(password);
await User.create({ email, password: hashedPassword });
res.json({ message: 'User created' });
});
app.post('/api/login', async (req, res) => {
const { email, password } = req.body;
const user = await User.findOne({ where: { email } });
if (!user || !(await verifyPassword(password, user.password))) {
return res.status(401).json({ error: 'Invalid credentials' });
}
// Generate JWT token
const token = jwt.sign({ userId: user.id }, process.env.JWT_SECRET);
res.json({ token });
});
Secure JWT implementation:
const jwt = require('jsonwebtoken');
// Generate token
function generateToken(user) {
return jwt.sign(
{
userId: user.id,
email: user.email
},
process.env.JWT_SECRET,
{
expiresIn: '7d',
issuer: 'yourdomain.com',
audience: 'yourdomain.com'
}
);
}
// Verify token middleware
function authenticateToken(req, res, next) {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.status(401).json({ error: 'No token provided' });
}
jwt.verify(token, process.env.JWT_SECRET, (err, decoded) => {
if (err) {
return res.status(403).json({ error: 'Invalid token' });
}
req.user = decoded;
next();
});
}
// Protected route
app.get('/api/profile', authenticateToken, async (req, res) => {
const user = await User.findByPk(req.user.userId);
res.json(user);
});
Express rate limiter:
npm install express-rate-limit
const rateLimit = require('express-rate-limit');
// General rate limiter
const generalLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // Limit each IP to 100 requests per windowMs
message: 'Too many requests, please try again later',
standardHeaders: true,
legacyHeaders: false,
});
// Login rate limiter (stricter)
const loginLimiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 5, // 5 login attempts per 15 minutes
skipSuccessfulRequests: true,
message: 'Too many login attempts, please try again later'
});
// Apply to routes
app.use('/api/', generalLimiter);
app.post('/api/login', loginLimiter, loginHandler);
Redirect HTTP to HTTPS:
// Express middleware
function requireHTTPS(req, res, next) {
if (req.secure || req.headers['x-forwarded-proto'] === 'https') {
return next();
}
res.redirect('https://' + req.headers.host + req.url);
}
app.use(requireHTTPS);
Nginx:
server {
listen 80;
server_name api.yourdomain.com;
return 301 https://$server_name$request_uri;
}
Regular audits:
# Check for vulnerabilities
npm audit
# Fix vulnerabilities
npm audit fix
# Update dependencies
npm update
# Check outdated packages
npm outdated
Use Snyk for continuous monitoring:
npm install -g snyk
snyk auth
snyk test # Test for vulnerabilities
snyk monitor # Continuous monitoring
Principle of least privilege:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query"
],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
}
]
}
Enable MFA:
- Go to IAM → Users → Select user
- Security credentials → Assigned MFA device → Manage
- Choose Virtual MFA device
- Scan QR code with authenticator app
- Enter two consecutive codes
Use IAM roles instead of access keys:
// No need to configure credentials
const AWS = require('aws-sdk');
const s3 = new AWS.S3(); // Automatically uses IAM role
Restrict SSH access:
# Allow SSH only from your IP
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxx \
--protocol tcp \
--port 22 \
--cidr YOUR_IP/32
Minimum required ports:
Port 22: SSH (from your IP only)
Port 80: HTTP (0.0.0.0/0)
Port 443: HTTPS (0.0.0.0/0)
Port 3000-8000: Application ports (from ALB security group only)
Use AWS Secrets Manager:
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });
async function getSecret(secretName) {
const data = await secretsManager.getSecretValue({
SecretId: secretName
}).promise();
return JSON.parse(data.SecretString);
}
// Usage
const secrets = await getSecret('prod/backend/db');
const dbPassword = secrets.password;
Rotate secrets regularly:
- Go to Secrets Manager → Select secret
- Rotation configuration → Edit rotation
- Enable automatic rotation
- Choose rotation Lambda function
- Set rotation schedule (30-90 days)
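The getSecret helper above calls the Secrets Manager API on every invocation; in a hot path you would normally cache the value in memory. A minimal sketch (the 5-minute TTL is an arbitrary assumption; align it with your rotation schedule):
const secretCache = new Map();
const SECRET_TTL_MS = 5 * 60 * 1000; // assumed TTL

async function getSecretCached(secretName) {
  const hit = secretCache.get(secretName);
  if (hit && Date.now() - hit.fetchedAt < SECRET_TTL_MS) {
    return hit.value;
  }
  const value = await getSecret(secretName); // getSecret from the example above
  secretCache.set(secretName, { value, fetchedAt: Date.now() });
  return value;
}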
Private subnets for databases:
# Create private subnet
aws ec2 create-subnet \
--vpc-id vpc-xxxxx \
--cidr-block 10.0.2.0/24 \
--availability-zone us-east-1a
# Place RDS in private subnet (no internet access)
# Only accessible from EC2/Lambda in same VPC
Network ACLs:
# Create network ACL
aws ec2 create-network-acl --vpc-id vpc-xxxxx
# Add rules
aws ec2 create-network-acl-entry \
--network-acl-id acl-xxxxx \
--rule-number 100 \
--protocol tcp \
--port-range From=443,To=443 \
--cidr-block 0.0.0.0/0 \
--egress \
--rule-action allow
Track configuration changes:
- Go to AWS Config → Get started
- Select resources to record
- Choose S3 bucket for storing configurations
- Create SNS topic for notifications
- Confirm
Threat detection:
- Go to GuardDuty → Get started
- Enable GuardDuty
- Configure findings export to S3
- Set up CloudWatch Events for alerts
Issue: "Module not found"
Solution:
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
# Or use npm ci for clean install
npm ci
Issue: "Out of memory during build"
Solution:
# Increase Node memory
NODE_OPTIONS="--max-old-space-size=4096" npm run build
# Or in package.json
"scripts": {
"build": "NODE_OPTIONS='--max-old-space-size=4096' react-scripts build"
}
Issue: Environment variables not working
Solution:
- Ensure variables start with REACT_APP_, NEXT_PUBLIC_, or VUE_APP_
- Rebuild after adding new variables
- Check Netlify environment variables are set correctly
- Restart development server after changes
Issue: 404 on page refresh (SPA)
Solution:
# Add to netlify.toml
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
Issue: Assets not loading
Solution:
- Check PUBLIC_URL or publicPath configuration
- Verify asset paths are relative
- Check browser console for CORS errors
- Ensure assets are in the public/ or static/ folder
Issue: Deploy preview not updating
Solution:
- Clear Netlify cache: Deploys → Trigger deploy → Clear cache and deploy site
- Check build logs for errors
- Verify git branch is correct
Issue: CORS errors
Solution:
// Backend: Add CORS headers
app.use(cors({
origin: 'https://yourdomain.com',
credentials: true
}));
// Frontend: Include credentials
fetch('https://api.yourdomain.com/api/users', {
credentials: 'include'
});Issue: "Failed to fetch" or "Network error"
Solution:
- Check API URL in environment variables
- Verify backend is running and accessible
- Check browser console for specific error
- Test API with curl:
curl -v https://api.yourdomain.com/health
- Check SSL certificate is valid
Issue: Cannot SSH into EC2
Solution:
# Check security group allows SSH from your IP
aws ec2 describe-security-groups --group-ids sg-xxxxx
# Verify key permissions
chmod 400 ~/.ssh/my-key.pem
# Check instance is running
aws ec2 describe-instances --instance-ids i-xxxxx
# Use EC2 Instance Connect (browser-based)
# AWS Console → EC2 → Instance → Connect
Issue: "Connection timeout"
Solution:
- Check security group inbound rules
- Verify instance has public IP
- Check Network ACLs
- Verify route table has internet gateway
Issue: Application not starting
Solution:
# Check application logs
pm2 logs backend-api
# Or systemd logs
sudo journalctl -u backend -n 100 --no-pager
# Check port is not already in use
sudo lsof -i :3000
# Kill process on port
sudo kill -9 $(sudo lsof -t -i:3000)
# Check environment variables
printenv | grep DB_
Issue: High memory usage
Solution:
# Check memory usage
free -h
# Restart application
pm2 restart backend-api
# Increase swap space
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Issue: Database connection errors
Solution:
# Test database connection
psql -h your-db.rds.amazonaws.com -U dbadmin -d myapp_db
# Check security group allows connection
# Verify database is running
# Check credentials in .env file
# Test network connectivity
telnet your-db.rds.amazonaws.com 5432
Issue: "502 Bad Gateway"
Solution:
# Check backend is running
curl http://localhost:3000/health
# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log
# Test Nginx configuration
sudo nginx -t
# Restart Nginx
sudo systemctl restart nginx
# Check Nginx status
sudo systemctl status nginx
Issue: "413 Request Entity Too Large"
Solution:
# Increase client_max_body_size in Nginx
http {
client_max_body_size 50M;
}
# Or in server block
server {
client_max_body_size 50M;
}
Issue: SSL certificate errors
Solution:
# Renew certificate
sudo certbot renew
# Check certificate expiration
sudo certbot certificates
# Force renewal
sudo certbot renew --force-renewal
# Restart Nginx after renewal
sudo systemctl reload nginx
Issue: "Task timed out after X seconds"
Solution:
- Increase timeout in serverless.yml:
functions:
  api:
    timeout: 30 # Maximum 900 seconds (15 minutes)
- Optimize code to run faster
- Use Lambda layers for dependencies
- Consider switching to EC2 for long-running tasks
Issue: Cold start latency
Solution:
- Use provisioned concurrency
- Reduce package size
- Use Lambda layers
- Keep Lambda warm with scheduled pings (sketch below)
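One way to implement the scheduled-ping idea above: trigger the function from an EventBridge/CloudWatch Events schedule and short-circuit those invocations so they keep a container warm without doing real work. A sketch (the event check assumes the default scheduled-event payload):
exports.handler = async (event) => {
  // Scheduled warm-up pings arrive as EventBridge/CloudWatch scheduled events
  if (event.source === 'aws.events') {
    return { statusCode: 200, body: 'warmed' };
  }
  // ... normal request handling
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};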
Issue: "Missing IAM permissions"
Solution:
# Add IAM permissions in serverless.yml
provider:
iam:
role:
statements:
- Effect: Allow
Action:
- s3:GetObject
Resource: "arn:aws:s3:::my-bucket/*"Issue: "Cannot connect to database"
Solution:
- Check security group allows connections from your IP/EC2
- Verify database is publicly accessible (if needed)
- Check VPC and subnet configuration
- Test with psql/mysql client
- Verify credentials
Issue: "Too many connections"
Solution:
-- Check current connections
SELECT count(*) FROM pg_stat_activity;
-- Kill idle connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND state_change < current_timestamp - INTERVAL '10 minutes';
-- Increase max_connections (requires restart)
ALTER SYSTEM SET max_connections = 200;
Or modify RDS parameter group:
- Go to RDS → Parameter groups
- Edit parameter group
- Change max_connections to a higher value
- Reboot the instance
Issue: Slow query performance
Solution:
-- Find slow queries
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
-- Add missing indexes
CREATE INDEX idx_column ON table(column);
-- Analyze tables
ANALYZE table_name;
-- Vacuum tables
VACUUM ANALYZE;
Issue: "ProvisionedThroughputExceededException"
Solution:
- Switch to on-demand capacity mode
- Increase provisioned capacity
- Implement exponential backoff retry logic (sketch below)
- Use DynamoDB auto scaling
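A minimal sketch of the backoff idea from the list above, wrapping any async call (the retry count, base delay, and the docClient usage line are illustrative assumptions):
async function withBackoff(fn, retries = 5, baseMs = 100) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries || err.name !== 'ProvisionedThroughputExceededException') {
        throw err;
      }
      // Jittered exponential backoff: 100ms, 200ms, 400ms, ... plus random jitter
      const delay = baseMs * 2 ** attempt + Math.random() * baseMs;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
// Usage: const item = await withBackoff(() => docClient.get(params).promise());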
Issue: High costs
Solution:
- Use on-demand for unpredictable workloads
- Use provisioned capacity for predictable workloads
- Implement DynamoDB auto scaling
- Archive old data to S3
- Use DynamoDB Standard-IA for infrequent access
CPU alarm:
aws cloudwatch put-metric-alarm \
--alarm-name high-cpu \
--alarm-description "CPU > 80%" \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--dimensions Name=InstanceId,Value=i-xxxxx \
--alarm-actions arn:aws:sns:us-east-1:123456789012:alerts
Disk usage alarm:
aws cloudwatch put-metric-alarm \
--alarm-name high-disk-usage \
--metric-name DiskSpaceUtilization \
--namespace System/Linux \
--statistic Average \
--period 300 \
--threshold 85 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1
# Create SNS topic
aws sns create-topic --name server-alerts
# Subscribe email
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:server-alerts \
--protocol email \
--notification-endpoint admin@example.com
# Confirm subscription via email
Netlify free tier includes:
- 100 GB bandwidth/month
- 300 build minutes/month
- Unlimited sites
- Deploy previews
Optimization:
- Use image optimization
- Enable asset compression
- Leverage CDN caching
- Monitor bandwidth usage
Upgrade when needed:
- Pro plan: $19/month (more bandwidth)
- Business plan: $99/month (SSO, analytics)
Use Reserved Instances:
# 30-60% savings for 1-3 year commitment
aws ec2 purchase-reserved-instances-offering \
--reserved-instances-offering-id offering-id \
--instance-count 1
Use Spot Instances (for non-critical workloads):
# Up to 90% savings
aws ec2 request-spot-instances \
--spot-price "0.05" \
--instance-count 1 \
--type "one-time" \
--launch-specification file://specification.json
Right-sizing:
- Monitor CPU/memory usage
- Downsize if consistently < 40% utilization
- Use T3/T4g instances (burstable performance)
Stop instances when not needed:
# Stop instance (dev/test environments)
aws ec2 stop-instances --instance-ids i-xxxxx
# Start instance
aws ec2 start-instances --instance-ids i-xxxxx
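Stopping dev/test instances can itself be automated with a scheduled Lambda; a sketch using the AWS SDK v2 (the instance IDs, region, and schedule expression are placeholders):
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

exports.handler = async () => {
  // Triggered by an EventBridge schedule, e.g. cron(0 20 ? * MON-FRI *)
  await ec2.stopInstances({ InstanceIds: ['i-xxxxx'] }).promise();
  return 'stopped';
};
Use Reserved Instances (RDS):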
- 1-year: ~35% savings
- 3-year: ~60% savings
Stop dev/test databases:
aws rds stop-db-instance --db-instance-identifier dev-db
Use Aurora Serverless (for variable workloads):
- Pay per second
- Auto-scales based on demand
- Can pause when not in use
Optimize storage:
- Use gp3 instead of gp2 (20% cheaper)
- Enable storage auto-scaling
- Archive old data
Optimize memory allocation:
- More memory = more CPU = faster execution
- Test different memory sizes
- Use AWS Lambda Power Tuning tool
Reduce package size:
- Remove unused dependencies
- Use Lambda layers for common code
- Tree-shake dependencies
Use reserved concurrency carefully:
- Only for critical functions
- Costs $0.000012 per GB-second
Minimize inter-region transfer:
- Keep resources in same region
- Use CloudFront for global distribution
Use VPC endpoints:
- Access S3/DynamoDB without internet gateway
- Avoid data transfer charges
Compress data:
- Enable gzip/brotli compression (sketch below)
- Reduce payload sizes
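On a Node backend, the gzip part is one middleware away; a minimal sketch using the compression package:
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression()); // gzip/deflate responses above the default 1 KB threshold
app.get('/api/data', (req, res) => res.json({ ok: true }));
app.listen(3000);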
Use AWS Cost Explorer:
- Go to Billing → Cost Explorer
- View costs by service, region, tag
- Set up cost anomaly detection
- Create cost budgets
Set up billing alarms:
aws cloudwatch put-metric-alarm \
--alarm-name billing-alarm \
--metric-name EstimatedCharges \
--namespace AWS/Billing \
--statistic Maximum \
--period 21600 \
--threshold 100 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 1 \
--dimensions Name=Currency,Value=USD \
--alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
# Note: billing metrics are published only in us-east-1
Use AWS Budgets:
- Go to Billing → Budgets → Create budget
- Budget type: Cost budget
- Set amount: $100/month
- Configure alerts at 80% and 100%
Auto Scaling Groups (EC2):
# Create scaling policy based on CPU
aws autoscaling put-scaling-policy \
--auto-scaling-group-name backend-asg \
--policy-name scale-on-cpu \
--policy-type TargetTrackingScaling \
--target-tracking-configuration '{
"PredefinedMetricSpecification": {
"PredefinedMetricType": "ASGAverageCPUUtilization"
},
"TargetValue": 70.0
}'
Load Balancer distribution:
- Round robin
- Least connections
- IP hash
Upgrade instance type:
# Stop instance
aws ec2 stop-instances --instance-ids i-xxxxx
# Modify instance type
aws ec2 modify-instance-attribute \
--instance-id i-xxxxx \
--instance-type t3.medium
# Start instance
aws ec2 start-instances --instance-ids i-xxxxx
Read replicas:
aws rds create-db-instance-read-replica \
--db-instance-identifier myapp-db-replica \
--source-db-instance-identifier myapp-db \
--availability-zone us-east-1b
Connection pooling:
- Use pgBouncer or RDS Proxy
- Reduce connection overhead
Caching layer:
- Redis/ElastiCache for frequently accessed data
- Reduce database load
CloudFront for API:
- Cache GET requests
- Reduce backend load
- Global distribution
Application-level caching:
- Redis/Memcached
- In-memory caching
- CDN edge caching
Pre-migration:
□ Backup all data (database, files, configurations)
□ Document current architecture
□ Test deployment process in staging
□ Review DNS TTL (set to 300 seconds)
□ Prepare rollback plan
□ Notify users of potential downtime
□ Schedule during low-traffic period
Migration:
□ Deploy backend to AWS
□ Test backend endpoints
□ Update DNS records (point to new backend)
□ Deploy frontend to Netlify
□ Update environment variables
□ Test frontend-backend integration
□ Monitor error logs
□ Verify SSL certificates
□ Test all critical features
□ Update documentation
Post-migration:
□ Monitor application performance
□ Check error rates
□ Verify database connections
□ Test backup/restore procedures
□ Update monitoring dashboards
□ Review cost reports
□ Optimize based on metrics
□ Document lessons learned
□ Update runbooks
□ Train team on new infrastructure
Rollback plan:
1. Keep old infrastructure running for 7-14 days
2. DNS can be reverted quickly (5-minute TTL)
3. Database can be restored from backup
4. Have automation scripts ready for rollback
5. Document rollback procedures
Security:
✓ HTTPS enabled (frontend and backend)
✓ SSL certificates configured
✓ CORS configured correctly
✓ Rate limiting implemented
✓ Input validation on all endpoints
✓ Passwords hashed (bcrypt)
✓ JWT tokens secured
✓ Security headers configured
✓ Secrets stored securely (AWS Secrets Manager)
✓ IAM roles follow least privilege
✓ Security groups properly configured
✓ Database in private subnet (if applicable)
Performance:
✓ CDN configured (Netlify automatic)
✓ Asset compression enabled
✓ Images optimized
✓ Database indexed properly
✓ Connection pooling configured
✓ Caching implemented (Redis/CloudFront)
✓ Auto-scaling configured
✓ Load balancer health checks working
Monitoring:
✓ CloudWatch alarms configured
✓ Error tracking setup (Sentry)
✓ Uptime monitoring active
✓ Log aggregation configured
✓ Performance monitoring enabled
✓ Cost alerts configured
✓ Backup alerts configured
Reliability:
✓ Automated backups enabled
✓ Backup restoration tested
✓ Multi-AZ deployment (production)
✓ Health checks configured
✓ Graceful shutdown implemented (sketch after this checklist)
✓ Error handling comprehensive
✓ Retry logic for transient failures
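Graceful shutdown, from the reliability list above, in sketch form: stop accepting new connections, let in-flight requests finish, then close resources (the pool variable stands in for whatever clients you hold open):
const server = app.listen(process.env.PORT || 3000);

process.on('SIGTERM', () => {
  // Stop accepting new connections; in-flight requests complete
  server.close(async () => {
    await pool.end(); // close the DB pool (stand-in for your own cleanup)
    process.exit(0);
  });
  // Failsafe: force-exit if cleanup hangs
  setTimeout(() => process.exit(1), 10000).unref();
});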
Operations:
✓ CI/CD pipeline configured
✓ Automated testing in place
✓ Documentation complete
✓ Runbooks created
✓ On-call rotation defined
✓ Incident response plan documented
✓ Rollback procedures tested
Netlify:
- Docs: https://docs.netlify.com/
- Support: https://www.netlify.com/support/
AWS:
- EC2: https://docs.aws.amazon.com/ec2/
- RDS: https://docs.aws.amazon.com/rds/
- Lambda: https://docs.aws.amazon.com/lambda/
- Elastic Beanstalk: https://docs.aws.amazon.com/elasticbeanstalk/
Frameworks:
- React: https://react.dev/
- Next.js: https://nextjs.org/docs
- Express: https://expressjs.com/
- NestJS: https://docs.nestjs.com/
Forums:
- Stack Overflow
- Reddit r/webdev, r/aws
- Dev.to
- Netlify Community
Discord/Slack:
- Reactiflux (React community)
- Nodeiflux (Node.js community)
- AWS Community
Courses:
- AWS Training: https://aws.amazon.com/training/
- Frontend Masters
- Udemy
- Pluralsight
Certifications:
- AWS Certified Solutions Architect
- AWS Certified Developer
- AWS Certified SysOps Administrator
This comprehensive guide covers everything needed to deploy a modern web application with:
- Frontend hosted on Netlify (global CDN, automatic SSL, easy deployment)
- Backend on AWS (multiple options: EC2, Elastic Beanstalk, Lambda, ECS)
- Database on AWS (RDS, DynamoDB, DocumentDB)
- Complete DevOps pipeline with CI/CD, monitoring, logging, and backups
- Production-ready security, performance, and reliability
Key recommendations:
- Start simple: Begin with Netlify + EC2 or Elastic Beanstalk
- Automate early: Setup CI/CD from the beginning
- Monitor everything: Implement logging and monitoring on day one
- Security first: Never compromise on security best practices
- Test backups: Regularly test your backup and restore procedures
- Optimize costs: Right-size resources and use reserved instances
- Document well: Keep runbooks and documentation updated
- Plan for scale: Design for growth from the start
Next steps:
- Choose your deployment option (EC2, EB, Lambda, or ECS)
- Setup development environment
- Configure CI/CD pipeline
- Deploy to staging environment
- Test thoroughly
- Deploy to production
- Monitor and optimize
- Iterate and improve
Good luck with your deployment! 🚀
Document Version: 1.0
Last Updated: November 19, 2025
Maintained by: YourAKShaw Inc.