@jankeesvw
Created March 11, 2026 05:49
PostgreSQL backup to S3 with Kamal — simple custom container replacing kartoza/pg-backup

PostgreSQL Backup to S3 with Kamal

A simple, self-contained PostgreSQL backup solution that runs as a Kamal accessory. It creates scheduled compressed database dumps, uploads them to S3-compatible storage, and automatically cleans up old backups.

Why a custom container?

We previously used kartoza/pg-backup but ran into issues with environment variables not being accessible inside cron jobs. Since our needs are simple (dump, upload, cleanup, heartbeat), a ~80-line bash script in a minimal container turned out to be more reliable and easier to debug.

How it works

┌──────────────┐    pg_dump     ┌───────────┐   s3cmd put    ┌───────────────┐
│  PostgreSQL  │ ─────────────> │  backup   │ ─────────────> │ S3-compatible │
│  Database    │                │ container │                │    Storage    │
└──────────────┘                └───────────┘                └───────────────┘
                                      │
                                      │ curl POST
                                      ▼
                              ┌──────────────┐
                              │  Heartbeat   │
                              │   Monitor    │
                              └──────────────┘

Backup cycle

  1. At each scheduled hour (default: 0, 9, 12, and 16 UTC), the container runs pg_dump in custom format (-Fc) and pipes the output through gzip
  2. The compressed dump is uploaded to S3 with a timestamped filename: 2025-01-15-0900.mydb.dump.gz
  3. Old backups beyond the retention period (default: 7 days) are automatically deleted
  4. A heartbeat POST request is sent to a monitoring endpoint (optional)
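The naming and retention rules above can be sketched in isolation (`mydb` and the concrete dates are illustrative):

```shell
# Timestamped filename, matching the script's naming scheme.
PGDATABASE="mydb"
timestamp=$(date -u +%Y-%m-%d-%H%M)            # e.g. 2025-01-15-0900
filename="${timestamp}.${PGDATABASE}.dump.gz"

# Retention check: ISO dates sort lexicographically, so comparing the first
# 10 characters (YYYY-MM-DD) against the cutoff as plain strings works.
cutoff=$(date -u -d "-7 days" +%Y-%m-%d)
is_expired() { [[ "${1:0:10}" < "${cutoff}" ]]; }
```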

Scheduling

The container uses a simple while true + sleep 60 loop that checks the current UTC hour against configured schedule hours. No cron daemon needed — this avoids the common pitfall of cron not inheriting environment variables in Docker containers.
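The hour-matching can be pulled into a small function to show the idea (a sketch; the actual script inlines this check in its loop):

```shell
SCHEDULE="${SCHEDULE:-0 9 12 16}"

# Returns success when the given UTC hour is in SCHEDULE and the minute is 0.
should_run() {
  local hour=$1 min=$2 h
  (( min == 0 )) || return 1
  for h in ${SCHEDULE}; do
    (( hour == h )) && return 0
  done
  return 1
}

# The container's main loop then reduces to:
#   while true; do
#     should_run "$(date -u +%-H)" "$(date -u +%-M)" && run_backup
#     sleep 60
#   done
```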

Files

| File | Description |
| --- | --- |
| `Dockerfile` | Minimal image based on `postgres:15` with `s3cmd` and `curl` |
| `backup.sh` | The backup script: dump, upload, cleanup, heartbeat |
| `deploy.yml` | Kamal accessory configuration (sanitized example) |

Configuration

All configuration is done via environment variables:

| Variable | Required | Description |
| --- | --- | --- |
| `PGDATABASE` | Yes | Database name to back up |
| `POSTGRES_HOST` | Yes | PostgreSQL host |
| `POSTGRES_PORT` | No | PostgreSQL port (default: 5432) |
| `POSTGRES_USER` | Yes | PostgreSQL user |
| `POSTGRES_PASS` | Yes | PostgreSQL password |
| `BUCKET` | Yes | S3 bucket and path (e.g. `my-bucket/backups`) |
| `HOST_BASE` | Yes | S3 endpoint (e.g. `s3.amazonaws.com` or your provider's endpoint) |
| `ACCESS_KEY_ID` | Yes | S3 access key |
| `SECRET_ACCESS_KEY` | Yes | S3 secret key |
| `SCHEDULE` | No | Space-separated UTC hours (default: `0 9 12 16`) |
| `RETENTION_DAYS` | No | Days to keep backups (default: 7) |
| `HEARTBEAT_URL` | No | URL to POST after a successful backup |
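Because everything is configured through environment variables, a fail-fast check at startup makes a misconfigured accessory obvious in the logs. This guard is a suggested addition, not part of the shipped script (`require_env` is a hypothetical helper):

```shell
# Abort early if a required variable is unset or empty. ${!var:?...} uses
# bash indirect expansion and exits with a message naming the variable.
require_env() {
  local var
  for var in "$@"; do
    : "${!var:?is required but not set}"
  done
}

# Example with illustrative values:
PGDATABASE="myapp_production" BUCKET="my-bucket/backups"
require_env PGDATABASE BUCKET
```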

Usage

Deploy with Kamal

```shell
bin/kamal accessory boot db-backup
```

Run a one-time backup

```shell
bin/kamal accessory exec db-backup --reuse "/backup.sh --once"
```

View logs

```shell
bin/kamal accessory logs db-backup
```

List backups in S3

```shell
bin/kamal accessory exec db-backup --reuse "s3cmd -c /tmp/.s3cfg ls s3://my-bucket/backups/"
```

Rebuild and push image

```shell
docker buildx build --platform linux/arm64 -t myorg/pg-backup --push docker/pg-backup/
```

Restoring a backup

Download the `.dump.gz` file from S3, then restore with:

```shell
gunzip -c backup.dump.gz | pg_restore -h localhost -U postgres -d mydb --clean --no-owner
```
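Because the filenames embed the timestamp, the newest backup can be found with a plain sort; the filenames below are illustrative:

```shell
# Lexicographic sort works because names start with YYYY-MM-DD-HHMM.
latest=$(printf '%s\n' \
  "2025-01-14-0900.mydb.dump.gz" \
  "2025-01-15-0900.mydb.dump.gz" \
  "2025-01-13-1600.mydb.dump.gz" | sort | tail -n 1)
echo "${latest}"   # 2025-01-15-0900.mydb.dump.gz
```

In practice the same `sort | tail -n 1` can be applied to the `s3cmd ls` listing to pick the file to download.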
backup.sh

```shell
#!/bin/bash
set -euo pipefail

echo "Starting pg-backup container"
echo "Database: ${PGDATABASE}"
echo "Bucket: ${BUCKET}"
echo "Schedule hours (UTC): ${SCHEDULE:-0 9 12 16}"
echo "Retention days: ${RETENTION_DAYS:-7}"

# Write the s3cmd config from environment variables.
cat > /tmp/.s3cfg <<EOF
[default]
access_key = ${ACCESS_KEY_ID}
secret_key = ${SECRET_ACCESS_KEY}
host_base = ${HOST_BASE}
host_bucket = %(bucket)s.${HOST_BASE}
use_https = True
EOF

export PGPASSWORD="${POSTGRES_PASS}"

run_backup() {
  local timestamp
  timestamp=$(date -u +%Y-%m-%d-%H%M)
  local filename="${timestamp}.${PGDATABASE}.dump.gz"
  local s3_path="s3://${BUCKET}/${filename}"

  echo "[$(date -u)] Starting backup: ${filename}"
  pg_dump -h "${POSTGRES_HOST}" -p "${POSTGRES_PORT:-5432}" -U "${POSTGRES_USER}" \
    -Fc --clean "${PGDATABASE}" | gzip > /tmp/backup.gz
  s3cmd -c /tmp/.s3cfg put /tmp/backup.gz "${s3_path}"
  rm -f /tmp/backup.gz
  echo "[$(date -u)] Uploaded ${s3_path}"

  # Delete backups whose date prefix is older than the retention cutoff.
  local cutoff
  cutoff=$(date -u -d "-${RETENTION_DAYS:-7} days" +%Y-%m-%d)
  echo "[$(date -u)] Removing backups older than ${cutoff}"
  s3cmd -c /tmp/.s3cfg ls "s3://${BUCKET}/" | while read -r line; do
    local file
    file=$(echo "${line}" | awk '{print $4}')
    local basename
    basename=$(basename "${file}")
    local file_date
    file_date=$(echo "${basename}" | grep -oP '^\d{4}-\d{2}-\d{2}' || true)
    if [[ -n "${file_date}" && "${file_date}" < "${cutoff}" ]]; then
      echo "[$(date -u)] Deleting old backup: ${file}"
      s3cmd -c /tmp/.s3cfg del "${file}"
    fi
  done

  # Optional heartbeat; a failure here should not fail the backup.
  if [[ -n "${HEARTBEAT_URL:-}" ]]; then
    curl -s -X POST "${HEARTBEAT_URL}" || echo "[$(date -u)] Heartbeat failed"
  fi
  echo "[$(date -u)] Backup complete"
}

# --once: run a single backup and exit (used via `kamal accessory exec`).
if [[ "${1:-}" == "--once" ]]; then
  run_backup
  exit 0
fi

# Scheduler: check once a minute whether the current UTC hour is in SCHEDULE.
IFS=' ' read -ra SCHEDULE_HOURS <<< "${SCHEDULE:-0 9 12 16}"
echo "Waiting for next scheduled backup..."
while true; do
  current_hour=$(date -u +%-H)
  current_min=$(date -u +%-M)
  for hour in "${SCHEDULE_HOURS[@]}"; do
    if [[ "${current_hour}" -eq "${hour}" && "${current_min}" -eq 0 ]]; then
      run_backup || echo "[$(date -u)] Backup failed"
    fi
  done
  sleep 60
done
```
deploy.yml

```yaml
# Kamal deploy.yml — only the db-backup accessory section shown.
# Secrets come from .kamal/secrets (loaded automatically by Kamal).
accessories:
  db-backup:
    image: myorg/pg-backup
    host: 10.0.0.1 # your server IP
    env:
      clear:
        POSTGRES_PORT: 5432
        PGDATABASE: myapp_production
        BUCKET: my-bucket/backups
        HOST_BASE: s3.eu-central-1.amazonaws.com # or your S3-compatible endpoint
        SCHEDULE: "0 9 12 16"
        RETENTION_DAYS: 7
        HEARTBEAT_URL: "https://myapp.example.com/healthcheck/backup"
      secret:
        - ACCESS_KEY_ID
        - SECRET_ACCESS_KEY
        - POSTGRES_USER
        - POSTGRES_PASS
        - POSTGRES_HOST
```
Dockerfile

```dockerfile
FROM postgres:15

# s3cmd for S3 uploads, curl for the heartbeat POST.
RUN apt-get update \
  && apt-get install -y --no-install-recommends s3cmd ca-certificates curl \
  && rm -rf /var/lib/apt/lists/*

COPY backup.sh /backup.sh
RUN chmod +x /backup.sh

CMD ["/backup.sh"]
```