A simple, self-contained PostgreSQL backup solution that runs as a Kamal accessory. It creates scheduled compressed database dumps, uploads them to S3-compatible storage, and automatically cleans up old backups.
We previously used kartoza/pg-backup but ran into issues with environment variables not being accessible inside cron jobs. Since our needs are simple (dump, upload, cleanup, heartbeat), a ~80-line bash script in a minimal container turned out to be more reliable and easier to debug.
```
┌─────────────┐     pg_dump     ┌──────────┐    s3cmd put     ┌──────────────┐
│ PostgreSQL  │ ──────────────> │  backup  │ ───────────────> │ S3-compatible│
│  Database   │                 │ container│                  │   Storage    │
└─────────────┘                 └──────────┘                  └──────────────┘
                                      │
                                      │ curl POST
                                      ▼
                               ┌──────────────┐
                               │  Heartbeat   │
                               │   Monitor    │
                               └──────────────┘
```
- At each scheduled hour (default: 0, 9, 12, 16 UTC), the container runs `pg_dump` with custom format and pipes it through `gzip`
- The compressed dump is uploaded to S3 with a timestamped filename: `2025-01-15-0900.mydb.dump.gz`
- Backups older than the retention period (default: 7 days) are automatically deleted
- A heartbeat POST request is sent to a monitoring endpoint (optional)
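The dump-and-upload step above can be sketched roughly as follows. This is a simplified illustration, not the actual `backup.sh`: the function names are made up, and only the environment variables from the configuration table below are assumed.

```shell
# Timestamped filename, e.g. 2025-01-15-0900.mydb.dump.gz
timestamped_name() {
  echo "$(date -u +%Y-%m-%d-%H%M).${PGDATABASE}.dump.gz"
}

# Dump with custom format, compress, upload, then (optionally) ping the monitor.
run_backup() {
  local file
  file="$(timestamped_name)"
  PGPASSWORD="$POSTGRES_PASS" pg_dump -Fc \
    -h "$POSTGRES_HOST" -p "${POSTGRES_PORT:-5432}" -U "$POSTGRES_USER" \
    "$PGDATABASE" | gzip > "/tmp/${file}"
  s3cmd -c /tmp/.s3cfg put "/tmp/${file}" "s3://${BUCKET}/${file}"
  rm -f "/tmp/${file}"
  if [ -n "${HEARTBEAT_URL:-}" ]; then
    curl -fsS -X POST "$HEARTBEAT_URL" > /dev/null
  fi
}
```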
The container uses a simple `while true` + `sleep 60` loop that checks the current UTC hour against the configured schedule hours. No cron daemon needed — this avoids the common pitfall of cron not inheriting environment variables in Docker containers.
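A minimal sketch of that loop, simplified from the real script (`hour_matches` and `run_scheduler` are illustrative names):

```shell
SCHEDULE="${SCHEDULE:-0 9 12 16}"

# Succeed if the given UTC hour (no leading zero) is one of the SCHEDULE hours.
hour_matches() {
  local h
  for h in $SCHEDULE; do
    [ "$h" = "$1" ] && return 0
  done
  return 1
}

# Poll once a minute; fire at most once per matching hour.
run_scheduler() {
  local last_run="" now
  while true; do
    now="$(date -u +%H)"
    now="${now#0}"              # "09" -> "9" to match SCHEDULE entries
    if hour_matches "$now" && [ "$last_run" != "$now" ]; then
      /backup.sh --once         # the real entry point
      last_run="$now"
    fi
    sleep 60
  done
}
```

Remembering the last hour that fired (`last_run`) is what lets the loop poll every minute without running the backup sixty times in the scheduled hour.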
| File | Description |
|---|---|
| `Dockerfile` | Minimal image based on `postgres:15` with `s3cmd` and `curl` |
| `backup.sh` | The backup script: dump, upload, cleanup, heartbeat |
| `deploy.yml` | Kamal accessory configuration (sanitized example) |
All configuration is done via environment variables:
| Variable | Required | Description |
|---|---|---|
| `PGDATABASE` | Yes | Database name to back up |
| `POSTGRES_HOST` | Yes | PostgreSQL host |
| `POSTGRES_PORT` | No | PostgreSQL port (default: `5432`) |
| `POSTGRES_USER` | Yes | PostgreSQL user |
| `POSTGRES_PASS` | Yes | PostgreSQL password |
| `BUCKET` | Yes | S3 bucket and path (e.g. `my-bucket/backups`) |
| `HOST_BASE` | Yes | S3 endpoint (e.g. `s3.amazonaws.com` or your provider's endpoint) |
| `ACCESS_KEY_ID` | Yes | S3 access key |
| `SECRET_ACCESS_KEY` | Yes | S3 secret key |
| `SCHEDULE` | No | Space-separated UTC hours (default: `0 9 12 16`) |
| `RETENTION_DAYS` | No | Days to keep backups (default: `7`) |
| `HEARTBEAT_URL` | No | URL to POST to after a successful backup |
Boot the accessory:

```
bin/kamal accessory boot db-backup
```

Trigger a one-off backup:

```
bin/kamal accessory exec db-backup --reuse "/backup.sh --once"
```

Tail the logs:

```
bin/kamal accessory logs db-backup
```

List backups in the bucket:

```
bin/kamal accessory exec db-backup --reuse "s3cmd -c /tmp/.s3cfg ls s3://my-bucket/backups/"
```

Build and push the image:

```
docker buildx build --platform linux/arm64 -t myorg/pg-backup --push docker/pg-backup/
```

To restore, download the `.dump.gz` file from S3, then:

```
gunzip -c backup.dump.gz | pg_restore -h localhost -U postgres -d mydb --clean --no-owner
```