Author: Technical Analysis
Date: January 3, 2026
Stack: Docker Compose on UGREEN NAS (Linux)
Couch Commander is a fully automated media acquisition and streaming platform built on Docker, implementing a microservices architecture with VPN-isolated torrent traffic, automated quality management, and seamless user request handling. This writeup examines the technical architecture, network design patterns, and operational characteristics of the system.
- Container Orchestration: Docker Compose with health-based dependency management
- Networking: Custom Docker bridge network with static IP allocation
- VPN Gateway: Gluetun (OpenVPN) with iptables-based kill switch
- Download Client: qBittorrent (WebUI) with VPN network sharing
- Indexer Aggregation: Prowlarr with multi-indexer federation
- Content Automation: Sonarr (TV) and Radarr (Movies) with quality profiles
- User Interface: Jellyseerr for request management
- Media Server: Plex Media Server with auto-scanning
The architecture prioritizes:
- Network isolation - Torrent traffic strictly routed through VPN
- Service decoupling - Each service runs in its own container with defined APIs
- Automated recovery - Health checks and dependency ordering prevent cascading failures
- Resource control - CPU/memory limits prevent resource contention
- Observability - Structured logging and health endpoints for monitoring
The system uses a custom Docker bridge network (media-net) with a /16 subnet, providing 65,534 available IPs for horizontal scaling and clear service segmentation.
graph TB
subgraph Internet["🌐 Internet"]
VPN[ProtonVPN<br/>Netherlands/Sweden/US]
Indexers[Torrent Indexers<br/>via Prowlarr]
TMDB[The Movie DB<br/>Metadata]
end
subgraph NAS["UGREEN NAS - /volume1/plex-infra"]
subgraph Network["Docker Network: media-net (172.18.0.0/16)"]
subgraph VPNContainer["Gluetun VPN Gateway<br/>172.18.0.10"]
GluetunVPN[VPN Tunnel<br/>ProtonVPN OpenVPN]
QBT[qBittorrent<br/>:8080]
end
Prowlarr[Prowlarr<br/>172.18.0.20:9696<br/>Indexer Manager]
Sonarr[Sonarr<br/>172.18.0.30:8989<br/>TV Shows]
Radarr[Radarr<br/>172.18.0.40:7878<br/>Movies]
Jellyseerr[Jellyseerr<br/>172.18.0.50:5055<br/>Request Portal]
Plex[Plex Media Server<br/>:32400<br/>Streaming]
end
subgraph Storage["📁 Storage Volumes"]
Downloads[Downloads/<br/>Torrents]
Movies[Media/Movies/<br/>Final Library]
TVShows[Media/TV/<br/>Final Library]
Config[config/<br/>App Data]
end
end
subgraph Clients["👥 Clients"]
WebUI[Web Browser<br/>Access]
PlexApps[Plex Apps<br/>TV/Mobile/Desktop]
Users[Family/Friends]
end
%% Internet Connections
VPN <-->|Encrypted| GluetunVPN
Indexers <-->|Search/Download| Prowlarr
TMDB <-->|Metadata| Radarr
TMDB <-->|Metadata| Sonarr
%% VPN Container
GluetunVPN -.->|Network Mode:<br/>container:gluetun| QBT
%% Internal Container Communication
Prowlarr <-->|Indexer Sync| Sonarr
Prowlarr <-->|Indexer Sync| Radarr
Sonarr <-->|Download Request| QBT
Radarr <-->|Download Request| QBT
Jellyseerr <-->|TV Requests| Sonarr
Jellyseerr <-->|Movie Requests| Radarr
%% Storage Access
QBT -->|Write| Downloads
Sonarr -->|Move/Rename| TVShows
Radarr -->|Move/Rename| Movies
Plex -->|Read| Movies
Plex -->|Read| TVShows
%% Client Access
WebUI -->|HTTP| Jellyseerr
WebUI -->|HTTP| Prowlarr
WebUI -->|HTTP| Sonarr
WebUI -->|HTTP| Radarr
WebUI -->|HTTP| QBT
PlexApps <-->|HTTPS/HTTP| Plex
Users -->|Requests| Jellyseerr
%% Styling
classDef vpnStyle fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
classDef downloadStyle fill:#e74c3c,stroke:#c0392b,stroke-width:2px,color:#fff
classDef manageStyle fill:#2ecc71,stroke:#27ae60,stroke-width:2px,color:#fff
classDef mediaStyle fill:#9b59b6,stroke:#8e44ad,stroke-width:2px,color:#fff
classDef storageStyle fill:#f39c12,stroke:#d68910,stroke-width:2px,color:#fff
classDef clientStyle fill:#1abc9c,stroke:#16a085,stroke-width:2px,color:#fff
class GluetunVPN,QBT vpnStyle
class Prowlarr downloadStyle
class Sonarr,Radarr,Jellyseerr manageStyle
class Plex mediaStyle
class Downloads,Movies,TVShows,Config storageStyle
class WebUI,PlexApps,Users clientStyle
1. Static IP Allocation
Each service receives a predictable IP from the 172.18.0.0/16 range:
172.18.0.10 - Gluetun VPN Gateway
172.18.0.20 - Prowlarr (Indexer Manager)
172.18.0.30 - Sonarr (TV Automation)
172.18.0.40 - Radarr (Movie Automation)
172.18.0.50 - Jellyseerr (Request Portal)
Static IPs enable:
- Firewall rule stability - Rules don't break on container recreation
- Simplified inter-service communication - Services can reference each other by IP
- Network troubleshooting - Consistent addressing for packet captures and flow analysis
2. Custom Bridge Network
The media-net bridge (br-media) provides:
- Layer 2 isolation from the default Docker bridge
- DNS resolution - Containers can resolve each other by service name
- Multicast support - For service discovery protocols if needed
- MTU optimization - Can be tuned for jumbo frames on high-throughput networks
3. Subnet Sizing
The /16 CIDR provides 65,534 hosts, which is oversized for 6 services but allows:
- Horizontal scaling - Add multiple instances of any service for load balancing
- Service mesh expansion - Room for sidecar proxies (Envoy, Linkerd) if needed
- Testing environments - Spin up parallel stacks without IP conflicts
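For reference, the equivalent network could be created by hand with the Docker CLI. This is a sketch; in this stack Compose creates the network from its `networks:` section, and the bridge name `br-media` is carried over from the text:

```bash
# Create the media-net bridge manually (Compose normally does this)
docker network create \
  --driver bridge \
  --subnet 172.18.0.0/16 \
  --gateway 172.18.0.1 \
  -o com.docker.network.bridge.name=br-media \
  media-net

# Verify the subnet configuration
docker network inspect media-net --format '{{json .IPAM.Config}}'
```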
The VPN gateway implementation uses container network mode sharing, a Docker pattern where one container (qBittorrent) adopts the network stack of another (gluetun). This ensures all qBittorrent traffic is routed through the VPN tunnel with no possibility of leakage.
gluetun:
networks:
media-net:
ipv4_address: 172.18.0.10
qbittorrent:
network_mode: "container:gluetun"
depends_on:
gluetun:
condition: service_healthy

How it works:

1. `gluetun` starts and joins `media-net` with IP `172.18.0.10`
2. `gluetun` establishes the OpenVPN tunnel to ProtonVPN
3. qBittorrent starts with `network_mode: container:gluetun`

- qBittorrent's network namespace is replaced with gluetun's namespace
- All qBittorrent traffic flows through gluetun's interfaces (including VPN tunnel)
- qBittorrent has no direct access to the host network or media-net
Key implications:
- qBittorrent's WebUI is accessible via gluetun's IP: `http://172.18.0.10:8080`
- Port mappings are defined on `gluetun`, not `qbittorrent`
- If the VPN tunnel fails, gluetun's iptables rules block all traffic (kill switch)
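A quick way to confirm the namespace sharing, using standard Docker inspect fields:

```bash
# qBittorrent should report gluetun's namespace as its network mode
docker inspect -f '{{.HostConfig.NetworkMode}}' qbittorrent
# -> container:gluetun

# And it should have no networks of its own
docker inspect -f '{{json .NetworkSettings.Networks}}' qbittorrent
# -> {}
```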
Gluetun implements a fail-secure kill switch using iptables:
# Simplified version of Gluetun's firewall rules
iptables -P OUTPUT DROP # Default drop all outbound
iptables -A OUTPUT -o tun0 -j ACCEPT # Allow VPN tunnel interface
iptables -A OUTPUT -o lo -j ACCEPT # Allow loopback
iptables -A OUTPUT -d 172.18.0.0/16 -j ACCEPT # Allow Docker network
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT # Allow LAN subnet
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT # Allow OpenVPN handshake

Firewall behavior:
- If `tun0` (the VPN interface) goes down, the `OUTPUT -o tun0` rule no longer matches
- All torrent traffic is dropped by the default `OUTPUT DROP` policy
- Health checks fail, triggering a container restart
- qBittorrent cannot leak traffic to clearnet
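A rough way to spot-check the kill switch from inside the shared namespace. A sketch, assuming the qBittorrent image ships `curl` (the LinuxServer image used here does, per its own health check):

```bash
# With the tunnel up this prints the VPN exit IP; if the tunnel drops,
# the default OUTPUT DROP policy makes it time out instead of leaking the host IP
docker exec qbittorrent curl -s --max-time 10 ifconfig.me \
  || echo "no route out - kill switch holding"
```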
Configuration:
FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24,172.18.0.0/16

This allows:

- `192.168.1.0/24` - LAN access to the WebUI from the local network
- `172.18.0.0/16` - Docker network access for API calls from Sonarr/Radarr
Gluetun's health check verifies VPN connectivity:
healthcheck:
test: ["CMD", "/gluetun-entrypoint", "healthcheck"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s

The health check:
- Performs DNS resolution via VPN tunnel (not host resolver)
- Opens TCP connections to `1.1.1.1:443` and `cloudflare.com:443`
- Verifies packets are routed through the `tun0` interface
- Returns success only if the VPN is active and routing traffic
Dependent services wait:
qbittorrent:
depends_on:
gluetun:
condition: service_healthy

qBittorrent won't start until Gluetun's health check passes, preventing torrent activity without VPN protection.
Docker Compose manages service startup order using health-based dependencies. This prevents race conditions where services attempt API calls before their dependencies are ready.
graph TD
Gluetun[Gluetun VPN]
QBT[qBittorrent]
Prowlarr
Sonarr
Radarr
Jellyseerr
Gluetun -->|healthy| QBT
Gluetun -->|healthy| Sonarr
Gluetun -->|healthy| Radarr
Prowlarr -->|healthy| Sonarr
Prowlarr -->|healthy| Radarr
Sonarr -->|healthy| Jellyseerr
Radarr -->|healthy| Jellyseerr
classDef criticalDep fill:#e74c3c,stroke:#c0392b,stroke-width:2px,color:#fff
class Gluetun criticalDep
Critical Path: Gluetun → (Sonarr, Radarr)
- Gluetun is the single point of failure for the automation pipeline
- If Gluetun fails, Sonarr and Radarr cannot send torrents to qBittorrent
- This is intentional - prevents clearnet torrent traffic
Indexer Federation: Prowlarr → (Sonarr, Radarr)
- Prowlarr aggregates multiple torrent indexers (e.g., 1337x, RARBG, Nyaa)
- Sonarr/Radarr sync indexer configurations via Prowlarr's API
- Without Prowlarr, Sonarr/Radarr have no search sources
User Interface: (Sonarr, Radarr) → Jellyseerr
- Jellyseerr acts as a unified request portal
- It proxies requests to Sonarr (TV) and Radarr (Movies) via their APIs
- Users don't need direct access to Sonarr/Radarr
Each service exposes a health endpoint:
Prowlarr/Sonarr/Radarr:
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9696/ping || exit 1"]- Hits the
/pingAPI endpoint - Returns
200 OKif the service is initialized and database is accessible - Fails during startup or database migration
qBittorrent:
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"]
start_period: 120s

- The WebUI takes ~90 seconds to start (slow first boot)
- `start_period: 120s` prevents false failures during startup
Jellyseerr:
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://localhost:5055 || exit 1"]- Uses
wgetinstead ofcurl(Alpine-based image) --spiderperforms HEAD request (no body download)
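The same endpoints the health checks hit can be probed by hand from the LAN. A sketch; `nas-ip` is a placeholder for the NAS address:

```bash
# Probe each WebUI the way its healthcheck does
curl -fsS http://nas-ip:9696/ping && echo "prowlarr: ok"
curl -fsS -o /dev/null http://nas-ip:8080 && echo "qbittorrent: ok"
wget -q --spider http://nas-ip:5055 && echo "jellyseerr: ok"
```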
The architecture gracefully degrades on partial failures:
| Failed Service | Impact | User Experience |
|---|---|---|
| Gluetun | Download pipeline halted | Existing media still streams, new requests queue |
| Prowlarr | Search unavailable | Manual torrent import still works |
| Jellyseerr | Request portal down | Direct API access to Sonarr/Radarr still possible |
| Plex | Streaming down | Downloads continue, library builds up |
No cascading failures: Docker Compose only restarts the failed container, not its dependents.
This section traces a movie request through the entire pipeline, from user click to Plex playback.
sequenceDiagram
participant User
participant Jellyseerr
participant Radarr
participant Prowlarr
participant qBittorrent
participant Gluetun
participant Indexers
participant Storage
participant Plex
User->>Jellyseerr: Request Movie
Jellyseerr->>Radarr: Create Movie Entry
Radarr->>Prowlarr: Search for Release
Prowlarr->>Indexers: Query Indexers
Indexers-->>Prowlarr: Return Results
Prowlarr-->>Radarr: Return Releases
Radarr->>Radarr: Filter by Quality Profile
Radarr->>qBittorrent: Send Torrent
qBittorrent->>Gluetun: Route via VPN
Gluetun->>Indexers: Download (Encrypted)
qBittorrent->>Storage: Save to Downloads/
qBittorrent-->>Radarr: Download Complete
Radarr->>Storage: Move to Media/Movies/
Radarr->>Storage: Rename per Format
Radarr-->>Jellyseerr: Update Status
Plex->>Storage: Auto-Scan Library
Plex->>Plex: Match Metadata
User->>Plex: Stream Movie
Step 1-2: User Request → Radarr API Call
POST /api/v1/request HTTP/1.1
Host: jellyseerr:5055
Content-Type: application/json
{
"mediaType": "movie",
"mediaId": 425274, // TMDB ID for "Now You See Me: Now You Don't"
"is4k": false
}

Jellyseerr:
- Looks up TMDB ID 425274 to get metadata (title, year, cast)
- Calls Radarr API to create movie entry:
POST /api/v3/movie HTTP/1.1
Host: radarr:7878
X-Api-Key: 8354f94ef52e43939315654892023cc5
{
"title": "Now You See Me: Now You Don't",
"tmdbId": 425274,
"year": 2025,
"qualityProfileId": 6,
"monitored": true,
"addOptions": {
"searchForMovie": true
}
}

Step 3-6: Radarr → Prowlarr → Indexers

Radarr triggers an automatic search:
GET /api/v1/indexer/all/results?query=now+you+see+me+2025&categories=2000,2010
Host: prowlarr:9696
X-Api-Key: 4b7bd51caa9d4f2a8c4d0e7c8ceaa738

Prowlarr:
- Fans out search to 10+ configured indexers in parallel
- Normalizes responses (different indexers use different APIs)
- Returns unified results:
[
{
"guid": "https://1337x.to/torrent/12345/",
"title": "Now.You.See.Me.Now.You.Dont.2025.2160p.WEB-DL.DDP5.1.H265-BTM",
"size": 10800000000,
"seeders": 448,
"indexer": "1337x"
}
]

Step 7: Quality Filtering

Radarr applies quality profile (ID 6):
{
"name": "HD - 720p/1080p",
"cutoff": 4,
"upgradeAllowed": false,
"items": [
{ "quality": "WEBDL-1080p", "allowed": true },
{ "quality": "Bluray-1080p", "allowed": true },
{ "quality": "WEBDL-720p", "allowed": true }
]
}

- Filters out 2160p releases (too large)
- Filters out CAM/TS quality (too low)
- Selects highest-seeded 1080p WEB-DL
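The active profile can be inspected through Radarr's v3 API. A sketch; profile ID 6 and the API key are the example values from above, and `jq` is assumed to be available:

```bash
# Fetch quality profile 6 and list which qualities it allows
curl -s http://radarr:7878/api/v3/qualityprofile/6 \
  -H "X-Api-Key: 8354f94ef52e43939315654892023cc5" \
  | jq '{name, cutoff, allowed: [.items[] | select(.allowed) | .quality.name]}'
```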
Step 8-9: Torrent Submission via VPN

Radarr calls the qBittorrent API:
POST /api/v2/torrents/add HTTP/1.1
Host: 172.18.0.10:8080
Content-Type: application/x-www-form-urlencoded
urls=magnet:?xt=urn:btih:abc123...&category=radarr&paused=false

qBittorrent:
- Resolves magnet link to torrent metadata via DHT
- Connects to peers through Gluetun's VPN tunnel (all TCP/UDP flows via `tun0`)
- Downloads pieces to `/data/Downloads/radarr/`
Step 10-12: File Import and Renaming

When the download completes, Radarr picks it up (by polling qBittorrent's API) and triggers an import scan:
POST /api/v3/command HTTP/1.1
Host: radarr:7878
{
"name": "DownloadedMoviesScan",
"path": "/data/Downloads/radarr/"
}

Radarr:
- Scans download directory for completed files
- Verifies quality matches expected profile
- Atomic move to final location:
mv /data/Downloads/radarr/Now.You.See.Me.Now.You.Dont.2025.2160p.WEB-DL.DDP5.1.H265-BTM/movie.mkv \
/data/Media/Movies/Now You See Me - Now You Don't (2025)/Now You See Me - Now You Don't (2025).mkv

- Deletes leftover files (NFO, samples)
- Updates Jellyseerr via webhook
Step 13-15: Plex Library Scan
Plex monitors /data/Media/Movies/ for file changes (inotify):
inotify_add_watch(fd, "/data/Media/Movies/", IN_CREATE | IN_MOVED_TO);

On detecting a new file, Plex:
- Runs Plex Media Scanner on the parent directory
- Extracts video metadata (codec, resolution, duration)
- Queries TMDB API for poster, synopsis, cast
- Generates video thumbnails and preview clips
- Adds to searchable library database
Step 16: User Streams via Plex

The user opens a Plex client, which:

- Fetches library updates via `/library/sections/1/all`
- On playback, Plex transcodes if needed:
# Plex Transcoder command (simplified)
/usr/lib/plexmediaserver/Plex\ Transcoder \
-i "/data/Media/Movies/Now You See Me - Now You Don't (2025)/movie.mkv" \
-codec:v:0 h264 -profile:v:0 high -level:v:0 4.1 \
-maxrate:v:0 4000k -bufsize:v:0 8000k \
-f hls /transcode/session-abc123/stream.m3u8

- Streams via HLS over HTTP
The storage layer uses Docker bind mounts to map host directories into containers, enabling shared access to media files.
Host: /volume1/plex-infra/
├── Downloads/ # qBittorrent download directory
│ ├── radarr/ # Movie downloads (managed by Radarr)
│ └── sonarr/ # TV downloads (managed by Sonarr)
├── Media/ # Final media library
│ ├── Movies/ # Plex Movies library
│ │ └── [Movie Title (Year)]/ # Per-movie folders
│ │ └── [Movie Title (Year)].mkv
│ └── TV/ # Plex TV library
│ └── [Show Title]/ # Per-show folders
│ └── Season XX/
│ └── [Show] - sXXeYY - [Episode].mkv
└── config/ # Application data
├── gluetun/
├── prowlarr/
├── radarr/
└── sonarr/
Container perspective:
volumes:
- /volume1/plex-infra:/data # Radarr, Sonarr, qBittorrent
- ./config/radarr:/config # Radarr config

Inside the Radarr container:

- `/data/Downloads/` is where qBittorrent writes completed downloads
- `/data/Media/Movies/` is the final import destination
- No `mv` across filesystems (same mount point = atomic move)
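Whether a move will be atomic can be checked by comparing device IDs. A sketch, assuming a `stat` that supports `-c '%d'` (true of both GNU coreutils and busybox):

```bash
# Same device number => same filesystem => mv is an atomic rename
docker exec radarr stat -c '%n dev=%d' /data/Downloads /data/Media/Movies
```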
Why atomic moves matter: Traditional copy-then-delete is non-atomic:
# Non-atomic (BAD)
cp /data/Downloads/movie.mkv /data/Media/Movies/movie.mkv # Slow, I/O heavy
rm /data/Downloads/movie.mkv # Leaves a window where the file exists twice

Atomic move (same filesystem):
# Atomic move (GOOD)
mv /data/Downloads/movie.mkv /data/Media/Movies/movie.mkv

How it works:
- Kernel updates inode's parent directory pointer
- No data blocks are copied (instant operation)
- No window where file is partially written
- Plex sees complete file immediately (no partial scans)
Implementation in Radarr:
Radarr uses the rename(2) syscall:

rename("/data/Downloads/radarr/movie.mkv", "/data/Media/Movies/Title (Year)/movie.mkv");

If source and destination are on different filesystems, rename(2) fails with EXDEV and Radarr falls back to copy-then-delete.
Containers run as non-root user via PUID and PGID:
PUID=1000 # User 'joey'
PGID=10 # Group 'admin'

Inside the containers:
# Container sees files owned by UID 1000
$ ls -l /data/Media/Movies/
drwxr-xr-x joey admin 4096 Dec 28 03:00 Now You See Me (2025)/

On the host:
# Host sees same files
$ ls -l /volume1/plex-infra/Media/Movies/
drwxr-xr-x joey admin 4096 Dec 28 03:00 Now You See Me (2025)/

Why this matters:
- qBittorrent writes files as UID 1000
- Radarr moves files as UID 1000 (no permission errors)
- Plex reads files as UID 1000 (no access denied)
- All services share the same user context
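A quick way to confirm the shared user context. A sketch; `.permtest` is a hypothetical throwaway file name:

```bash
# Create a file as the mapped user from inside the container...
docker exec -u 1000:10 radarr touch /data/.permtest
# ...and confirm the host sees numeric owner 1000:10
ls -ln /volume1/plex-infra/.permtest && rm /volume1/plex-infra/.permtest
```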
Direct I/O for large files:
Plex and qBittorrent can benefit from the O_DIRECT flag to bypass the page cache for multi-GB files:

int fd = open("/data/Media/movie.mkv", O_RDONLY | O_DIRECT);

- Reduces memory pressure on the NAS
- Prevents cache pollution from streaming workloads
- Requires aligned reads (512-byte or 4K blocks)
XFS allocation groups (if using XFS):
mkfs.xfs -d agcount=32 /dev/sda1

- Spreads metadata across 32 allocation groups
- Enables parallel I/O for concurrent downloads
- Reduces lock contention in directory operations
Isolation via Docker networks:
networks:
media-net:
driver: bridge
ipam:
config:
- subnet: 172.18.0.0/16Attack surface reduction:
- Containers on `media-net` cannot directly reach the host network
- Inter-container traffic flows over the bridge (inspectable via `tcpdump`)
- Containers on different networks cannot communicate (Docker firewall rules)
Example iptables rules (auto-generated by Docker):
iptables -A DOCKER-ISOLATION-STAGE-1 \
-i br-media -o docker0 -j DROP

This prevents containers on media-net from reaching containers on the default bridge (docker0).
Defense in depth:
- Gluetun iptables rules - Block non-VPN traffic at container level
- Docker network isolation - qBittorrent has no route to clearnet
- Health check termination - If VPN fails, Docker restarts container (stops torrents)
Leak test:
# Inside qBittorrent container
curl ifconfig.me
# Returns the VPN exit node IP, never the host IP

Current implementation:

RADARR_API_KEY=8354f94ef52e43939315654892023cc5

- Stored in a plain-text `.env` file
- Injected as environment variables
Production improvements:
- Use Docker secrets (encrypted at rest):
secrets:
radarr_api_key:
external: true
services:
radarr:
secrets:
- radarr_api_key

- Rotate keys quarterly
- Scope keys per service (Jellyseerr gets read-only Radarr key)
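With Compose-managed secrets, the key appears inside the container as a file under `/run/secrets` rather than in the environment. A sketch of how this could be verified:

```bash
# The secret is mounted as a file, readable only inside the container
docker exec radarr cat /run/secrets/radarr_api_key
# Nothing sensitive should remain in the environment
docker exec radarr env | grep -i api_key || echo "no API key in env"
```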
Read-only root filesystem:
services:
prowlarr:
read_only: true
tmpfs:
- /tmp

- Prevents malware from persisting in the container filesystem
- Forces all writes to volumes (easier to audit)
Capability dropping:
services:
prowlarr:
cap_drop:
- ALL
cap_add:
- CHOWN
- SETGID
- SETUID

- Drops all Linux capabilities by default
- Only grants necessary caps (user switching)
AppArmor/SELinux profiles:
services:
qbittorrent:
security_opt:
- apparmor=docker-default

- Restricts syscalls (e.g., blocks `mount`, `reboot`)
- Prevents container escape exploits
services:
gluetun:
cpus: '0.5'
mem_limit: 1g

Why limit resources?
- Prevent noisy neighbor - qBittorrent can't starve Plex during heavy downloads
- Predictable performance - Sonarr API latency stays consistent
- OOM protection - Container dies instead of crashing entire NAS
Limit sizing logic:
| Service | CPU | Memory | Justification |
|---|---|---|---|
| Gluetun | 0.5 | 1 GB | VPN encryption is CPU-bound, minimal memory |
| qBittorrent | 2.0 | 4 GB | Handles 100+ torrents, large peer tables |
| Prowlarr | 1.0 | 1 GB | I/O-bound (SQLite), infrequent queries |
| Sonarr | 1.5 | 2 GB | Episode matching is CPU-heavy, large DB |
| Radarr | 1.5 | 2 GB | Same as Sonarr |
| Jellyseerr | 1.0 | 1 GB | Lightweight Node.js app |
| Plex | ∞ | ∞ | Transcoding needs all available resources |
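Actual usage can be compared against these limits live with `docker stats` (a sketch):

```bash
# One-shot snapshot of CPU/memory usage vs. the configured limits
docker stats --no-stream --format \
  'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}' \
  gluetun qbittorrent prowlarr sonarr radarr jellyseerr
```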
Plex transcoding bottleneck:
# 4K → 1080p transcode can use 8+ CPU cores
Plex Transcoder:
CPU: 800% (8 cores)
Memory: 6 GB (buffer + decoder state)

Plex intentionally has no limits to prioritize user experience.
Docker CPU scheduler (CFS):
cpus: '1.5'

Translates to:
cpu.cfs_quota_us = 150000 # 1.5 * 100ms period
cpu.cfs_period_us = 100000

- The container gets at most 150 ms of CPU time per 100 ms period
- Excess cycles go to other containers
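The value Docker records for this setting can be read back via `docker inspect` (a sketch):

```bash
# Compose's cpus: '1.5' is stored as nano-CPUs
docker inspect -f '{{.HostConfig.NanoCpus}}' radarr
# -> 1500000000
```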
Memory cgroup limits:
mem_limit: 2g

Triggers the OOM killer when RSS exceeds 2 GB:

oom_kill_process(radarr) -> SIGKILL

Docker then restarts the container automatically.
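Whether the last exit was OOM-driven is visible in the container state (a sketch; exit code 137 corresponds to SIGKILL):

```bash
docker inspect -f 'oom={{.State.OOMKilled}} exit={{.State.ExitCode}}' radarr
```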
Container metrics (Prometheus exporter):
services:
cadvisor:
image: gcr.io/cadvisor/cadvisor
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- "8081:8080"Exposes metrics:
container_cpu_usage_seconds_total{name="radarr"} 1234.56
container_memory_usage_bytes{name="radarr"} 512000000
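The exported series can be spot-checked with a direct scrape. A sketch; `nas-ip` is a placeholder:

```bash
# Pull cAdvisor's metrics page and filter for the Radarr container
curl -s http://nas-ip:8081/metrics | grep 'name="radarr"' | head -n 5
```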
Application metrics:
- Radarr/Sonarr metrics can be exposed to Prometheus at `/metrics` (via a sidecar exporter such as Exportarr)
- Gluetun logs VPN reconnect events
Alerting thresholds:
# Alert if qBittorrent uses >90% of 4GB limit
- alert: QBittorrentHighMemory
expr: container_memory_usage_bytes{name="qbittorrent"} > 3600000000
for: 5m

Best practices implemented:
- Fast failure detection:
interval: 30s
timeout: 10s
retries: 5- Check every 30s
- Fail if no response in 10s
- Restart after 5 consecutive failures (2.5 minutes)
- Startup grace period:
start_period: 120s

- Don't count failures in the first 2 minutes
- Allows slow initialization (DB migrations)
- Lightweight checks:
curl -f http://localhost:9696/ping

- `/ping` returns `200 OK` without a heavy DB query
- Avoids false failures during high load
Docker restart policies:
restart: unless-stopped

- Restart on crash (exit code ≠ 0)
- Restart on OOM kill
- Don't restart if manually stopped (`docker stop`)
Exponential backoff: Docker waits progressively longer between restarts, doubling the delay each time (starting at 100 ms):

Attempt 1: Immediate
Attempt 2: 100 ms
Attempt 3: 200 ms
Attempt 4: 400 ms
...
Attempt N: 100 ms × 2^(N-2); the delay resets once the container runs stably
Cascading recovery: If Gluetun crashes:
- qBittorrent's health check fails (can't reach `localhost:8080`)
- Docker restarts qBittorrent
- qBittorrent waits for Gluetun to become healthy
- Pipeline resumes
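The whole recovery sequence can be watched live via the Docker event stream (a sketch):

```bash
# Stream health transitions and restarts as they happen
docker events \
  --filter 'event=health_status' \
  --filter 'event=restart' \
  --format '{{.Time}} {{.Actor.Attributes.name}} {{.Status}}'
```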
Stack lifecycle:
# manage-stack.sh
./manage-stack.sh start # docker-compose up -d
./manage-stack.sh stop # docker-compose stop
./manage-stack.sh restart # docker-compose restart
./manage-stack.sh logs radarr # docker-compose logs -f radarr

Health monitoring:
# healthcheck-services.sh
#!/bin/bash
for service in gluetun prowlarr sonarr radarr jellyseerr; do
health=$(docker inspect --format='{{.State.Health.Status}}' "$service")
echo "$service: $health"
done

Failed torrent cleanup:
# cleanup-failed-torrents.sh
curl -X POST http://localhost:8080/api/v2/torrents/delete \
-d 'hashes=...' -d 'deleteFiles=true'Update procedure:
# 1. Pull latest images
docker-compose pull
# 2. Recreate containers (zero-downtime for stateless services)
docker-compose up -d
# 3. Verify health
./healthcheck-services.sh

Backup strategy:
# Backup application configs (SQLite DBs)
tar -czf backup-$(date +%Y%m%d).tar.gz ./config/
# Media files are immutable (no backup needed, re-download if lost)

Log rotation:
services:
radarr:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"- Keep max 3 files × 10 MB = 30 MB per service
- Prevents
/var/lib/dockerfrom filling disk
| Service | URL | Purpose |
|---|---|---|
| qBittorrent | http://nas-ip:8080 | Torrent management |
| Prowlarr | http://nas-ip:9696 | Indexer config |
| Sonarr | http://nas-ip:8989 | TV automation |
| Radarr | http://nas-ip:7878 | Movie automation |
| Jellyseerr | http://nas-ip:5055 | User requests |
| Plex | http://nas-ip:32400/web | Media streaming |
All services use API keys in HTTP headers:
curl http://radarr:7878/api/v3/movie \
-H "X-Api-Key: 8354f94ef52e43939315654892023cc5"# Check VPN status
docker exec gluetun curl ifconfig.me
# View Radarr logs
docker-compose logs -f --tail 100 radarr
# Force library scan
curl -X POST http://localhost:32400/library/sections/1/refresh?X-Plex-Token=...
# Restart failed service
docker-compose restart sonarr

End of Technical Deep Dive
This architecture demonstrates production-ready patterns for containerized media automation: defense-in-depth security with VPN isolation, health-based orchestration for reliable startup, atomic file operations for data integrity, and resource limits for predictable performance. The design prioritizes operational simplicity while maintaining security and reliability.