@jmcdice
Last active January 4, 2026 04:26
Couch Commander: A Production-Grade Media Automation Stack - Technical Deep Dive

Author: Technical Analysis
Date: January 3, 2026
Stack: Docker Compose on UGREEN NAS (Linux)


1. Introduction

Couch Commander is a fully automated media acquisition and streaming platform built on Docker, implementing a microservices architecture with VPN-isolated torrent traffic, automated quality management, and seamless user request handling. This writeup examines the technical architecture, network design patterns, and operational characteristics of the system.

Technology Stack

  • Container Orchestration: Docker Compose with health-based dependency management
  • Networking: Custom Docker bridge network with static IP allocation
  • VPN Gateway: Gluetun (OpenVPN) with iptables-based kill switch
  • Download Client: qBittorrent (WebUI) with VPN network sharing
  • Indexer Aggregation: Prowlarr with multi-indexer federation
  • Content Automation: Sonarr (TV) and Radarr (Movies) with quality profiles
  • User Interface: Jellyseerr for request management
  • Media Server: Plex Media Server with auto-scanning

Design Philosophy

The architecture prioritizes:

  1. Network isolation - Torrent traffic strictly routed through VPN
  2. Service decoupling - Each service runs in its own container with defined APIs
  3. Automated recovery - Health checks and dependency ordering prevent cascading failures
  4. Resource control - CPU/memory limits prevent resource contention
  5. Observability - Structured logging and health endpoints for monitoring

2. Network Architecture Deep Dive

The system uses a custom Docker bridge network (media-net) with a /16 subnet, providing 65,534 available IPs for horizontal scaling and clear service segmentation.

graph TB
    subgraph Internet["🌐 Internet"]
        VPN[ProtonVPN<br/>Netherlands/Sweden/US]
        Indexers[Torrent Indexers<br/>via Prowlarr]
        TMDB[The Movie DB<br/>Metadata]
    end
    
    subgraph NAS["UGREEN NAS - /volume1/plex-infra"]
        subgraph Network["Docker Network: media-net (172.18.0.0/16)"]
            
            subgraph VPNContainer["Gluetun VPN Gateway<br/>172.18.0.10"]
                GluetunVPN[VPN Tunnel<br/>ProtonVPN OpenVPN]
                QBT[qBittorrent<br/>:8080]
            end
            
            Prowlarr[Prowlarr<br/>172.18.0.20:9696<br/>Indexer Manager]
            
            Sonarr[Sonarr<br/>172.18.0.30:8989<br/>TV Shows]
            
            Radarr[Radarr<br/>172.18.0.40:7878<br/>Movies]
            
            Jellyseerr[Jellyseerr<br/>172.18.0.50:5055<br/>Request Portal]
            
            Plex[Plex Media Server<br/>:32400<br/>Streaming]
        end
        
        subgraph Storage["📁 Storage Volumes"]
            Downloads[Downloads/<br/>Torrents]
            Movies[Media/Movies/<br/>Final Library]
            TVShows[Media/TV/<br/>Final Library]
            Config[config/<br/>App Data]
        end
    end
    
    subgraph Clients["👥 Clients"]
        WebUI[Web Browser<br/>Access]
        PlexApps[Plex Apps<br/>TV/Mobile/Desktop]
        Users[Family/Friends]
    end
    
    %% Internet Connections
    VPN <-->|Encrypted| GluetunVPN
    Indexers <-->|Search/Download| Prowlarr
    TMDB <-->|Metadata| Radarr
    TMDB <-->|Metadata| Sonarr
    
    %% VPN Container
    GluetunVPN -.->|Network Mode:<br/>container:gluetun| QBT
    
    %% Internal Container Communication
    Prowlarr <-->|Indexer Sync| Sonarr
    Prowlarr <-->|Indexer Sync| Radarr
    Sonarr <-->|Download Request| QBT
    Radarr <-->|Download Request| QBT
    Jellyseerr <-->|TV Requests| Sonarr
    Jellyseerr <-->|Movie Requests| Radarr
    
    %% Storage Access
    QBT -->|Write| Downloads
    Sonarr -->|Move/Rename| TVShows
    Radarr -->|Move/Rename| Movies
    Plex -->|Read| Movies
    Plex -->|Read| TVShows
    
    %% Client Access
    WebUI -->|HTTP| Jellyseerr
    WebUI -->|HTTP| Prowlarr
    WebUI -->|HTTP| Sonarr
    WebUI -->|HTTP| Radarr
    WebUI -->|HTTP| QBT
    PlexApps <-->|HTTPS/HTTP| Plex
    Users -->|Requests| Jellyseerr
    
    %% Styling
    classDef vpnStyle fill:#4a90e2,stroke:#2e5c8a,stroke-width:3px,color:#fff
    classDef downloadStyle fill:#e74c3c,stroke:#c0392b,stroke-width:2px,color:#fff
    classDef manageStyle fill:#2ecc71,stroke:#27ae60,stroke-width:2px,color:#fff
    classDef mediaStyle fill:#9b59b6,stroke:#8e44ad,stroke-width:2px,color:#fff
    classDef storageStyle fill:#f39c12,stroke:#d68910,stroke-width:2px,color:#fff
    classDef clientStyle fill:#1abc9c,stroke:#16a085,stroke-width:2px,color:#fff
    
    class GluetunVPN,QBT vpnStyle
    class Prowlarr downloadStyle
    class Sonarr,Radarr,Jellyseerr manageStyle
    class Plex mediaStyle
    class Downloads,Movies,TVShows,Config storageStyle
    class WebUI,PlexApps,Users clientStyle

Network Design Rationale

1. Static IP Allocation
Each service receives a predictable IP from the 172.18.0.0/16 range:

172.18.0.10  - Gluetun VPN Gateway
172.18.0.20  - Prowlarr (Indexer Manager)
172.18.0.30  - Sonarr (TV Automation)
172.18.0.40  - Radarr (Movie Automation)
172.18.0.50  - Jellyseerr (Request Portal)

Static IPs enable:

  • Firewall rule stability - Rules don't break on container recreation
  • Simplified inter-service communication - Services can reference each other by IP
  • Network troubleshooting - Consistent addressing for packet captures and flow analysis
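
A minimal Compose fragment showing how a service is pinned to a static address on media-net might look like the following; the image tag is an assumption, while the subnet and address mirror the allocation table above:

```yaml
networks:
  media-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16

services:
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest   # assumed image tag
    networks:
      media-net:
        ipv4_address: 172.18.0.20                # static IP from the table above
```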

2. Custom Bridge Network
The media-net bridge (br-media) provides:

  • Layer 2 isolation from the default Docker bridge
  • DNS resolution - Containers can resolve each other by service name
  • Multicast support - For service discovery protocols if needed
  • MTU optimization - Can be tuned for jumbo frames on high-throughput networks

3. Subnet Sizing
The /16 CIDR provides 65,534 hosts, which is oversized for 6 services but allows:

  • Horizontal scaling - Add multiple instances of any service for load balancing
  • Service mesh expansion - Room for sidecar proxies (Envoy, Linkerd) if needed
  • Testing environments - Spin up parallel stacks without IP conflicts
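
The headroom claim is easy to verify with Python's `ipaddress` module:

```python
import ipaddress

# Usable hosts in the media-net subnet: total addresses minus
# the network and broadcast addresses.
net = ipaddress.ip_network("172.18.0.0/16")
print(net.num_addresses - 2)  # 65534
```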

3. VPN Gateway Pattern with Gluetun

The VPN gateway implementation uses container network mode sharing, a Docker pattern where one container (qBittorrent) adopts the network stack of another (gluetun). This ensures all qBittorrent traffic is routed through the VPN tunnel with no possibility of leakage.

Container Network Sharing

gluetun:
  networks:
    media-net:
      ipv4_address: 172.18.0.10
  
qbittorrent:
  network_mode: "container:gluetun"
  depends_on:
    gluetun:
      condition: service_healthy

How it works:

  1. gluetun starts and joins media-net with IP 172.18.0.10
  2. gluetun establishes OpenVPN tunnel to ProtonVPN
  3. qBittorrent starts with network_mode: container:gluetun
  4. qBittorrent's network namespace is replaced with gluetun's namespace
  5. All qBittorrent traffic flows through gluetun's interfaces (including VPN tunnel)
  6. qBittorrent has no network stack of its own - it reaches media-net and the internet only through gluetun's interfaces

Key implications:

  • qBittorrent's WebUI is accessible via gluetun's IP: http://172.18.0.10:8080
  • Port mappings are defined on gluetun, not qbittorrent
  • If the VPN tunnel fails, gluetun's iptables rules block all traffic (kill switch)

Kill Switch Implementation

Gluetun implements a fail-secure kill switch using iptables:

# Simplified version of Gluetun's firewall rules
iptables -P OUTPUT DROP                           # Default drop all outbound
iptables -A OUTPUT -o tun0 -j ACCEPT              # Allow VPN tunnel interface
iptables -A OUTPUT -o lo -j ACCEPT                # Allow loopback
iptables -A OUTPUT -d 172.18.0.0/16 -j ACCEPT     # Allow Docker network
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT    # Allow LAN subnet
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT  # Allow OpenVPN handshake

Firewall behavior:

  • If tun0 (VPN interface) goes down, the OUTPUT -o tun0 rule no longer matches
  • All torrent traffic is dropped by the OUTPUT DROP default policy
  • Health checks fail, triggering container restart
  • qBittorrent cannot leak traffic to clearnet

Configuration:

FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24,172.18.0.0/16

This allows:

  • 192.168.1.0/24 - LAN access for WebUI from local network
  • 172.18.0.0/16 - Docker network for API calls from Sonarr/Radarr
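
The effect of FIREWALL_OUTBOUND_SUBNETS can be modeled in a few lines. This is a sketch of the policy, not Gluetun's actual implementation:

```python
import ipaddress

# Subnets permitted to bypass the VPN tunnel, per the env var above.
ALLOWED = [ipaddress.ip_network(s) for s in ("192.168.1.0/24", "172.18.0.0/16")]

def outbound_allowed(dst: str) -> bool:
    """True if traffic to dst may leave outside the tunnel."""
    ip = ipaddress.ip_address(dst)
    return any(ip in net for net in ALLOWED)

print(outbound_allowed("172.18.0.30"))  # True  (Sonarr API call on media-net)
print(outbound_allowed("8.8.8.8"))      # False (must go through tun0 or be dropped)
```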

Health Check Design

Gluetun's health check verifies VPN connectivity:

healthcheck:
  test: ["CMD", "/gluetun-entrypoint", "healthcheck"]
  interval: 30s
  timeout: 10s
  retries: 5
  start_period: 60s

The health check:

  1. Performs DNS resolution via VPN tunnel (not host resolver)
  2. TCP connects to 1.1.1.1:443 and cloudflare.com:443
  3. Verifies packets are routed through tun0 interface
  4. Returns success only if VPN is active and routing traffic

Dependent services wait:

qbittorrent:
  depends_on:
    gluetun:
      condition: service_healthy

qBittorrent won't start until Gluetun's health check passes, preventing torrent activity without VPN protection.


4. Service Orchestration and Dependencies

Docker Compose manages service startup order using health-based dependencies. This prevents race conditions where services attempt API calls before their dependencies are ready.

graph TD
    Gluetun[Gluetun VPN]
    QBT[qBittorrent]
    Prowlarr
    Sonarr
    Radarr
    Jellyseerr
    
    Gluetun -->|healthy| QBT
    Gluetun -->|healthy| Sonarr
    Gluetun -->|healthy| Radarr
    Prowlarr -->|healthy| Sonarr
    Prowlarr -->|healthy| Radarr
    Sonarr -->|healthy| Jellyseerr
    Radarr -->|healthy| Jellyseerr
    
    classDef criticalDep fill:#e74c3c,stroke:#c0392b,stroke-width:2px,color:#fff
    class Gluetun criticalDep

Dependency Graph Analysis

Critical Path: Gluetun → (Sonarr, Radarr)

  • Gluetun is the single point of failure for the automation pipeline
  • If Gluetun fails, Sonarr and Radarr cannot send torrents to qBittorrent
  • This is intentional - prevents clearnet torrent traffic

Indexer Federation: Prowlarr → (Sonarr, Radarr)

  • Prowlarr aggregates multiple torrent indexers (e.g., 1337x, RARBG, Nyaa)
  • Sonarr/Radarr sync indexer configurations via Prowlarr's API
  • Without Prowlarr, Sonarr/Radarr have no search sources

User Interface: (Sonarr, Radarr) → Jellyseerr

  • Jellyseerr acts as a unified request portal
  • It proxies requests to Sonarr (TV) and Radarr (Movies) via their APIs
  • Users don't need direct access to Sonarr/Radarr

Health Check Implementation

Each service exposes a health endpoint:

Prowlarr/Sonarr/Radarr:

healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:9696/ping || exit 1"]  # port 9696 (Prowlarr); 8989 for Sonarr, 7878 for Radarr
  • Hits the /ping API endpoint
  • Returns 200 OK if the service is initialized and database is accessible
  • Fails during startup or database migration

qBittorrent:

healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:8080 || exit 1"]
  start_period: 120s
  • WebUI takes ~90 seconds to start (slow first boot)
  • start_period: 120s prevents false failures during startup

Jellyseerr:

healthcheck:
  test: ["CMD-SHELL", "wget -q --spider http://localhost:5055 || exit 1"]
  • Uses wget instead of curl (Alpine-based image)
  • --spider performs HEAD request (no body download)

Graceful Degradation

The architecture gracefully degrades on partial failures:

| Failed Service | Impact | User Experience |
|---|---|---|
| Gluetun | Download pipeline halted | Existing media still streams, new requests queue |
| Prowlarr | Search unavailable | Manual torrent import still works |
| Jellyseerr | Request portal down | Direct API access to Sonarr/Radarr still possible |
| Plex | Streaming down | Downloads continue, library builds up |

No cascading failures: Docker Compose only restarts the failed container, not its dependents.


5. Data Flow: Request to Stream

This section traces a movie request through the entire pipeline, from user click to Plex playback.

sequenceDiagram
    participant User
    participant Jellyseerr
    participant Radarr
    participant Prowlarr
    participant qBittorrent
    participant Gluetun
    participant Indexers
    participant Storage
    participant Plex
    
    User->>Jellyseerr: Request Movie
    Jellyseerr->>Radarr: Create Movie Entry
    Radarr->>Prowlarr: Search for Release
    Prowlarr->>Indexers: Query Indexers
    Indexers-->>Prowlarr: Return Results
    Prowlarr-->>Radarr: Return Releases
    Radarr->>Radarr: Filter by Quality Profile
    Radarr->>qBittorrent: Send Torrent
    qBittorrent->>Gluetun: Route via VPN
    Gluetun->>Indexers: Download (Encrypted)
    qBittorrent->>Storage: Save to Downloads/
    qBittorrent-->>Radarr: Download Complete
    Radarr->>Storage: Move to Media/Movies/
    Radarr->>Storage: Rename per Format
    Radarr-->>Jellyseerr: Update Status
    Plex->>Storage: Auto-Scan Library
    Plex->>Plex: Match Metadata
    User->>Plex: Stream Movie

Step-by-Step Breakdown

Step 1-2: User Request → Radarr API Call

POST /api/v1/request HTTP/1.1
Host: jellyseerr:5055
Content-Type: application/json

{
  "mediaType": "movie",
  "mediaId": 425274,  // TMDB ID for "Now You See Me: Now You Don't"
  "is4k": false
}

Jellyseerr:

  1. Looks up TMDB ID 425274 to get metadata (title, year, cast)
  2. Calls Radarr API to create movie entry:
POST /api/v3/movie HTTP/1.1
Host: radarr:7878
X-Api-Key: 8354f94ef52e43939315654892023cc5

{
  "title": "Now You See Me: Now You Don't",
  "tmdbId": 425274,
  "year": 2025,
  "qualityProfileId": 6,
  "monitored": true,
  "addOptions": {
    "searchForMovie": true
  }
}
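
The same call can be scripted with Python's standard library; the host, payload, and API key are taken verbatim from the example above (the key appears in this document and is not a working credential):

```python
import json
import urllib.request

# Payload mirrors the POST /api/v3/movie request shown above.
payload = {
    "title": "Now You See Me: Now You Don't",
    "tmdbId": 425274,
    "year": 2025,
    "qualityProfileId": 6,
    "monitored": True,
    "addOptions": {"searchForMovie": True},
}
req = urllib.request.Request(
    "http://radarr:7878/api/v3/movie",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "X-Api-Key": "8354f94ef52e43939315654892023cc5",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the request; it requires a live Radarr.
```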

Step 3-6: Radarr → Prowlarr → Indexers Radarr triggers an automatic search:

GET /api/v1/indexer/all/results?query=now+you+see+me+2025&categories=2000,2010
Host: prowlarr:9696
X-Api-Key: 4b7bd51caa9d4f2a8c4d0e7c8ceaa738

Prowlarr:

  1. Fans out search to 10+ configured indexers in parallel
  2. Normalizes responses (different indexers use different APIs)
  3. Returns unified results:
[
  {
    "guid": "https://1337x.to/torrent/12345/",
    "title": "Now.You.See.Me.Now.You.Dont.2025.2160p.WEB-DL.DDP5.1.H265-BTM",
    "size": 10800000000,
    "seeders": 448,
    "indexer": "1337x"
  }
]

Step 7: Quality Filtering Radarr applies quality profile (ID 6):

{
  "name": "HD - 720p/1080p",
  "cutoff": 4,
  "upgradeAllowed": false,
  "items": [
    { "quality": "WEBDL-1080p", "allowed": true },
    { "quality": "Bluray-1080p", "allowed": true },
    { "quality": "WEBDL-720p", "allowed": true }
  ]
}
  • Filters out 2160p releases (too large)
  • Filters out CAM/TS quality (too low)
  • Selects highest-seeded 1080p WEB-DL
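
The filtering step reduces to a set-membership check followed by a sort on seeders. A toy version (the release data is invented for illustration):

```python
# Hypothetical search results; qualities and seeder counts are illustrative.
releases = [
    {"title": "...2160p.WEB-DL...", "quality": "WEBDL-2160p",  "seeders": 448},
    {"title": "...1080p.WEB-DL...", "quality": "WEBDL-1080p",  "seeders": 312},
    {"title": "...1080p.BluRay...", "quality": "Bluray-1080p", "seeders": 120},
    {"title": "...HDCAM...",        "quality": "CAM",          "seeders": 900},
]
allowed = {"WEBDL-1080p", "Bluray-1080p", "WEBDL-720p"}  # from the quality profile

# Drop disallowed qualities, then take the best-seeded survivor.
candidates = [r for r in releases if r["quality"] in allowed]
best = max(candidates, key=lambda r: r["seeders"])
print(best["quality"])  # WEBDL-1080p
```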

Step 8-9: Torrent Submission via VPN Radarr calls qBittorrent API:

POST /api/v2/torrents/add HTTP/1.1
Host: 172.18.0.10:8080
Content-Type: application/x-www-form-urlencoded

urls=magnet:?xt=urn:btih:abc123...&category=radarr&paused=false

qBittorrent:

  1. Resolves magnet link to torrent metadata via DHT
  2. Connects to peers through Gluetun's VPN tunnel (all TCP/UDP flows via tun0)
  3. Downloads pieces to /data/Downloads/radarr/

Step 10-12: File Import and Renaming qBittorrent webhooks Radarr on completion:

POST /api/v3/command HTTP/1.1
Host: radarr:7878

{
  "name": "DownloadedMoviesScan",
  "path": "/data/Downloads/radarr/"
}

Radarr:

  1. Scans download directory for completed files
  2. Verifies quality matches expected profile
  3. Atomic move to final location:
mv /data/Downloads/radarr/Now.You.See.Me.Now.You.Dont.2025.2160p.WEB-DL.DDP5.1.H265-BTM/movie.mkv \
   /data/Media/Movies/Now You See Me - Now You Don't (2025)/Now You See Me - Now You Don't (2025).mkv
  4. Deletes leftover files (NFO, samples)
  5. Updates Jellyseerr via webhook

Step 13-15: Plex Library Scan Plex monitors /data/Media/Movies/ for file changes (inotify):

inotify_add_watch(fd, "/data/Media/Movies/", IN_CREATE | IN_MOVED_TO);

On detecting new file:

  1. Runs Plex Media Scanner on the parent directory
  2. Extracts video metadata (codec, resolution, duration)
  3. Queries TMDB API for poster, synopsis, cast
  4. Generates video thumbnails and preview clips
  5. Adds to searchable library database

Step 16: User Streams via Plex User opens Plex client, which:

  1. Fetches library updates via /library/sections/1/all
  2. Displays movie with metadata
  3. On playback, Plex transcodes if needed:
# Plex Transcoder command (simplified)
/usr/lib/plexmediaserver/Plex\ Transcoder \
  -i "/data/Media/Movies/Now You See Me - Now You Don't (2025)/movie.mkv" \
  -codec:v:0 h264 -profile:v:0 high -level:v:0 4.1 \
  -maxrate:v:0 4000k -bufsize:v:0 8000k \
  -f hls /transcode/session-abc123/stream.m3u8
  4. Streams via HLS over HTTP

6. Storage Architecture

The storage layer uses Docker bind mounts to map host directories into containers, enabling shared access to media files.

Volume Mount Strategy

Host: /volume1/plex-infra/
├── Downloads/                    # qBittorrent download directory
│   ├── radarr/                   # Movie downloads (managed by Radarr)
│   └── sonarr/                   # TV downloads (managed by Sonarr)
├── Media/                        # Final media library
│   ├── Movies/                   # Plex Movies library
│   │   └── [Movie Title (Year)]/ # Per-movie folders
│   │       └── [Movie Title (Year)].mkv
│   └── TV/                       # Plex TV library
│       └── [Show Title]/         # Per-show folders
│           └── Season XX/
│               └── [Show] - sXXeYY - [Episode].mkv
└── config/                       # Application data
    ├── gluetun/
    ├── prowlarr/
    ├── radarr/
    └── sonarr/

Container perspective:

volumes:
  - /volume1/plex-infra:/data        # Radarr, Sonarr, qBittorrent
  - ./config/radarr:/config          # Radarr config

Inside Radarr container:

  • /data/Downloads/ is where qBittorrent writes completed downloads
  • /data/Media/Movies/ is the final import destination
  • No mv across filesystems (same mount point = atomic move)

Atomic Move Operation

Why atomic moves matter: Traditional copy-then-delete is non-atomic:

# Non-atomic (BAD)
cp /data/Downloads/movie.mkv /data/Media/Movies/movie.mkv  # Slow, I/O heavy
rm /data/Downloads/movie.mkv                                # Leaves window where file exists twice

Atomic move (same filesystem):

# Atomic move (GOOD)
mv /data/Downloads/movie.mkv /data/Media/Movies/movie.mkv

How it works:

  1. Kernel updates inode's parent directory pointer
  2. No data blocks are copied (instant operation)
  3. No window where file is partially written
  4. Plex sees complete file immediately (no partial scans)

Implementation in Radarr: Radarr uses rename(2) syscall:

rename("/data/Downloads/radarr/movie.mkv", "/data/Media/Movies/Title (Year)/movie.mkv");

If source and destination are on different filesystems, Radarr falls back to copy-then-delete.
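
That move-with-fallback behavior can be sketched in a few lines of Python; `os.rename` maps to the rename(2) syscall and raises when source and destination sit on different filesystems:

```python
import os
import shutil

def import_movie(src: str, dst: str) -> None:
    """Move src to dst atomically when possible, copy+delete otherwise."""
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    try:
        os.rename(src, dst)    # atomic: same filesystem, rename(2)
    except OSError:
        shutil.move(src, dst)  # cross-filesystem fallback: copy then delete
```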

File Permissions and Ownership

Containers run as non-root user via PUID and PGID:

PUID=1000  # User 'joey'
PGID=10    # Group 'admin'

Inside containers:

# Container sees files owned by UID 1000
$ ls -l /data/Media/Movies/
drwxr-xr-x 2 joey admin 4096 Dec 28 03:00 Now You See Me (2025)/

On host:

# Host sees same files
$ ls -l /volume1/plex-infra/Media/Movies/
drwxr-xr-x 2 joey admin 4096 Dec 28 03:00 Now You See Me (2025)/

Why this matters:

  • qBittorrent writes files as UID 1000
  • Radarr moves files as UID 1000 (no permission errors)
  • Plex reads files as UID 1000 (no access denied)
  • All services share the same user context

I/O Optimization Considerations

Direct I/O for large files: Plex and qBittorrent can benefit from O_DIRECT flag to bypass page cache for multi-GB files:

int fd = open("/data/Media/movie.mkv", O_RDONLY | O_DIRECT);
  • Reduces memory pressure on NAS
  • Prevents cache pollution from streaming workloads
  • Requires aligned reads (512-byte or 4K blocks)

XFS allocation groups (if using XFS):

mkfs.xfs -d agcount=32 /dev/sda1
  • Spreads metadata across 32 allocation groups
  • Enables parallel I/O for concurrent downloads
  • Reduces lock contention in directory operations

7. Security Posture

Network Segmentation

Isolation via Docker networks:

networks:
  media-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16

Attack surface reduction:

  • Containers on media-net cannot directly reach host network
  • Inter-container traffic flows through Docker proxy (inspectable via tcpdump)
  • Containers on different networks cannot communicate (Docker firewall rules)

Example iptables rules (auto-generated by Docker):

iptables -A DOCKER-ISOLATION-STAGE-1 \
  -i br-media -o docker0 -j DROP

This prevents containers on media-net from reaching containers on default bridge (docker0).

VPN Kill Switch Mechanism

Defense in depth:

  1. Gluetun iptables rules - Block non-VPN traffic at container level
  2. Docker network isolation - qBittorrent has no route to clearnet
  3. Health check termination - If VPN fails, Docker restarts container (stops torrents)

Leak test:

# Inside qBittorrent container
curl ifconfig.me
# Returns VPN exit node IP, never host IP

API Key Rotation and Management

Current implementation:

RADARR_API_KEY=8354f94ef52e43939315654892023cc5
  • Stored in .env file (plain text)
  • Mounted as environment variables

Production improvements:

  • Use Docker secrets (encrypted at rest):
secrets:
  radarr_api_key:
    external: true

services:
  radarr:
    secrets:
      - radarr_api_key
  • Rotate keys quarterly
  • Scope keys per service (Jellyseerr gets read-only Radarr key)

Container Isolation Best Practices

Read-only root filesystem:

services:
  prowlarr:
    read_only: true
    tmpfs:
      - /tmp
  • Prevents malware from persisting in container filesystem
  • Forces all writes to volumes (easier to audit)

Capability dropping:

services:
  prowlarr:
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
  • Drops all Linux capabilities by default
  • Only grants necessary caps (user switching)

AppArmor/SELinux profiles:

services:
  qbittorrent:
    security_opt:
      - apparmor=docker-default
  • Restricts syscalls (e.g., block mount, reboot)
  • Prevents container escape exploits

8. Resource Management

CPU and Memory Limits Rationale

services:
  gluetun:
    cpus: '0.5'
    mem_limit: 1g

Why limit resources?

  1. Prevent noisy neighbor - qBittorrent can't starve Plex during heavy downloads
  2. Predictable performance - Sonarr API latency stays consistent
  3. OOM protection - Container dies instead of crashing entire NAS

Limit sizing logic:

| Service | CPU | Memory | Justification |
|---|---|---|---|
| Gluetun | 0.5 | 1 GB | VPN encryption is CPU-bound, minimal memory |
| qBittorrent | 2.0 | 4 GB | Handles 100+ torrents, large peer tables |
| Prowlarr | 1.0 | 1 GB | I/O-bound (SQLite), infrequent queries |
| Sonarr | 1.5 | 2 GB | Episode matching is CPU-heavy, large DB |
| Radarr | 1.5 | 2 GB | Same as Sonarr |
| Jellyseerr | 1.0 | 1 GB | Lightweight Node.js app |
| Plex | unlimited | unlimited | Transcoding needs all available resources |

Plex transcoding bottleneck:

# 4K → 1080p transcode can use 8+ CPU cores
Plex Transcoder:
  CPU: 800% (8 cores)
  Memory: 6 GB (buffer + decoder state)

Plex intentionally has no limits to prioritize user experience.

Resource Contention Prevention

Docker CPU scheduler (CFS):

cpus: '1.5'

Translates to:

cpu.cfs_quota_us = 150000   # 1.5 * 100ms period
cpu.cfs_period_us = 100000
  • Container gets max 150ms of CPU time per 100ms period
  • Excess cycles go to other containers
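
The conversion Docker performs is a single multiplication:

```python
def cfs_quota_us(cpus: float, period_us: int = 100_000) -> int:
    """Translate a Compose `cpus` value into a CFS quota (microseconds per period)."""
    return int(cpus * period_us)

print(cfs_quota_us(1.5))  # 150000
print(cfs_quota_us(0.5))  # 50000
```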

Memory cgroup limits:

mem_limit: 2g

Triggers OOM killer when RSS exceeds 2 GB:

oom_kill_process(radarr) -> SIGKILL

Docker restarts container automatically.

Monitoring and Observability Hooks

Container metrics (Prometheus exporter):

services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "8081:8080"

Exposes metrics:

container_cpu_usage_seconds_total{name="radarr"} 1234.56
container_memory_usage_bytes{name="radarr"} 512000000

Application metrics:

  • Radarr/Sonarr expose Prometheus endpoint on /metrics
  • qBittorrent logs JSON events to stdout (parseable by Loki)
  • Gluetun logs VPN reconnect events

Alerting thresholds:

# Alert if qBittorrent uses >90% of 4GB limit
- alert: QBittorrentHighMemory
  expr: container_memory_usage_bytes{name="qbittorrent"} > 3600000000
  for: 5m

9. Operational Patterns

Health Check Design

Best practices implemented:

  1. Fast failure detection:
interval: 30s
timeout: 10s
retries: 5
  • Check every 30s
  • Fail if no response in 10s
  • Restart after 5 consecutive failures (2.5 minutes)
  2. Startup grace period:
start_period: 120s
  • Don't count failures in first 2 minutes
  • Allows slow initialization (DB migrations)
  3. Lightweight checks:
curl -f http://localhost:9696/ping
  • /ping returns 200 OK without DB query
  • Avoids false failures during high load

Automated Recovery Mechanisms

Docker restart policies:

restart: unless-stopped
  • Restart on crash (exit code ≠ 0)
  • Restart on OOM kill
  • Don't restart if manually stopped (docker stop)

Exponential backoff: Docker waits progressively longer between restarts:

Attempt 1: Immediate
Attempt 2: 1 second
Attempt 3: 2 seconds
Attempt 4: 4 seconds
...
Attempt N: min(2^N seconds, 1 minute)
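
That schedule (immediate first restart, then doubling delays capped at one minute) can be generated as follows; the exact timings are a simplification of Docker's policy:

```python
def restart_delays(attempts: int, cap: int = 60) -> list:
    """Delay in seconds before each restart attempt: 0, 1, 2, 4, ... capped."""
    delays, d = [0], 1  # first restart is immediate
    while len(delays) < attempts:
        delays.append(min(d, cap))
        d *= 2
    return delays

print(restart_delays(8))  # [0, 1, 2, 4, 8, 16, 32, 60]
```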

Cascading recovery: If Gluetun crashes:

  1. qBittorrent health check fails (can't reach localhost:8080)
  2. Docker restarts qBittorrent
  3. qBittorrent waits for Gluetun to become healthy
  4. Pipeline resumes

Management Scripts and Automation

Stack lifecycle:

# manage-stack.sh
./manage-stack.sh start     # docker-compose up -d
./manage-stack.sh stop      # docker-compose stop
./manage-stack.sh restart   # docker-compose restart
./manage-stack.sh logs radarr  # docker-compose logs -f radarr

Health monitoring:

# healthcheck-services.sh
#!/bin/bash
for service in gluetun prowlarr sonarr radarr jellyseerr; do
  health=$(docker inspect --format='{{.State.Health.Status}}' $service)
  echo "$service: $health"
done

Failed torrent cleanup:

# cleanup-failed-torrents.sh
curl -X POST http://localhost:8080/api/v2/torrents/delete \
  -d 'hashes=...' -d 'deleteFiles=true'

Production Maintenance Workflows

Update procedure:

# 1. Pull latest images
docker-compose pull

# 2. Recreate containers (zero-downtime for stateless services)
docker-compose up -d

# 3. Verify health
./healthcheck-services.sh

Backup strategy:

# Backup application configs (SQLite DBs)
tar -czf backup-$(date +%Y%m%d).tar.gz ./config/

# Media files are immutable (no backup needed, re-download if lost)

Log rotation:

services:
  radarr:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
  • Keep max 3 files × 10 MB = 30 MB per service
  • Prevents /var/lib/docker from filling disk

Appendix: Quick Reference

Service Endpoints

| Service | URL | Purpose |
|---|---|---|
| qBittorrent | http://nas-ip:8080 | Torrent management |
| Prowlarr | http://nas-ip:9696 | Indexer config |
| Sonarr | http://nas-ip:8989 | TV automation |
| Radarr | http://nas-ip:7878 | Movie automation |
| Jellyseerr | http://nas-ip:5055 | User requests |
| Plex | http://nas-ip:32400/web | Media streaming |

API Authentication

All services use API keys in HTTP headers:

curl http://radarr:7878/api/v3/movie \
  -H "X-Api-Key: 8354f94ef52e43939315654892023cc5"

Common Troubleshooting Commands

# Check VPN status
docker exec gluetun curl ifconfig.me

# View Radarr logs
docker-compose logs -f --tail 100 radarr

# Force library scan
curl -X POST "http://localhost:32400/library/sections/1/refresh?X-Plex-Token=..."

# Restart failed service
docker-compose restart sonarr

End of Technical Deep Dive

This architecture demonstrates production-ready patterns for containerized media automation: defense-in-depth security with VPN isolation, health-based orchestration for reliable startup, atomic file operations for data integrity, and resource limits for predictable performance. The design prioritizes operational simplicity while maintaining security and reliability.


10. Quality Management and Size Filtering

One of the most critical aspects of automated media acquisition is quality profile configuration. Without proper filtering, the automation system will download massive 25GB+ Remux files that fill disk space and provide marginal quality improvements over compressed releases.

The Problem: Format-Only Filtering

Default Radarr configuration (BAD):

{
  "name": "HD - 720p/1080p",
  "items": [
    { "quality": "WEBDL-1080p", "allowed": true },
    { "quality": "Bluray-1080p", "allowed": true },
    { "quality": "WEBDL-720p", "allowed": true }
  ]
}

Issues with this approach:

  • Limited format options - Only WEB-DL and Bluray sources
  • No size protection - Can download 20GB+ 1080p Bluray Remux
  • Misses good releases - Ignores HDTV and WEBRip (often smaller, similar quality)
  • Format-centric thinking - Focuses on source instead of outcome (file size + quality)

Real-world consequence: A search for "Epic Movie (2007)" returned a 13.8 GB Remux-1080p file when a 2 GB Bluray-1080p encode would have been nearly identical in quality.

The Solution: Size-Based Quality Definitions

Radarr supports per-quality size limits via the /api/v3/qualitydefinition endpoint. These limits are specified in MB per minute of runtime, then multiplied by movie length.

Optimal configuration:

| Quality | Max Size (MB/min) | Typical 2hr Movie | Use Case |
|---|---|---|---|
| WEBDL-720p | 42 | ~5 GB | Fast downloads, good quality |
| WEBRip-720p | 42 | ~5 GB | Scene releases, wide availability |
| Bluray-720p | 42 | ~5 GB | Best 720p quality |
| HDTV-720p | 42 | ~5 GB | TV broadcasts, timely releases |
| WEBDL-1080p | 68 | ~8 GB | Streaming service rips |
| WEBRip-1080p | 68 | ~8 GB | Scene releases |
| Bluray-1080p | 68 | ~8 GB | Best 1080p quality |
| HDTV-1080p | 68 | ~8 GB | TV broadcasts |
| Remux-1080p | 0 | ❌ BLOCKED | Uncompressed (15-40 GB) |
| Bluray-2160p | 0 | ❌ BLOCKED | 4K content (30-80 GB) |

How Radarr calculates limits:

movie_runtime_minutes = 120  # Example: 2-hour movie
quality_max_mb_per_min = 68  # WEBDL-1080p
max_file_size = movie_runtime_minutes * quality_max_mb_per_min
# Result: 120 * 68 = 8160 MB (~8 GB)

A 90-minute comedy maxes out at 6.1 GB, while a 180-minute epic can reach 12.2 GB (both using the same profile).
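
As a reusable helper mirroring the calculation above:

```python
def max_file_size_mb(runtime_min: int, mb_per_min: int) -> int:
    """Radarr-style per-movie size cap: MB/min limit scaled by runtime."""
    return runtime_min * mb_per_min

print(max_file_size_mb(120, 68))  # 8160 (~8 GB)
print(max_file_size_mb(90, 68))   # 6120 (~6.1 GB)
print(max_file_size_mb(180, 68))  # 12240 (~12.2 GB)
```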

Implementation via API

Step 1: Update Quality Definitions

curl -X PUT http://localhost:7878/api/v3/qualitydefinition/20 \
  -H "X-Api-Key: 8354f94ef52e43939315654892023cc5" \
  -H "Content-Type: application/json" \
  -d '{
    "quality": {"id": 3, "name": "WEBDL-1080p"},
    "title": "WEBDL-1080p",
    "weight": 18,
    "minSize": 0,
    "maxSize": 68,
    "preferredSize": 60
  }'

Step 2: Enable Additional Formats

curl -X PUT http://localhost:7878/api/v3/qualityprofile/6 \
  -H "X-Api-Key: 8354f94ef52e43939315654892023cc5" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "HD - 720p/1080p",
    "upgradeAllowed": false,
    "cutoff": 4,
    "items": [
      {"quality": {"id": 4, "name": "HDTV-720p"}, "allowed": true},
      {"quality": {"id": 5, "name": "WEBDL-720p"}, "allowed": true},
      {"quality": {"id": 14, "name": "WEBRip-720p"}, "allowed": true},
      {"quality": {"id": 6, "name": "Bluray-720p"}, "allowed": true},
      {"quality": {"id": 9, "name": "HDTV-1080p"}, "allowed": true},
      {"quality": {"id": 3, "name": "WEBDL-1080p"}, "allowed": true},
      {"quality": {"id": 15, "name": "WEBRip-1080p"}, "allowed": true},
      {"quality": {"id": 7, "name": "Bluray-1080p"}, "allowed": true},
      {"quality": {"id": 30, "name": "Remux-1080p"}, "allowed": false},
      {"quality": {"id": 19, "name": "Bluray-2160p"}, "allowed": false}
    ]
  }'

Why Expand Format Options?

Before (restrictive config):

Search for "The Matrix (1999)"
├─ WEBDL-1080p: 0 results (not on streaming)
├─ Bluray-1080p: 1 result (25 GB Remux - too large)
└─ WEBDL-720p: 0 results
❌ Result: No downloads (or accepts oversized Remux)

After (size-limited, multi-format):

Search for "The Matrix (1999)"
├─ WEBDL-1080p: 0 results (not on streaming)
├─ WEBRip-1080p: 12 results (5-7 GB, good seeding)
├─ Bluray-1080p: 18 results (4-8 GB, filtered by size limit)
├─ HDTV-1080p: 3 results (broadcast rip, 6 GB)
├─ WEBDL-720p: 8 results (3-4 GB)
└─ WEBRip-720p: 22 results (2-5 GB)
✅ Result: 63 potential matches, picks highest-seeded under 8 GB

Key insight: Format (WEB-DL vs WEBRip vs Bluray) matters less than:

  1. File size (avoid waste)
  2. Seeders (download speed)
  3. Resolution (720p vs 1080p)

By enabling all formats and using size limits, Radarr gets more options to find the best balance.

Release Scoring Logic

Radarr ranks releases by:

score = (
    quality_weight * 100 +           # 1080p > 720p
    preferred_word_bonus +           # "YIFY" or "Tigole" boost
    indexer_priority_bonus +         # Trusted indexers rank higher
    seeder_count_bonus               # More seeders = better
) - size_penalty                     # Closer to preferred size = higher score

Example calculation:

Release: The.Matrix.1999.1080p.BluRay.x265-PSA (6.8 GB, 450 seeders)

quality_weight = 1080 (Bluray-1080p)
preferred_word_bonus = 0
indexer_priority_bonus = 10
seeder_count_bonus = 45 (450 seeders / 10)
size_penalty = 2 (6.8 GB is close to preferred 7.2 GB)

Final score = (1080 * 100) + 0 + 10 + 45 - 2 = 108,053

Radarr downloads the highest-scoring release that passes all filters (quality profile + size limit).
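The worked example can be reproduced by transcribing the simplified formula directly. The weights and bonuses here are the illustrative numbers from the example above, not Radarr's actual internal constants:

```python
def release_score(quality_weight: int, preferred_word_bonus: int,
                  indexer_priority_bonus: int, seeders: int,
                  size_penalty: int) -> int:
    """Simplified release-ranking score from the formula above."""
    seeder_count_bonus = seeders // 10  # 450 seeders -> 45
    return (quality_weight * 100
            + preferred_word_bonus
            + indexer_priority_bonus
            + seeder_count_bonus
            - size_penalty)

# The.Matrix.1999.1080p.BluRay.x265-PSA (6.8 GB, 450 seeders)
print(release_score(1080, 0, 10, 450, 2))  # -> 108053
```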

Configuration File vs. API

Important: Quality profiles are stored in Radarr's SQLite database (radarr.db), not in XML config files.

# Radarr config structure
config/radarr/
├── config.xml                 # API keys, ports, auth settings
├── radarr.db                  # Quality profiles, movies, indexers
├── logs/                      # Application logs
└── Backups/                   # Auto-generated DB backups

Why database instead of files?

  • Atomic updates - SQLite transactions prevent corruption
  • Relational integrity - Movies reference quality profiles by ID
  • Migration support - Schema upgrades via ALTER TABLE
  • Backup-friendly - Single .db file contains all settings

Implication for version control:

  • Don't commit radarr.db to git (contains secrets + binary format)
  • Do commit setup scripts that configure Radarr via API
  • Do commit documentation of recommended quality profiles
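Because everything lives in radarr.db, a consistent manual backup should use SQLite's online backup API rather than copying the file while Radarr may be writing to it. A minimal sketch (the paths in the example call are illustrative):

```python
import sqlite3

def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Snapshot a live SQLite database via the online backup API,
    which yields a consistent copy even if the app is mid-write."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:
            src.backup(dest)  # sqlite3.Connection.backup, Python 3.7+
    finally:
        src.close()
        dest.close()

# e.g. backup_sqlite("config/radarr/radarr.db", "config/radarr/Backups/manual.db")
```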

Automated Configuration Script

For reproducible deployments:

#!/usr/bin/env python3
import requests

API_KEY = "8354f94ef52e43939315654892023cc5"
BASE_URL = "http://localhost:7878/api/v3"
HEADERS = {"X-Api-Key": API_KEY}

# 1. Set size limits for all HD qualities
quality_limits = {
    4: 42,   # HDTV-720p
    5: 42,   # WEBDL-720p
    14: 42,  # WEBRip-720p
    6: 42,   # Bluray-720p
    9: 68,   # HDTV-1080p
    3: 68,   # WEBDL-1080p
    15: 68,  # WEBRip-1080p
    7: 68,   # Bluray-1080p
    30: 0,   # Remux-1080p (blocked)
    19: 0,   # Bluray-2160p (blocked)
}

# Fetch all current definitions once (one GET, not one per quality)
resp = requests.get(f"{BASE_URL}/qualitydefinition", headers=HEADERS)
all_defs = resp.json()

for quality_id, max_mb_per_min in quality_limits.items():
    quality_defs = [q for q in all_defs if q['quality']['id'] == quality_id]

    if quality_defs:
        qd = quality_defs[0]
        qd['maxSize'] = max_mb_per_min
        # Keep preferredSize non-negative for blocked qualities (max 0)
        qd['preferredSize'] = max(max_mb_per_min - 8, 0)

        # Update definition
        requests.put(
            f"{BASE_URL}/qualitydefinition/{qd['id']}",
            headers=HEADERS,
            json=qd
        )
        print(f"✓ Set {qd['quality']['name']} max: {max_mb_per_min} MB/min")

# 2. Update HD profile to allow all formats
profile_id = 6  # "HD - 720p/1080p"
resp = requests.get(f"{BASE_URL}/qualityprofile/{profile_id}", headers=HEADERS)
profile = resp.json()

# Enable all HD formats, disable Remux/4K
for item in profile['items']:
    if 'quality' in item:
        quality_id = item['quality']['id']
        item['allowed'] = quality_id not in [30, 31, 19, 16]  # Block Remux/4K
    elif 'items' in item:  # Grouped qualities (e.g., "WEB 720p")
        for sub in item['items']:
            sub['allowed'] = sub['quality']['id'] not in [30, 31, 19, 16]
        item['allowed'] = any(s['allowed'] for s in item['items'])

requests.put(f"{BASE_URL}/qualityprofile/{profile_id}", headers=HEADERS, json=profile)
print(f"✓ Updated profile: {profile['name']}")

Usage:

./setup-radarr-quality.py
# ✓ Set HDTV-720p max: 42 MB/min
# ✓ Set WEBDL-1080p max: 68 MB/min
# ...
# ✓ Updated profile: HD - 720p/1080p

Real-World Results

Test case: Triggering a search for 101 missing movies after configuration changes.

Before (format-restricted, no size limits):

Movies found: 1
Queue status: 1 downloading
Issues: Many movies had no matches (format too restrictive)

After (multi-format, 8GB size limit):

Movies found: 19
Queue status: 10 downloading, 9 queued
File sizes: 0.8 GB - 7.8 GB (all under limit except 1 legacy Remux)
Formats: HDTV-1080p, WEBRip-1080p, Bluray-1080p, WEBDL-720p

Improvement:

  • 19x more results from expanding format options
  • 100% under size limit for new downloads
  • Faster downloads due to higher seeder counts on popular formats

Sonarr Quality Configuration (TV Shows)

Sonarr uses the same quality definition system, but with different size expectations (per-episode vs per-movie).

Current Sonarr config:

HDTV-720p:     Max 4.88 GB/episode (~5 GB for 40-min show)
WEBDL-720p:    Max 5.08 GB/episode
WEBRip-720p:   Max 5.08 GB/episode
Bluray-720p:   Max 5.08 GB/episode
HDTV-1080p:    Max 4.88 GB/episode
WEBDL-1080p:   Max 5.08 GB/episode
WEBRip-1080p:  Max 5.08 GB/episode
Bluray-1080p:  Max 6.05 GB/episode

Assessment: Already optimized - Sonarr has:

  • All major HD formats enabled
  • Reasonable per-episode size limits (5-6 GB)
  • Remux formats blocked

No changes needed for TV automation.

Key Takeaways

  1. Size limits > Format restrictions

    • Limit by file size (8GB for movies, 5GB for episodes)
    • Enable all reasonable formats (HDTV, WEB-DL, WEBRip, Bluray)
    • Block only wasteful formats (Remux, 4K)
  2. Quality profiles are in the database

    • radarr.db contains all quality settings
    • Use API for configuration changes
    • Backup .db file, don't commit to git
  3. Automation finds more matches

    • Multi-format support → 19x more results
    • Size limits → No disk waste
    • Seeder-based selection → Faster downloads
  4. Configuration is reproducible

    • API-based setup scripts
    • Documented quality profiles
    • Automated deployment for new instances