You are a Senior Security Engineer + Staff Backend Architect + Open Source Maintainer performing a pre-public-release audit of a Dockerized FastAPI backend called Stori Composer.
Your job is to identify:
- Security risks
- Operational risks
- OSS reputation risks
- Architectural anti-patterns
- Anything that would make experienced engineers say:
“This repo wasn’t ready to be public.”
This is not a light code review — treat it as a production-minded deep audit.
This repository will soon be:
- Open sourced publicly
- The core backend of a future paid service
- Forked by competitors
- Studied by security researchers
We must ensure:
- ✅ No secrets exposed
- ✅ No obvious attack vectors
- ✅ No amateur patterns
- ✅ Professional OSS hygiene
- ✅ Safe Docker + infra defaults
- ✅ Clean separation between dev and prod
Stack (as described):
- FastAPI backend
- MCP tool server (stdio + HTTP)
- Streaming SSE endpoints
- WebSocket connection to the DAW (Swift app)
- Postgres (Docker)
- Qdrant (vector DB)
- nginx reverse proxy
- Cloud LLM via OpenRouter
- Ubuntu production environment
Produce a full OSS readiness report with:
- 🔴 Critical issues (must fix before open source)
- 🟠 Strong recommendations
- 🟡 Professional polish improvements
- 🟢 Things done correctly
Also include a short “release confidence” summary (e.g., “Ready after 3 critical fixes”).
- Identify the entrypoints (FastAPI app creation, routers, middleware, auth).
- Identify where MCP tools are defined and how they are executed/forwarded.
- Identify how env/config are loaded and validated.
- Identify the docker-compose topology (services, networks, volumes, exposed ports).
- Identify any deploy scripts or cloud-specific instructions.
Deliverables:
- A short “map of the system” section in your report.
Search the entire repo for:
- Hardcoded API keys/tokens (OpenRouter, HF, etc.)
- JWT secrets / signing keys
- DB passwords / connection strings
- OAuth credentials
- Private URLs or internal IPs
- `.env` leaks (actual values)
- TLS material: `.pem`, `.key`, `.crt`, `.p12`
- “temporary” or commented secrets in code/docs/tests
- CI configs that might leak secrets
Also check:
- `.env.example` includes only placeholders and safe defaults
- logs don’t print secrets
- exception traces don’t leak headers/tokens
If found:
- Explain the exact risk
- Provide a concrete fix (and where to implement it)
- Suggest a prevention measure (secret scanner, pre-commit hook, CI check)
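As a reference for the prevention measure, a minimal pre-commit sketch wiring in gitleaks (the `rev` shown is a placeholder; pin it to the current release before use):

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # placeholder; pin to the latest tagged release
    hooks:
      - id: gitleaks
```

Run `pre-commit install` once per clone so the scan fires before every commit.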
Audit:
- Token generation and verification
- Expiry validation
- Signature verification
- Algorithm safety and configuration
- Authorization logic for sensitive endpoints/tools
Look for:
- missing expiration checks
- trusting user input for roles/scopes
- weak secret defaults
- “accept anything” auth bypass patterns
- insecure header parsing
Provide:
- Any recommended JWT settings (exp, aud, iss, leeway)
- Guidance for rotation and local vs prod secrets
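As a baseline for what “correct” verification looks like, a self-contained HS256 sketch using only the standard library (function names are illustrative; production code should use a maintained library such as PyJWT, but must enforce the same checks):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_encode(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(claims).encode())
    sig = _b64url_encode(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify_hs256(token: str, secret: bytes, *, aud: str, iss: str, leeway: int = 30) -> dict:
    header_b64, body_b64, sig_b64 = token.encode().split(b".")
    # 1. Constant-time signature check before trusting any claim.
    expected = _b64url_encode(
        hmac.new(secret, header_b64 + b"." + body_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    # 2. Pin the algorithm; never honor the token's own alg header (blocks alg=none).
    if json.loads(_b64url_decode(header_b64)).get("alg") != "HS256":
        raise ValueError("unexpected alg")
    claims = json.loads(_b64url_decode(body_b64))
    # 3. A token without exp is invalid, not eternal; allow small clock leeway.
    exp = claims.get("exp")
    if exp is None or time.time() > exp + leeway:
        raise ValueError("expired or missing exp")
    if claims.get("aud") != aud or claims.get("iss") != iss:
        raise ValueError("audience/issuer mismatch")
    return claims
```

Any finding where one of these three checks is skipped (or where `alg` is read from the token) is a 🔴 item.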
Review all user inputs:
- FastAPI endpoints
- MCP tool inputs
- Streaming endpoints (SSE)
- WebSocket messages
- File/asset download endpoints (UUID headers, etc.)
Look for:
- unsanitized input
- path traversal
- shell execution or subprocess usage
- unsafe file handling
- command injection via tool calls
- JSON parsing hazards
- prompt injection hazards where relevant (tool selection, RAG)
For each finding:
- File + line references
- Risk explanation
- Fix recommendation
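For the path-traversal class specifically, a minimal guard to recommend (the asset root and function name are hypothetical; the pattern is resolve-then-containment-check):

```python
from pathlib import Path

ASSETS_ROOT = Path("/srv/stori/assets").resolve()  # hypothetical asset directory

def safe_asset_path(user_supplied: str) -> Path:
    """Resolve a client-supplied filename and refuse anything outside ASSETS_ROOT."""
    candidate = (ASSETS_ROOT / user_supplied).resolve()
    # Path.is_relative_to (Python 3.9+) rejects ../ traversal and absolute paths.
    if not candidate.is_relative_to(ASSETS_ROOT):
        raise PermissionError(f"path escapes asset root: {user_supplied!r}")
    return candidate
```

Note that joining an absolute user path (`/etc/passwd`) replaces the root in `pathlib`, so the containment check after `resolve()` is the part that actually does the work.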
Audit:
- Dockerfiles
- docker-compose.yml
- image pinning and base images
- user privileges (root vs non-root)
- unnecessary ports exposed publicly
- volumes that expose host paths or sensitive dirs
- missing read-only mounts where appropriate
- missing healthchecks
- permissive network configs
- secrets passed via build args
- use of `latest` tags
Recommend:
- non-root containers
- minimal base images
- pinned versions/digests
- multi-stage builds where applicable
- production vs local compose separation
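A sketch of the shape these recommendations add up to (base image tag, module path `app.main:app`, and `/health` endpoint are placeholders; pin images by digest in the real fix):

```dockerfile
# Illustrative hardened Dockerfile; tags and paths are assumptions.
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

FROM python:3.12-slim
# Non-root runtime user; never pass secrets via ARG/ENV at build time.
RUN useradd --create-home --uid 10001 appuser
COPY --from=builder /install /usr/local
COPY --chown=appuser:appuser . /app
USER appuser
WORKDIR /app
EXPOSE 8000
HEALTHCHECK CMD python -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/health')"
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```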
Review nginx config(s) for:
- security headers (HSTS, X-Content-Type-Options, etc.)
- sane request size limits
- rate limiting / basic abuse protection
- websocket proxy correctness
- SSE proxy correctness (buffering off, timeouts)
- TLS assumptions for prod
- upstream timeouts / keepalive
Flag anything unsafe or missing.
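For the SSE and WebSocket proxy points, the baseline to check against (upstream name and locations are assumptions):

```nginx
# Illustrative nginx sketch; upstream/location names are placeholders.
location /v1/stream {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_buffering off;            # required for SSE: flush events immediately
    proxy_cache off;
    proxy_read_timeout 3600s;       # long-lived streams need a long read timeout
    add_header X-Accel-Buffering no;
}
location /ws {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade passthrough
    proxy_set_header Connection "upgrade";
}
```

Buffered SSE (`proxy_buffering on`, the default) silently breaks streaming, so its absence is a functional bug as well as a polish issue.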
Audit Postgres usage and config:
- default credentials
- exposed ports (should be internal by default)
- persistence volumes
- migrations and schema safety
- backups assumptions (docs + scripts)
Code review:
- SQL injection risks
- raw SQL usage
- unbounded queries / N+1 patterns
- connection pooling and timeouts
Deliver:
- Recommended production baseline (least privilege user, backups strategy, no public exposure)
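When flagging raw SQL, recommend bound parameters rather than string interpolation. A stand-in demonstration using stdlib `sqlite3` (the same placeholder principle applies to the Postgres driver, e.g. psycopg/asyncpg; table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO tracks (name) VALUES (?)", ("drums",))

user_input = "drums' OR '1'='1"

# Unsafe: f-string interpolation lets the input rewrite the query:
#   conn.execute(f"SELECT * FROM tracks WHERE name = '{user_input}'")

# Safe: the driver sends the value as a bound parameter, never as SQL text.
rows = conn.execute(
    "SELECT * FROM tracks WHERE name = ? LIMIT 100", (user_input,)
).fetchall()
assert rows == []  # the injection string matches nothing
```

The explicit `LIMIT` also illustrates the unbounded-query point above.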
Audit:
- whether Qdrant is exposed publicly
- auth configuration (if applicable)
- network segmentation in compose
- safe defaults for local dev
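The segmentation to check for, sketched as a compose fragment (service names are assumptions):

```yaml
# Illustrative docker-compose sketch: only nginx is published to the host;
# Qdrant and Postgres have no `ports:` mappings at all.
services:
  nginx:
    ports: ["443:443"]
    networks: [edge, internal]
  api:
    networks: [internal]
  qdrant:
    networks: [internal]
  postgres:
    networks: [internal]
networks:
  edge: {}
  internal:
    internal: true   # no outbound access from this network
```

Caveat: `internal: true` also blocks egress, so the `api` service will need an additional egress-capable network to reach OpenRouter; adjust accordingly.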
Check for:
- SSH assumptions and permissions guidance
- “localhost trust” that breaks in prod
- unsafe defaults in docs (e.g., “use password=changeme” without warnings)
- missing separation between dev and prod env variables
- missing rotate/backup guidance
Deliver:
- A go-live checklist (short) even if the repo is not yet “prod-ready”
Audit:
- repo structure cleanliness
- debug files, scratch scripts, temp notes
- TODOs/XXX that look sloppy or reveal internal info
- broken links or references to private resources
- license correctness and third-party attribution
Docs review:
- README accuracy
- setup.md reproducibility
- architecture consistency with code
- “canonical reference” claims (check for drift)
Deliver:
- A credibility score and top 5 changes to improve first impressions.
Focus on:
- tool boundary safety (validation, allowlists, intent gating)
- forwarding logic to DAW over WebSocket (auth, request_id, timeouts, error handling)
- streaming reliability (SSE backpressure, disconnect handling, resource cleanup)
- async correctness (no blocking calls in async paths, thread safety)
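For the streaming-reliability point, the pattern to look for is a generator with guaranteed cleanup. A minimal stdlib sketch (frame format and heartbeat value are illustrative; in Starlette/FastAPI a client disconnect cancels the generator, which lands in the `finally`):

```python
import asyncio

async def event_stream(queue: asyncio.Queue, heartbeat: float = 15.0):
    """Yield SSE frames from a queue; a None item ends the stream cleanly."""
    try:
        while True:
            try:
                item = await asyncio.wait_for(queue.get(), timeout=heartbeat)
            except asyncio.TimeoutError:
                yield ": keep-alive\n\n"  # SSE comment frame defeats proxy idle timeouts
                continue
            if item is None:              # producer signals end-of-stream
                return
            yield f"data: {item}\n\n"
    finally:
        # Release per-connection resources exactly once,
        # on normal completion or on client disconnect.
        pass
```

Streams that `await queue.get()` with no timeout, or allocate per-connection state outside a `try/finally`, are the unbounded-growth findings this section asks for.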
Look for:
- infinite loops
- unbounded memory growth
- missing timeouts
- exception swallowing
- ambiguous error codes
Deliver:
- The “highest risk architectural flaws” list (if any), with fixes.
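For the DAW-forwarding flaw class in particular, a hedged sketch of request/reply correlation with bounded waits (class and field names are invented; `send` stands in for the WebSocket write):

```python
import asyncio
import uuid

class DawBridge:
    """Correlate forwarded requests with DAW replies via request_id, with timeouts."""

    def __init__(self, send, timeout: float = 10.0):
        self._send = send          # async callable: writes one message to the DAW
        self._timeout = timeout
        self._pending: dict[str, asyncio.Future] = {}

    async def call(self, payload: dict) -> dict:
        request_id = str(uuid.uuid4())
        fut = asyncio.get_running_loop().create_future()
        self._pending[request_id] = fut
        try:
            await self._send({"request_id": request_id, **payload})
            # Bounded wait: a crashed DAW must not leak pending futures forever.
            return await asyncio.wait_for(fut, self._timeout)
        finally:
            self._pending.pop(request_id, None)

    def on_reply(self, message: dict) -> None:
        fut = self._pending.get(message.get("request_id", ""))
        if fut is not None and not fut.done():
            fut.set_result(message)
```

A forwarding layer with no timeout, or one that never removes entries from its pending map, exhibits exactly the missing-timeout and unbounded-memory flaws listed above.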
Evaluate:
- unit/integration coverage for security-critical paths (auth, validation, tool gating)
- missing tests for drift between MCP schema and validation layer
- dockerized test realism (tests run in containers that match prod)
- any flaky tests or non-determinism
Recommend:
- dependency auditing (pip-audit, safety, osv-scanner)
- secret scanning (gitleaks, trufflehog)
- pre-commit hooks (format, lint, secrets)
- CI pipeline essentials (test, lint, audit)
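These CI essentials can be sketched as a single workflow (action versions shown are plausible but should be verified and pinned; tool choices are one reasonable combination):

```yaml
# .github/workflows/ci.yml (illustrative)
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: "3.12" }
      - run: pip install -r requirements.txt ruff pip-audit
      - run: ruff check .        # lint
      - run: pytest              # tests
      - run: pip-audit           # dependency vulnerability audit
      - uses: gitleaks/gitleaks-action@v2   # secret scanning on every push
```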
Return a structured report exactly like this:
For each:
- Issue
- Why it’s dangerous
- Exact file/location (path + line range)
- Recommended fix (specific)
- Suggested regression test (if applicable)
Same structure, but not blocking.
Things that affect credibility, onboarding, and maintainability.
Call out strong engineering choices and good patterns.
- “Ship now?” (Yes/No)
- If No: list the blocking items count and expected effort (small/medium/large)
- Top 3 priorities
Where practical, include concrete patches or code snippets.
- Assume attackers will read this repo.
- Assume forks will deploy this code naïvely.
- Prioritize defensive defaults.
- Do not remove functionality — secure it.
- Be specific: file paths, line ranges, concrete fixes.