Between January 9 and January 30, 2026, our AI-assisted trading system suffered seven significant incidents. The same root causes — silent fallbacks, unverified deployments, training/inference mismatches — kept recurring despite post-mortems and a growing LESSONS_LEARNED.md. The lessons existed. The enforcement mechanism didn't.
This post is about the documentation system that emerged from those failures. Not documentation in the wiki sense, but a self-reinforcing system where every incident writes the rules that prevent the next one, and those rules are embedded in the files that AI coding agents read at the start of every session.
Post 5 ended with a tease: "I'll go deeper on the documentation system in the next post, including why we built it, how it prevents the kind of knowledge loss that contributed to our $78K incident, and how it works with AI coding agents." Here's the full story.