@arturohernandez10
Created January 30, 2026 20:40

Process Post-Mortem: What Went Wrong

1. Trusted Unverified Code Output as Ground Truth

Example: When my solver returned "max 6/7 pieces" for 7×7, I wrote:

"7×7 appears IMPOSSIBLE (searched 960 configurations)"

I recorded this as a Proven Fact at 100% confidence without ever verifying that the solver was correct. The 960 configurations should have been a red flag: that is suspiciously low for a complete search of a 49-cell board.
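
For scale, here is the kind of two-minute sanity bound that would have exposed the number. The sketch below assumes the full set of free heptominoes, since the actual piece set isn't recorded here; the point is the order of magnitude, not the exact count.

# Sanity bound, assuming all free heptominoes (actual piece set not
# recorded here). Even counting only FIRST-piece placements on a 7x7
# board gives thousands of configurations, so a "complete search" that
# visited only 960 configurations was never plausible.

def canonical(cells):
    # Canonical form under the 8 rotations/reflections + translation.
    forms, pts = [], list(cells)
    for _ in range(4):
        pts = [(c, -r) for r, c in pts]              # rotate 90 degrees
        for v in (pts, [(r, -c) for r, c in pts]):   # ...and its mirror
            mr, mc = min(r for r, _ in v), min(c for _, c in v)
            forms.append(tuple(sorted((r - mr, c - mc) for r, c in v)))
    return min(forms)

# Grow every free polyomino from 1 cell up to 7 cells.
shapes = {canonical([(0, 0)])}
for _ in range(6):
    shapes = {canonical(list(s) + [(r + dr, c + dc)])
              for s in shapes for (r, c) in s
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
              if (r + dr, c + dc) not in s}
print(len(shapes))  # 108 free heptominoes

# Count all placements of a single piece on the 7x7 board. Every
# heptomino's bounding box fits (max extent of 7 cells is 7).
placements = 0
for s in shapes:
    fixed_forms, pts = set(), list(s)
    for _ in range(4):
        pts = [(c, -r) for r, c in pts]
        for v in (pts, [(r, -c) for r, c in pts]):
            mr, mc = min(r for r, _ in v), min(c for _, c in v)
            fixed_forms.add(tuple(sorted((r - mr, c - mc) for r, c in v)))
    for f in fixed_forms:
        h, w = max(r for r, _ in f) + 1, max(c for _, c in f) + 1
        placements += (8 - h) * (8 - w)
print(placements)  # already in the thousands before a second piece

A complete search visits at least every viable first placement, so 960 total configurations should have failed the smell test immediately.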


2. Built Hypotheses on Top of Buggy Results

Example: I observed:

  • 7×7: max 6/7
  • 7×14: max 13/14
  • 14×14: max 27/28

Instead of thinking "my solver consistently fails by 1 piece - maybe there's a bug", I concluded:

"The gap is non-contiguous and unfillable! This is a LOCAL MAXIMUM!"

I rationalized the bug as a feature of the problem.
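
A worked version of the missing thought: every board in that list has area divisible by 7, and the shortfall is exactly one piece, i.e. exactly 7 uncovered cells, at every scale. That is a bug signature, not geometry.

# The shortfall is one piece -- exactly 7 cells -- at every board size.
# A genuine geometric obstruction would not scale this cleanly.
for w, h, found in [(7, 7, 6), (7, 14, 13), (14, 14, 27)]:
    full = (w * h) // 7   # pieces in a complete tiling (area / 7)
    print(f"{w}x{h}: solver reported {found}/{full} pieces, "
          f"{(full - found) * 7} cells uncovered")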


3. Pattern Recognition on Bad Data

Example: From rectangles that "worked" vs "failed", I invented:

"Pattern: Tileable ↔ pieces divisible by 3"

This was complete nonsense built on solver bugs. I even assigned it 95% confidence and pivoted my entire strategy to chase 21×21.
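
For the record, this is how the "rule" happened to fit the buggy observations, and why it pointed me at 21×21 (piece counts are just area divided by 7):

# How the bogus rule lined up with the buggy data.
for w, h in [(3, 7), (7, 7), (7, 14), (14, 14), (21, 21)]:
    pieces = (w * h) // 7
    print(f"{w}x{h}: {pieces} pieces, divisible by 3: {pieces % 3 == 0}")

One buggy success (3×7) and three buggy failures happened to sort themselves by divisibility; that coincidence earned 95% confidence.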


4. Protocol Theater Without Substance

Example: I ran protocol-evaluator audits that said "PASS" or "WARN" on process compliance:

"Confidence ladder followed correctly" "STATE.md has all required sections"

But the audits checked format, not correctness. The state file was beautifully formatted garbage.
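
In sketch form, with hypothetical names (the real evaluator isn't shown here), the audit I ran versus the audit I needed:

# Hypothetical contrast. format_audit is what my protocol-evaluator
# effectively did: it passes on well-formatted garbage.
def format_audit(state_md):
    required = ("Proven Facts", "Hypotheses", "Confidence")
    return all(section in state_md for section in required)

# What was missing: re-derive every "Proven Fact" from its evidence
# with an independent validator before letting it into STATE.md.
# facts: iterable of (claim, evidence) pairs.
def correctness_audit(facts, validator):
    return all(validator(claim, evidence) for claim, evidence in facts)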


5. Never Tested the Minimal Case

I found 3×7 was "tileable" and printed a solution:

A A B B B C C
A A B B C C C
A A A B B C C

I never verified that this solution was actually valid, i.e. that each letter region formed one of our actual heptomino shapes. A 30-second manual check would have caught the bug.
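
The check is mechanical. A minimal sketch, assuming orthogonal connectivity and 7-cell pieces; comparing each region against the specific allowed shapes would also need the shape set, which isn't reproduced here.

from collections import defaultdict

def check_tiling(grid):
    # Group board cells by piece label.
    cells = defaultdict(set)
    for r, row in enumerate(grid):
        for c, label in enumerate(row):
            cells[label].add((r, c))
    for label, region in cells.items():
        # Every piece must cover exactly 7 cells...
        assert len(region) == 7, f"{label}: {len(region)} cells"
        # ...and be orthogonally connected (flood fill from any cell).
        stack, seen = [next(iter(region))], set()
        while stack:
            r, c = stack.pop()
            if (r, c) in seen:
                continue
            seen.add((r, c))
            stack += [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (r + dr, c + dc) in region]
        assert seen == region, f"piece {label} is disconnected"

check_tiling(["AABBBCC", "AABBCCC", "AAABBCC"])
print("size and connectivity OK")

Size and connectivity happen to pass for the grid above; whether each region is one of the permitted shapes is exactly the part I skipped.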


6. Complexity Escalation Instead of Debugging

When simple backtracking "failed", I escalated to:

  • DLX (Dancing Links)
  • Parity-constrained search
  • Multi-phase solving
  • Brick-building strategies

Each added complexity on a broken foundation instead of fixing the root cause.
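
What debugging would have looked like instead, sketched with hypothetical solver names (the real interfaces aren't shown here): pin every variant against the same tiny, hand-checkable boards, and treat any disagreement as a bug to chase rather than a reason to escalate.

# Hypothetical differential test: solvers maps names to functions with
# a (width, height) -> max_pieces interface. If simple backtracking and
# DLX disagree on a 3x7 board, one of them is buggy; reaching for a
# fancier algorithm just hides which one.
def differential_test(solvers, boards=((3, 7), (7, 7), (7, 14))):
    for w, h in boards:
        results = {name: solve(w, h) for name, solve in solvers.items()}
        assert len(set(results.values())) == 1, \
            f"{w}x{h}: solvers disagree, one of them is buggy: {results}"

# differential_test({"backtracking": simple_solve, "dlx": dlx_solve})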


The Core Failure

I treated solver output as oracle truth instead of as evidence to verify.

The protocol said "update STATE.md after every action" - but STATE.md became a record of my assumptions, not verified facts. "Proven Facts" weren't proven; they were just things the buggy code told me.
