Before starting, check the git history to determine if this is a follow-up review:

```shell
git log -10 | grep -i "Co-Authored-By: Claude"
```

(Don't use `--oneline` here — co-author trailers live in the commit body, which `--oneline` hides.)

- First pass: no recent Claude co-authored commits on this branch, or the Claude commits are from a different feature.
- Follow-up pass: recent Claude co-authored commits exist from a previous `/final-review` run on this same feature.
If this is a follow-up pass:
- Note this in the summary as "Review Pass #2" (or #3, etc.)
- Tell the review agents to check git history to understand WHY recent changes were made before suggesting reversals
- Be more conservative with changes - the previous pass already applied significant improvements
- Focus agents on catching issues introduced BY the previous review, not re-litigating decisions already made
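The pass-numbering rule above can be sketched as a small shell helper. This is a hedged sketch: the function name and the 10-commit window are assumptions, not a fixed convention.

```shell
# Sketch: derive the review pass number from recent git history.
# Reads `git log` output on stdin; `pass_number` is a hypothetical name.
pass_number() {
  local claude_commits
  # count lines co-authored by Claude in the piped log output
  claude_commits=$(grep -ci "Co-Authored-By: Claude")
  echo $((claude_commits + 1))
}

# Usage: git log -10 | pass_number   (1 on a first pass, 2+ on follow-ups)
```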
First, check which branch you're on:

- If on `main`: create a new feature branch with a descriptive name based on the changes (e.g., `feature/add-user-metrics`, `fix/dashboard-loading`), then commit the changes to that branch.
- If already on a feature branch: continue with the existing branch.
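The branch rule can be captured as a pure helper (branch names here are the examples from above, and the function name is hypothetical):

```shell
# Decide which branch to commit to, given the current branch name.
target_branch() {
  local current="$1" suggested="$2"
  if [ "$current" = "main" ]; then
    echo "$suggested"   # on main: commit to a new feature branch
  else
    echo "$current"     # already on a feature branch: stay on it
  fi
}

# Usage: target_branch "$(git rev-parse --abbrev-ref HEAD)" feature/add-user-metrics
```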
Then handle the PR:
- If a PR doesn't exist for this branch, create one with a clear title and description summarizing the changes.
- If a PR already exists, push any uncommitted changes to it.
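The PR step might look like this with the GitHub CLI — a sketch assuming `gh` is installed and authenticated, wrapped in a function so both branches are visible:

```shell
# Create the PR if missing, otherwise push new commits to it.
sync_pr() {
  if gh pr view --json number >/dev/null 2>&1; then
    git push                    # PR exists: push the new commits to it
  else
    git push -u origin HEAD     # publish the branch first
    gh pr create --fill         # title/body prefilled from the commits
  fi
}
```

`--fill` uses the commit messages for the title and description; replace it with `--title`/`--body` if you want to write them explicitly.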
Use the Task tool to launch these three agents simultaneously.
Important context for all agents: If this is a follow-up pass, include in each agent's prompt:
- "Check git log to see recent commits and their messages before making recommendations"
- "If a pattern looks intentional based on recent commit messages, don't recommend reversing it without strong justification"
- "Focus on issues that may have been INTRODUCED by recent changes, not re-reviewing the entire file"
Review the code changes with these focuses:
- Are we duplicating logic that already exists elsewhere in the codebase? Search for similar patterns, helper methods, or services that we should be using instead.
- Are there other places in the codebase with similar situations where this same logic/fix should be applied? We don't want inconsistency.
- Check for opportunities to consolidate with existing utilities, concerns, or services.
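A minimal pre-check for the duplication question: before writing a new helper, search for an existing definition. The method name and directories below are illustrative, not from this codebase.

```shell
# Hypothetical search for an existing helper definition before adding one.
find_existing() {
  grep -rn "def $1" app/ lib/ 2>/dev/null
}

# Usage: find_existing format_currency
```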
Review the code changes through the lens of Uncle Bob's Clean Code principles:
- Single Responsibility: Are classes/methods doing one thing?
- Open/Closed: Can we extend without modifying?
- Look for opportunities to replace conditionals with polymorphism or strategy patterns
- Identify deeply nested if statements that could be flattened or extracted
- Flag long methods that should be decomposed
- Check for proper abstraction levels
Review for overly defensive code that could hide real issues:
- Rescue blocks that swallow exceptions silently
- Fallback values that mask nil errors we'd want to know about
- Safe navigation (`&.`) chains that hide broken assumptions
- Empty array/hash fallbacks that hide missing data
- Conditional checks that prevent useful error logs from being raised
- Any pattern that would make debugging harder in production
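A quick triage pass for these patterns can be greppable — a sketch only, since many `rescue` blocks and `&.` chains are legitimate and each hit still needs a human read:

```shell
# Report line numbers of rescue blocks and safe-navigation chains.
flag_defensive() {
  grep -n -E 'rescue|&\.' "$@"
}

# Usage: flag_defensive path/to/changed/files.rb
```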
When the three agents return their recommendations:
- **Apply most recommendations** - If you're on the fence, do it. This is a single-developer repo, so "out of scope" doesn't apply.
- **Handle conflicts intelligently** - If Agent 1 says "use existing method X" and Agent 2 says "extract to new method Y", prefer using existing code (Agent 1) to keep the codebase DRY.
- **Track what you skip** - Only skip a recommendation if you're genuinely confident it's wrong for this codebase. Note these for the summary.
- **On follow-up passes, aim for convergence** - If agents are only finding minor issues or suggesting stylistic preferences, note this in the summary. The goal is to converge, not to endlessly refactor. If changes from this pass are minimal, recommend that the user proceed without another review pass.
Run ALL of these that apply to the changes:

```shell
ruby -c path/to/changed/files.rb        # Syntax-check all changed Ruby files
rails test test/path/to/relevant/tests  # Related test files
PARALLEL_WORKERS=11 bin/rails test > ./tmp/test-results.log 2>&1  # Full suite
npx eslint path/to/changed/files.jsx    # Lint changed JS files
npm test                                # Jest suite
```

(Note the redirection order: `> file 2>&1` captures both streams in the log; `2>&1 > file` would leave stderr on the terminal.)

Use test accounts to verify data contracts with real data. Define these in your CLAUDE.md:
- A default test account for most scenarios
- Accounts that exercise different feature flags or integrations
- Accounts with realistic team sizes
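A CLAUDE.md entry for this might look like the following — every account name here is made up for illustration:

```markdown
## Test accounts (local)
- Default: [email protected] - most scenarios
- Feature flags: [email protected] - beta dashboards enabled
- Integrations: [email protected] - Slack + calendar connected
- Large team: [email protected] - ~50 members
```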
Run rails runner scripts to verify services return expected shapes.
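A hedged example of such a shape check — `DashboardStats`, the account id, and the expected keys are all hypothetical names, not from this codebase:

```shell
# Print the keys a service returns so they can be compared against
# what the frontend expects.
check_service_shape() {
  rails runner 'puts DashboardStats.new(Account.find(1)).call.keys.sort.inspect'
}
```

Run it once per test account and diff the printed keys against the consuming view or serializer.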
Based on the code changes, identify which pages/features are affected. Then:
- List out each manual smoke test you would normally ask me to do
- Use the Chrome automation tools to test each one yourself
- Login as `[email protected]` / `thisisnotreallyapassword`
- Navigate to affected pages on port 3100
- Verify UI renders correctly, interactions work, data displays properly
After all fixes and tests pass, commit and push the changes to the PR.
Provide a summary with these sections:
- State which pass this is (e.g., "Review Pass #1" or "Review Pass #2")
- If follow-up pass, briefly note what the previous pass addressed
- List the recommendations you implemented from each agent
- For each skipped item, explain WHY you decided not to do it
- Remember: "out of scope" is not a valid excuse in a single-developer repo
- Which tests passed
- What was verified via rails runner
- What was smoke tested via Chrome
- List anything that couldn't be tested and why
- Explain what you'd want me to manually verify
- If this pass made substantial changes (new classes, significant refactoring), recommend running `/final-review` again
- If changes were minor (small tweaks, style fixes), recommend proceeding to merge
- Be honest: "This pass was substantial - I'd recommend one more review" or "Changes were minimal - ready to merge"