This workflow applies to ANY project and ensures high-quality, reviewable code that passes all project-specific quality checks.
At the start of EVERY session:
- Read the project's `CLAUDE.md` or equivalent AI instructions file
- Check for project-specific instruction files (`.github/instructions/`, `docs/`, etc.)
- Identify the project's:
- Linter command
- Formatter command
- Type checker command (if applicable)
- Test command
- Comprehensive quality check command
- Similar component/code patterns to study
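The discovery step above can be partly scripted. A minimal sketch, with the caveat that the marker files and tool names below are common conventions, not guarantees — always confirm against the project's own instructions file:

```shell
#!/bin/sh
# Sketch only: infer likely tooling from marker files. Confirm against
# CLAUDE.md or CONTRIBUTING -- these mappings are conventions, not rules.
detect_tooling() {
  dir="$1"
  if [ -f "$dir/package.json" ]; then
    echo "node"    # read package.json "scripts" for lint/format/test/check
  elif [ -f "$dir/pyproject.toml" ]; then
    echo "python"  # look for [tool.ruff], [tool.black], [tool.mypy] sections
  elif [ -f "$dir/go.mod" ]; then
    echo "go"      # gofmt, golangci-lint, go test are the usual defaults
  elif [ -f "$dir/Gemfile" ]; then
    echo "ruby"    # rubocop, rspec/minitest
  else
    echo "unknown" # fall back to reading the project's docs
  fi
}
detect_tooling "${1:-.}"
```

This only narrows the search; the authoritative answer is always the project's own quality-check commands.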
Before touching any code:
- Capture the feature goal, risks, and acceptance criteria
- Review ALL project-specific instruction files
- Research similar components/patterns in the codebase:
- Find files that do something similar to your feature
- Study their structure, naming, and patterns
- Plan to match existing patterns EXACTLY (spacing, naming, architecture)
- Confirm scope and success metrics
- Document implementation approach and acceptance criteria
Key Questions:
- What existing code does something similar?
- What patterns does this project use?
- What quality checks does this project require?
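Answering the first question usually starts with a plain text search. A hedged sketch (the pattern and path in the usage comment are hypothetical):

```shell
#!/bin/sh
# Sketch: list up to five files that already use a given pattern, as
# starting points for studying structure, naming, and architecture.
find_similar() {
  pattern="$1"
  dir="$2"
  grep -rl -- "$pattern" "$dir" 2>/dev/null | head -5
}
# e.g. find_similar "useQuery" src/   (hypothetical pattern and path)
```

Tools like `rg` (ripgrep) do the same job faster; the point is to read a few existing examples before writing anything new.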
For each code change:
- Make focused changes with small, reviewable commits
- Follow the project's established architecture patterns
- After editing ANY file, IMMEDIATELY run:
- Project's formatter (e.g., `prettier`, `black`, `gofmt`)
- Project's linter with auto-fix (e.g., `eslint --fix`, `rubocop -a`)
- Test changes as you go (simulator/browser/CLI - don't wait until the end)
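The format-immediately habit can be wrapped in a tiny dispatcher. A sketch — it echoes tool names rather than running them, and the mappings are examples to replace with the project's own commands:

```shell
#!/bin/sh
# Sketch: map a just-edited file to the formatter/linter pair that
# should run on it. Echoes tool names; a real hook would execute e.g.
# `npx prettier --write "$f" && npx eslint --fix "$f"` instead.
tools_for() {
  case "$1" in
    *.js|*.jsx|*.ts|*.tsx) echo "prettier eslint" ;;
    *.py)                  echo "black ruff" ;;
    *.rb)                  echo "rubocop" ;;
    *.go)                  echo "gofmt golangci-lint" ;;
    *)                     echo "none" ;;
  esac
}
tools_for "src/app.py"
```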
Pattern Matching:
- Use the same spacing/indentation as existing code
- Use the same naming conventions
- Use the same component/module structure
- Use the same styling/class patterns
Follow the project's testing philosophy:
- Check project's testing instructions/guidelines
- Determine what SHOULD be tested:
- Always: Business logic, utilities, critical paths
- Sometimes: Complex UI interactions, edge cases
- Never: Simple components, third-party libraries, trivial code
- Follow project's test structure and naming
- Prefer test-driven development when appropriate
Common philosophies:
- TDD Projects: Write failing test first, then implementation
- Utility-focused: 100% coverage for utilities, minimal for UI
- Integration-focused: Test user flows, not implementation details
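The always/sometimes/never tiers can be expressed as a coarse triage. A sketch in which the directory conventions are assumptions — adapt them to whatever layout the project actually uses:

```shell
#!/bin/sh
# Sketch: coarse triage of whether a path deserves tests, mirroring the
# tiers above. Directory names are hypothetical conventions.
test_tier() {
  case "$1" in
    */vendor/*|*/node_modules/*)    echo "never" ;;     # third-party code
    */utils/*|*/lib/*|*/services/*) echo "always" ;;    # business logic, utilities
    */components/*|*/ui/*)          echo "sometimes" ;; # only complex interactions
    *)                              echo "judge" ;;     # decide case by case
  esac
}
```

A script like this is a reminder, not a policy; the project's own testing guidelines win every time.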
Execute ALL quality checks until they pass:
- Project's type checker (if applicable):
- TypeScript: `tsc`, `pnpm typecheck`
- Python: `mypy`
- Go: built into the compiler
- Project's linter:
- JavaScript/TypeScript: `eslint`
- Python: `pylint`, `flake8`, `ruff`
- Ruby: `rubocop`
- Go: `golangci-lint`
- Project's test suite:
- Run full suite or affected tests
- Ensure 100% pass rate
- Project's comprehensive check (if exists):
- Often named: `check`, `verify`, `ci`, `test:all`
- Manual testing in target environment:
- Web: Browser testing
- Mobile: Simulator/device testing
- CLI: Command-line testing
- API: Endpoint testing
- Iterate on code and tests until ALL checks pass cleanly
- Address any feedback from automated tools
- Keep commits tidy (amend/squash if needed before PR)
- Ensure manual testing shows expected behavior
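The gate above can be driven by a small runner that stops at the first failure. A sketch — the commands in the usage comment are placeholders for the project's own:

```shell
#!/bin/sh
# Sketch: run each quality check in order, stopping at the first
# failure. Substitute the project's real commands, e.g.
#   run_gate "pnpm typecheck" "pnpm lint" "pnpm test" "pnpm check"
run_gate() {
  for cmd in "$@"; do
    echo "==> $cmd"
    if ! sh -c "$cmd"; then
      echo "FAILED: $cmd"
      return 1
    fi
  done
  echo "all checks passed"
}
```

Ordering cheap checks (formatter, type checker) before slow ones (full test suite) keeps the feedback loop tight.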
Before committing, verify:
- All project instruction files reviewed and patterns followed
- Similar components studied and patterns matched exactly
- Formatter run on all modified files
- Linter run with auto-fix on all modified files
- Type checker passes (if applicable)
- All tests pass (or document why tests aren't needed)
- Feature tested in target environment and works as expected
- Only relevant files staged (`git status` is clean)
- No unrelated changes included
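The staged-files check can be made explicit before committing. A minimal sketch:

```shell
#!/bin/sh
# Sketch: show exactly what the next commit would include, so unrelated
# files are caught before `git commit`.
staged_files() {
  git -C "${1:-.}" diff --cached --name-only
}
# Review the list, then unstage strays with: git restore --staged <file>
```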
- Stage only relevant files:

  ```shell
  git add path/to/file1 path/to/file2
  ```

- Commit with conventional format + Co-Authored-By:

  ```shell
  git commit -m "$(cat <<'EOF'
  type(scope): brief description

  Longer description explaining the change and why it was needed.

  Changes:
  - Specific change 1
  - Specific change 2

  Relates to: [ISSUE-ID]

  🤖 Generated with [Claude Code](https://claude.com/claude-code)

  Co-Authored-By: Claude <[email protected]>
  EOF
  )"
  ```
Common commit types:
- `feat`: New feature
- `fix`: Bug fix
- `refactor`: Code restructuring without behavior change
- `test`: Adding or updating tests
- `docs`: Documentation changes
- `chore`: Maintenance tasks
- `perf`: Performance improvements
- `style`: Formatting changes
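The `type(scope): description` subject format can be checked mechanically, for example in a commit-msg hook. A sketch whose type list mirrors the one above:

```shell
#!/bin/sh
# Sketch: validate a commit subject line against the conventional
# format `type(scope): description`; the scope part is optional.
is_conventional() {
  echo "$1" | grep -Eq '^(feat|fix|refactor|test|docs|chore|perf|style)(\([a-z0-9-]+\))?: .+'
}
```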
- Push to remote:

  ```shell
  git push -u origin branch-name
  ```
For features that modify user-facing functionality or data available to AI/integrations:
- Identify documentation that needs updating:
- User guides or API documentation
- System architecture diagrams
- Feature availability matrices
- AI prompt templates or context
- Integration guides
- Update relevant documentation files before creating PR
- Commit documentation separately:

  ```shell
  git add docs/
  git commit -m "$(cat <<'EOF'
  docs: Update documentation for [FEATURE] implementation

  Document implementation details and constraints:
  - [Key implementation detail 1]
  - [Key implementation detail 2]
  - [Edge cases or limitations]

  Updated files:
  - [file 1]: [what changed]
  - [file 2]: [what changed]

  This ensures team members and integrations have accurate information
  about actual system behavior and available functionality.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)

  Co-Authored-By: Claude <[email protected]>
  EOF
  )"
  git push
  ```
Examples requiring documentation updates:
- New API endpoints or GraphQL fields
- Changes to data schemas or validation rules
- New features visible to users or AI systems
- Modifications to existing behavior or constraints
- New configuration options or environment variables
Skip documentation if:
- Pure refactoring with no external API changes
- Bug fixes that don't change documented behavior
- Internal implementation details not exposed externally
- Use project's PR tool:
- GitHub: `gh pr create`
- GitLab: `glab mr create`
- Other: Web UI or project-specific CLI
- Include comprehensive PR description:

  ```markdown
  ## Summary
  [What was added/changed and why]

  ## Implementation Details
  [Key technical decisions and patterns used]

  ## Test Plan
  - [ ] Test case 1
  - [ ] Test case 2
  - [ ] Manually tested in [environment]

  ## Related Issues
  Closes: [ISSUE-ID]

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  ```
- PR description should include:
- Summary of changes
- Implementation details
- Test plan with checkboxes
- Related issue references
- Screenshots/videos (if UI changes)
- Wait for CI to pass before requesting review
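Waiting for CI can be automated with a polling loop. A sketch in which the status command is a placeholder — on GitHub it might be `gh pr checks` (assuming the `gh` CLI is installed, which also offers its own `--watch` mode):

```shell
#!/bin/sh
# Sketch: poll a CI status command until it succeeds or attempts run
# out. The command is whatever reports green for your platform.
wait_for_ci() {
  cmd="$1"; tries="${2:-30}"; delay="${3:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "CI green"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "CI still failing after $tries attempts"
  return 1
}
```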
- Compare changes against plan and acceptance criteria
- Update plan or add notes if implementation diverged
- Ensure reviewers have full context in PR description
- Verify all automated checks pass in CI
JavaScript/TypeScript:
- Format: `prettier --write`
- Lint: `eslint --fix`
- Type: `tsc` or `pnpm typecheck`
- Test: `jest`, `vitest`, `mocha`
Python:
- Format: `black`, `autopep8`
- Lint: `pylint`, `flake8`, `ruff`
- Type: `mypy`
- Test: `pytest`, `unittest`
Ruby:
- Format: `rubocop -a` (auto-correct)
- Lint: `rubocop`
- Test: `rspec`, `minitest`
Go:
- Format: `gofmt`, `goimports`
- Lint: `golangci-lint`
- Type: built into the compiler
- Test: `go test`
Mobile (React Native / Flutter):
- Format: `prettier --write` (RN) or `dart format` (Flutter)
- Lint: `eslint --fix` (RN) or `dart analyze` (Flutter)
- Type: `tsc` (RN) or built-in (Flutter)
- Test: `jest` (RN) or `flutter test` (Flutter)
- Manual: iOS/Android simulator testing
- Read before writing - Always study existing patterns
- Format as you go - Run formatters after EVERY file edit
- Test continuously - Don't wait until the end
- Match patterns exactly - Consistency > personal preference
- Commit cleanly - Only relevant files, proper message format
- Document thoroughly - PRs should be self-explanatory
- Wait for CI - Don't request review until green
When blocked, ALWAYS:
- Document why the standard workflow can't be followed
- Get explicit approval from the user before deviating
- Add notes to commit/PR explaining the deviation
Never:
- Skip formatters/linters to save time
- Commit failing tests "to fix later"
- Include unrelated changes
- Skip manual testing
- Create PRs before CI passes
This workflow assumes:
- Git-based version control
- Some form of CI/CD
- Code review process
- Project-specific quality tools
Adapt as needed for specific projects, but maintain the core principles.