
@theotherzach
Last active December 5, 2025 16:14
Autonomous Claude Code Workflow for Multi-Repo Development

Here’s the exact workflow for autonomous agents operating across two repos:

Setup Phase

1. Configure Claude Code for Multi-Repo Work

# In your workspace directory
mkdir my-workspace
cd my-workspace

# Clone both repos
git clone [email protected]:yourorg/rails-api.git
git clone [email protected]:yourorg/react-frontend.git

# Create a workspace config file
cat > .claude-code-config.json << 'EOF'
{
  "repos": {
    "api": "./rails-api",
    "frontend": "./react-frontend"
  },
  "conventions": {
    "branchPrefix": "claude/",
    "commitStyle": "conventional"
  }
}
EOF

2. Create Team Conventions File

# In workspace root
cat > TEAM_CONVENTIONS.md << 'EOF'
# Rails API Conventions
- Use service objects for complex business logic (app/services/)
- Controller actions: validate params with strong_params, call service, return JSON
- Always include error handling with rescue_from
- Tests: RSpec with FactoryBot
- API responses follow JSON:API format

# React/TypeScript Conventions  
- Functional components with TypeScript interfaces
- Hooks: useState, useEffect, custom hooks in /hooks
- API calls via axios in /services/api
- Component structure: components/FeatureName/ComponentName.tsx
- Styling: Tailwind CSS utility classes
- Props interfaces named ComponentNameProps

# Git Conventions
- Branch naming: feature/description or fix/description
- Commit format: type(scope): message
- PRs require: description, testing notes, screenshots for UI
EOF
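
The commit-format rule above is mechanical enough to check automatically. Here is a sketch of a commit-message lint in TypeScript; the allowed type list is an assumption for illustration, not part of the conventions file:

```typescript
// Illustrative check for the "type(scope): message" convention above.
// The allowed types are an assumption; adjust to your team's actual list.
const COMMIT_TYPES = ["feat", "fix", "refactor", "chore", "test", "docs"];

function isConventionalCommit(message: string): boolean {
  // Matches e.g. "feat(api): add endpoint" or "fix: handle missing user";
  // the scope in parentheses is optional.
  const pattern = new RegExp(
    `^(${COMMIT_TYPES.join("|")})(\\([a-z0-9-]+\\))?: .+`
  );
  return pattern.test(message);
}
```

Wired into a commit hook or CI step, this keeps agent-authored commits consistent without manual review.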

Exact Workflow

Session 1: Initial Feature Creation

Your Prompt to Claude Code:

Read TEAM_CONVENTIONS.md. I need a new feature across both repos:

BACKEND (rails-api repo):
- New GET endpoint: /api/v1/user_stats/:user_id
- Returns: { user_id, total_orders, total_spent, member_since, loyalty_tier }
- Add route, controller action, service object for business logic
- Include error handling for user not found
- Write RSpec tests

FRONTEND (react-frontend repo):
- New component: UserStatsCard
- Displays the 4 stats in a card layout with icons
- Fetches from the API endpoint
- Shows loading state and error handling
- Responsive design with Tailwind

Create feature branches in both repos, implement the code, run tests, and commit with proper messages. Show me what you're doing at each step.

Claude Code will:

  1. Switch to the rails-api repo, create branch feature/user-stats-endpoint
  2. Create files:
  • config/routes.rb - add route
  • app/controllers/api/v1/user_stats_controller.rb
  • app/services/user_stats_service.rb
  • spec/requests/api/v1/user_stats_spec.rb
  • spec/services/user_stats_service_spec.rb
  3. Run bundle exec rspec to verify tests pass
  4. Commit: feat(api): add user stats endpoint
  5. Switch to the react-frontend repo, create branch feature/user-stats-card
  6. Create files:
  • src/components/UserStats/UserStatsCard.tsx
  • src/components/UserStats/UserStatsCard.types.ts
  • src/services/api/userStats.ts
  • src/components/UserStats/__tests__/UserStatsCard.test.tsx
  7. Run npm test to verify tests pass
  8. Commit: feat(ui): add user stats card component
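
Because the conventions specify JSON:API responses, the frontend service (src/services/api/userStats.ts above) has to unwrap the envelope before the component sees the data. Here is a minimal sketch of that mapping, assuming a standard data/attributes envelope; the UserStats interface and mapUserStats name are illustrative:

```typescript
// Shape the component consumes; mirrors the endpoint spec in the prompt.
interface UserStats {
  userId: number;
  totalOrders: number;
  totalSpent: number;
  memberSince: string;
  loyaltyTier: string;
}

// JSON:API wraps attributes in a data envelope; unwrap and camelCase here
// so the rest of the frontend never touches the wire format.
function mapUserStats(payload: {
  data: { id: string; attributes: Record<string, unknown> };
}): UserStats {
  const a = payload.data.attributes;
  return {
    userId: Number(payload.data.id),
    totalOrders: Number(a["total_orders"]),
    totalSpent: Number(a["total_spent"]),
    memberSince: String(a["member_since"]),
    loyaltyTier: String(a["loyalty_tier"]),
  };
}
```

Keeping the mapper pure also gives the agent an easy unit-test target that does not need a running API.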

Session 2: Review & Feedback

Your Prompt:

Show me the diff for both repos. Specifically:
1. The controller code in rails-api
2. The React component in react-frontend

Claude Code shows you the code. You review and find issues:

Your Feedback:

Changes needed:

BACKEND:
- Add caching to the service object (Rails.cache, 5 min TTL)
- Change loyalty_tier to return enum string not integer
- Add pagination params even though we're not using them yet

FRONTEND:
- Use our custom useApi hook instead of raw axios
- Add a refresh button to the card
- Icons should be from lucide-react library
- Add prop-types for runtime validation

Make these changes and commit them.

Claude Code will:

  1. Switch to rails-api, make changes:
  • Update app/services/user_stats_service.rb with caching
  • Modify enum serialization
  • Add pagination params to controller
  2. Run tests, fix any failures
  3. Commit: refactor(api): add caching and improve user stats response
  4. Switch to react-frontend:
  • Update to use useApi hook
  • Add refresh button with onClick handler
  • Import and use lucide-react icons
  • Add PropTypes
  5. Run tests, fix failures
  6. Commit: refactor(ui): improve user stats card with hooks and icons
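
The loyalty_tier change is the subtle one: Rails enums serialize to integers by default, so something has to translate. A TypeScript sketch of a tolerant normalizer follows; the tier names and their ordering are hypothetical, and the Rails enum declaration would be the real source of truth:

```typescript
// Hypothetical tier ordering; the Rails enum declaration is authoritative.
const LOYALTY_TIERS = ["bronze", "silver", "gold", "platinum"] as const;

type LoyaltyTier = (typeof LOYALTY_TIERS)[number];

// Accept either form so the frontend tolerates the API both before and
// after the enum-serialization fix lands on the backend.
function normalizeLoyaltyTier(value: number | string): LoyaltyTier {
  if (typeof value === "number") {
    const tier = LOYALTY_TIERS[value];
    if (!tier) throw new RangeError(`unknown loyalty tier index: ${value}`);
    return tier;
  }
  if ((LOYALTY_TIERS as readonly string[]).includes(value)) {
    return value as LoyaltyTier;
  }
  throw new RangeError(`unknown loyalty tier: ${value}`);
}
```

A shim like this lets the two PRs merge in either order without breaking the card.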

Session 3: Integration Testing

Your Prompt:

In rails-api:
- Start the dev server
- Create a test user and generate sample data
- Test the endpoint with curl

In react-frontend:
- Update the API base URL to point to local Rails server
- Start the dev server
- Give me the URL to view the component in isolation

Claude Code will:

  1. API repo: rails s -p 3000
  2. Run rails console commands to seed data
  3. Execute curl command and show response
  4. Frontend repo: Update .env.development
  5. Run npm start
  6. Create a test page or Storybook story for isolated viewing
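
The curl check in step 3 can be turned into a pass/fail instead of eyeballed JSON. Here is a sketch of a response-shape validator; the field list comes from the Session 1 endpoint spec, and the function name is illustrative:

```typescript
// Fields the endpoint promised in Session 1's prompt.
const REQUIRED_FIELDS = [
  "user_id",
  "total_orders",
  "total_spent",
  "member_since",
  "loyalty_tier",
] as const;

// Returns the missing fields; an empty array means the shape is valid,
// which makes this easy to wire into a smoke-test script.
function missingStatsFields(body: Record<string, unknown>): string[] {
  return REQUIRED_FIELDS.filter((field) => !(field in body));
}
```

Piping the curl output through a check like this gives the agent an unambiguous success signal during integration testing.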

Session 4: Create Pull Requests

Your Prompt:

Everything looks good. Create PRs for both repos:

PR Title format: "[Feature] User Stats Display"

PR Description should include:
- What this adds
- Backend changes summary
- Frontend changes summary  
- Testing steps
- Link to related PR in other repo

Push both branches and give me the PR URLs.

Claude Code will:

  1. In rails-api:

    git push origin feature/user-stats-endpoint
    gh pr create --title "[Feature] User Stats Display - API" \
      --body "$(cat pr-template-api.md)"
  2. In react-frontend:

    git push origin feature/user-stats-card
    gh pr create --title "[Feature] User Stats Display - UI" \
      --body "$(cat pr-template-frontend.md)"
  3. Returns both PR URLs with links between them
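
The PR description format from the prompt is mechanical enough to generate rather than hand-write each time. A sketch of a body builder in TypeScript; all names are illustrative, and the related-PR URL is optional because it only exists after the sibling PR is opened:

```typescript
// Assembles the PR description format requested in the Session 4 prompt.
interface PrBodyInput {
  summary: string;
  backendChanges: string;
  frontendChanges: string;
  testingSteps: string[];
  relatedPrUrl?: string; // filled in once the sibling PR exists
}

function buildPrBody(input: PrBodyInput): string {
  const lines = [
    "## What this adds",
    input.summary,
    "## Backend changes",
    input.backendChanges,
    "## Frontend changes",
    input.frontendChanges,
    "## Testing steps",
    ...input.testingSteps.map((step, i) => `${i + 1}. ${step}`),
  ];
  if (input.relatedPrUrl) {
    lines.push("## Related PR", input.relatedPrUrl);
  }
  return lines.join("\n");
}
```

The output can be written to a temp file and passed to gh pr create --body-file, which avoids shell-quoting issues with multi-line bodies.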

Advanced: Autonomous Agent Prompt

For fully autonomous operation, use this mega-prompt:

Act as an autonomous development agent. Read TEAM_CONVENTIONS.md.

TASK: Implement user statistics feature
- Backend: GET /api/v1/user_stats/:user_id endpoint
- Frontend: UserStatsCard component
- Follow all team conventions
- Write tests for everything
- Handle errors appropriately

WORKFLOW:
1. Create feature branches in both repos
2. Implement backend first, run tests
3. Implement frontend, run tests  
4. Run both servers and verify integration
5. If tests fail, debug and fix
6. Show me the implementation when ready
7. Wait for my feedback
8. Make any requested changes
9. Create PRs when I approve

DECISION MAKING:
- If you're unsure about business logic, ASK ME
- If tests fail, try to fix them autonomously first
- If you need external dependencies, ASK before installing
- Commit after each logical unit of work

Begin by telling me your implementation plan.

Pro Tips

For Context Persistence:

  • Create a TASK.md file in workspace root that Claude Code can reference
  • Update it as requirements change
  • Include links to Slack threads, Figma designs, etc.

For Code Review Agent:

# After Claude Code creates PR
claude-code "Review the PR at [URL]. Check for:
- Security issues
- Performance problems  
- Missing tests
- Convention violations
Add review comments directly to the PR."

For Automated Fixes:

# When PR gets feedback
claude-code "Address all comments on PR #123 in rails-api repo. 
Make changes, run tests, push updates, and reply to each comment."

This workflow gives you autonomous agents that can operate across repositories while keeping you in control of approvals and major decisions.

davemo commented Dec 5, 2025

Hey Zach,

Thanks for sharing this—it's a solid foundation, and I can tell you've put thought into the structure. Here are some observations that might help refine it:

On the Multi-Repo Approach

The separate Rails API + React frontend setup feels a bit dated. Most teams I'm seeing have moved back toward monorepos (or Rails with Hotwire/Turbo), which sidesteps the context-switching overhead entirely. If you're committed to separate repos, you might run into drift between duplicated workflow configs. A monorepo with server and client at the root would simplify the Claude Code workflow significantly—no repo switching, shared conventions in one place.

Team Conventions Need More Teeth

The conventions file is a good idea, but it's too vague to consistently shape output. A few specifics:

  • "Functional components with TypeScript interfaces" — This could mean a lot of things. Be more prescriptive about prop typing patterns, whether you prefer inline types vs separate interface files, etc.

  • Hooks guidance — I'd actually add explicit directives to minimize useState and especially useEffect. LLMs love to reach for these, and you often end up with overcomplicated state management. Something like "prefer derived state over useState; avoid useEffect for data fetching" would help.

  • Custom hooks in /hooks — I'd push back on this as a default pattern. Colocation (keeping hooks in the same file as the component using them) reduces complexity. The reuse-first mindset creates orchestration overhead that rarely pays off.

  • Axios — Has fallen out of favor. Most teams are on React Query / TanStack Query now, or just native fetch. Worth updating if you want the conventions to feel current.
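
To make the native-fetch suggestion concrete, the userStats service could drop axios along these lines; this is a sketch, with the URL builder split out so it is testable without a server, and the endpoint path taken from the gist:

```typescript
// Pure URL builder, kept separate from the network call so it can be
// unit tested offline. The path comes from the gist's endpoint spec.
function userStatsUrl(baseUrl: string, userId: number): string {
  return `${baseUrl.replace(/\/$/, "")}/api/v1/user_stats/${userId}`;
}

// Native-fetch replacement for the axios service; TanStack Query's
// queryFn could call this directly.
async function fetchUserStats(baseUrl: string, userId: number): Promise<unknown> {
  const res = await fetch(userStatsUrl(baseUrl, userId));
  if (!res.ok) throw new Error(`user stats request failed: ${res.status}`);
  return res.json();
}
```
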

Workflow Sequencing

Your four-session structure is logical, but I'd reorder a few things based on what's worked for me:

  1. Let feature creation run further before review — I usually let Claude push commits to a branch and get something working end-to-end before doing detailed code review. Intermediate reviews can create churn.

  2. Manual testing before PR — I do hands-on testing in the browser first, iterate on the interaction/UX, and then open the PR. The code review happens after I'm happy with what I'm actually seeing.

  3. Integration tests last — I write the Playwright tests after the feature is stable, not during initial implementation. They're a way to lock in behavior, not discover it.

Integration Testing Specifics

Session 3 is underspecified on how testing happens. I'd recommend:

  • Playwright with UI mode — The Playwright MCP has been really effective for this kind of thing
  • Chrome DevTools MCP — For lower-level browser interaction if needed

Just "start the dev server and give me a URL" leaves a lot on the table.

Persistence & Collaboration

One thing I'd add: push plans and context to GitHub Issues, not just chat history. When Claude creates an implementation plan, having it file an issue means:

  • Other team members can see the reasoning
  • Intermediate artifacts are preserved (not just final code)
  • PRs can reference the issue for full context

This is especially valuable if you're working with a team vs. solo.

RSpec Tests

"Write RSpec tests" is underspecified. If you have examples of good vs. bad tests in your codebase, link to them or include a nested directive. LLMs produce much better test output when they have concrete examples of your testing style.


Overall, the bones are good—especially the explicit approval gates and the "ask me if unsure" directives. The main opportunities are: tightening the conventions so output is more consistent, and adjusting the workflow sequence to match how iteration actually happens in practice.

Hope this helps!
