Duration: 15-20 minutes + demos
Speaker: John Lindquist
Tool: markdown-agent (ma) - AI agents defined as markdown files
"What if you could
fork()an AI agent the way you fork a process? What if you could run 10 Copilots in parallel, each in their own git worktree, and aggregate the results?"
The thesis: Software is now instructions + tools. The shift isn't learning how to use AI tools—it's learning how to orchestrate them.
- Chat interfaces are designed for humans, not pipelines
- You can't pipe a conversation
- You can't parallelize a chat window
- Everything is a file → Everything is a markdown file
- Pipes and filters → Agent output becomes agent input
- Spawn and wait → Run agents like subprocesses
```bash
# This is how we've always composed tools
cat file.txt | grep pattern | sort | uniq

# This is how we compose agents
ma analyze.copilot.md | ma summarize.claude.md | ma decide.copilot.md
```

- Each agent needs isolation (no merge conflicts mid-operation)
- Each agent can work on a different branch
- You can run N agents on N features simultaneously
```
project/
├── main/              # Your primary checkout
└── .worktrees/
    ├── feature-auth/  # Copilot 1 working here
    ├── feature-api/   # Copilot 2 working here
    ├── feature-ui/    # Copilot 3 working here
    └── feature-tests/ # Copilot 4 working here
```
```bash
# Create 4 worktrees
wt new feature-auth feature-api feature-ui feature-tests

# Spawn 4 Copilots in parallel
parallel 'cd .worktrees/{} && ma implement.copilot.md' ::: feature-auth feature-api feature-ui feature-tests

# Watch them all work simultaneously (tmux split view)
```

| Tier | Agent | Cost | Speed | Use Case |
|---|---|---|---|---|
| 1 | Copilot | $ | Fast | Boilerplate, simple fixes |
| 2 | Claude Sonnet | $$ | Medium | Complex refactoring |
| 3 | Claude Opus | $$$ | Slow | Architecture decisions |
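One way to act on this tiering from the shell is a small router that maps a task type to an agent file. A minimal sketch; the agent filenames and task categories here are hypothetical, not anything ma ships with:

```bash
# Hypothetical tier router: pick the cheapest agent that can handle the task.
# The three agent files are placeholders you would write yourself.
route() {
  case "$1" in
    boilerplate|fix) ma task.copilot.md ;;        # Tier 1: cheap, fast
    refactor)        ma task.sonnet.claude.md ;;  # Tier 2: complex refactoring
    architecture)    ma task.opus.claude.md ;;    # Tier 3: slow, thorough
    *) echo "unknown task type: $1" >&2; return 1 ;;
  esac
}

route refactor < brief.md > result.md
```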
An escalation policy can live in the agent file itself:

```markdown
---
command: copilot
escalate-to: claude
escalate-when: "confidence < 0.7"
---
Implement the authentication middleware.
If you're uncertain about the security implications, escalate.
```

- Fan out with cheap agents to explore options
- Aggregate the results
- Escalate to expensive agent for final decision
```bash
# Map: Each agent analyzes one module and writes its report to results/
mkdir -p results
for module in src/*/; do
  ma analyze.copilot.md --dir "$module" > "results/$(basename "$module").md" &
done
wait

# Reduce: One agent synthesizes all analyses
cat results/*.md | ma synthesize.claude.md
```

```bash
# Ask 3 agents the same question
ma answer.copilot.md > a.md &
ma answer.gemini.md > b.md &
ma answer.claude.md > c.md &
wait
# Find consensus
ma vote.copilot.md --a a.md --b b.md --c c.md
```

```bash
# Each stage can fail independently
ma plan.copilot.md > plan.md || exit 1
ma implement.copilot.md < plan.md > impl.md || exit 1
ma review.claude.md < impl.md > review.md || exit 1
ma fix.copilot.md < review.md > final.md
```

- Show 3 agents answering "How should we structure this API?"
- Display results side-by-side
- Final agent synthesizes the best approach
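For the side-by-side display, plain coreutils are enough. A minimal sketch, assuming the three answers were written to `a.md`, `b.md`, and `c.md` as in the consensus pattern above:

```bash
# Merge the three answers into parallel columns (-m) without headers (-t)
pr -mt -w 180 a.md b.md c.md
```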
Fan out when:
- Tasks are independent (different files, different features)
- You need multiple perspectives (code review, architecture)
- Exploration is cheap (trying 5 approaches, picking the best)

Go sequential when:
- Tasks have dependencies (tests rely on implementation)
- Context accumulates (each step builds on the last)
- Decisions are irreversible (database migrations)
```
IF tasks share no files AND can succeed independently
  → PARALLELIZE
ELSE IF later task needs output of earlier task
  → SEQUENCE
ELSE
  → Start parallel, checkpoint, sequence the rest
```
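The third branch maps naturally onto `&` and `wait`: fan out, block at the checkpoint, then run the irreversible step alone. A sketch with hypothetical agent files `explore.copilot.md` and `migrate.claude.md`:

```bash
# Phase 1: fan out the independent exploration work
ma explore.copilot.md --dir src/auth > auth.md &
ma explore.copilot.md --dir src/api > api.md &

# Checkpoint: block until every parallel agent has finished
wait

# Phase 2: the irreversible step runs once, with the full context
cat auth.md api.md | ma migrate.claude.md
```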
Setup:
- 4 git worktrees (pre-created)
- 4 feature specs as markdown files
- tmux with 4 panes
The Demo:
```bash
# Terminal 1: Launch the orchestra
./orchestrate.sh feature-auth feature-api feature-ui feature-tests

# What's happening:
# - Each worktree gets its own Copilot
# - Each Copilot reads its feature spec
# - Progress streams to a central log
# - When all complete, a Claude agent reviews all PRs
```

Show:
- The 4 agents starting simultaneously
- Real-time progress in split panes
- One agent finishing and creating a PR
- The final Claude review aggregating all changes
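If you want to reproduce the split-pane view yourself, here is a minimal tmux sketch; the pane layout and log paths are assumptions based on orchestrate.sh in the appendix:

```bash
# Four panes in a 2x2 grid, each tailing one agent's log
tmux new-session -d -s orchestra
tmux split-window -h -t orchestra
tmux split-window -v -t orchestra:0.0
tmux split-window -v -t orchestra:0.2
i=0
for feature in feature-auth feature-api feature-ui feature-tests; do
  tmux send-keys -t "orchestra:0.$i" "tail -F logs/$feature.log" C-m
  i=$((i + 1))
done
tmux attach -t orchestra
```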
The old model:

```
Developer → [AI Assistant] → Code
(Human in the loop, one conversation at a time)
```

The new model:

```
Developer → [Orchestration Script] → [Agent₁] → Output
                                   → [Agent₂] → Output
                                   → [Agent₃] → Output
                                              ↓
                                   [Aggregator Agent] → Final Result
```
"The future of software development isn't about being good at prompting. It's about being good at orchestration. Think less like a user of AI, more like a conductor of an AI orchestra."
Call to Action:
- Install markdown-agent: `npm install -g @johnlindquist/markdown-agent`
- Start with one non-interactive agent
- Graduate to piping agents together
- Master parallel execution with worktrees
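A possible first step after installing, assuming a simple agent file you've written yourself (here called `hello.copilot.md`, an illustrative name):

```bash
# One agent, no interaction: stdin in, markdown out
echo "Summarize what this repo does" | ma hello.copilot.md > summary.md
```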
implement.copilot.md:

```markdown
---
command: copilot
model: gpt-4o
non-interactive: true
---
Implement the feature described in SPEC.md.
Create all necessary files.
Run tests to verify.
Commit with a descriptive message.
```

orchestrate.sh:

```bash
#!/bin/bash
FEATURES=("$@")
mkdir -p logs

for feature in "${FEATURES[@]}"; do
  (
    cd ".worktrees/$feature" || exit 1
    ma implement.copilot.md 2>&1 | tee "../../logs/$feature.log"
    gh pr create --title "feat: $feature" --body "Automated by ma"
  ) &
done
wait

echo "All features complete. Running aggregated review..."
ma review-all.claude.md --prs "${FEATURES[@]}"
```

| Section | Duration |
|---|---|
| Hook | 1 min |
| Single to Multi-Agent | 3 min |
| Worktree Pattern | 4 min |
| Agent Specialization | 3 min |
| Coordination Patterns | 4 min |
| Fan Out vs Sequential | 2 min |
| Live Demo | 3 min |
| Closing | 1 min |
| Total | 21 min |
Adjust by trimming coordination patterns or expanding demo as needed.