OpenClaw Agent Configuration: AGENTS.md + Context Budget Strategy

AGENTS.md

Home folder. Read SOUL.md, USER.md, and today's memory on startup. In main session, also read MEMORY.md. If BOOTSTRAP.md exists, follow it then delete it.

After Context Compaction

When you wake up with truncated history, orient before doing anything:

  1. git log --oneline -10 in relevant project dir
  2. Read today's memory/YYYY-MM-DD.md
  3. Read MEMORY.md for project pointers
  4. Only then: targeted brv query if working on a project (e.g. "current phase status for GL implementation" — NOT broad queries)
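A minimal PowerShell sketch of this orientation pass; the project path is an illustrative assumption:

git -C C:\Users\yashau\projects\current-project log --oneline -10   # recent commits in the relevant project dir (path is assumed)
$today = Get-Date -Format "yyyy-MM-dd"
Get-Content "memory\$today.md"     # today's daily notes
Get-Content "MEMORY.md"            # long-term project pointers (main session only)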

Memory

  • Daily notes: memory/YYYY-MM-DD.md — raw logs
  • Long-term: MEMORY.md — curated, main session only (security)
  • Write it down. "Mental notes" don't survive restarts. Text > Brain.
  • Periodically distill daily notes into MEMORY.md.

Safety

  • No data exfiltration. trash > rm. Ask before external actions (emails, posts, public-facing).
  • Read/organize/search freely. Ask first for anything that leaves the machine.

Group Chats

Don't share human's private stuff. Respond when mentioned or adding real value. Stay silent when banter flows fine without you. Quality > quantity.

Heartbeats

Follow HEARTBEAT.md strictly. Track checks in memory/heartbeat-state.json. Reach out for urgent items; stay quiet late at night or when nothing's new. Use cron for exact timing, heartbeats for batched periodic checks.
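A sketch of recording batched checks in memory/heartbeat-state.json; the field names are assumptions, not a defined schema:

# record when each batched check last ran (field names are illustrative)
$state = @{
    lastEmailCheck    = (Get-Date).ToString("o")
    lastCalendarCheck = (Get-Date).ToString("o")
}
$state | ConvertTo-Json | Set-Content "memory\heartbeat-state.json"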

Context Budget

Follow CONTEXT-BUDGET.md for all sub-agent work. 80k input token limit. Surgical queries. Read sections, not files. One concern per agent.

Context Budget Strategy

Never sacrifice productivity for speed. Sequential done > parallel broken.

The Problem

Sub-agent tasks were consuming 200k+ tokens on input alone. Context must stay lean so agents can actually think and work.

Token Budget Per Sub-Agent

Hard limit: 80k tokens input context.

Source                             Budget    Strategy
System prompt + workspace files    ~15k      Fixed overhead
ByteRover context                  ~10k      Max 2-3 targeted queries
Code files                         ~30k      Sections only, not full files
Task description                   ~2k       Pointers to files, not inline content
Headroom for reasoning + output    ~25k+     Non-negotiable reserve

Delegation Rules

Sequential Over Parallel

  • One sub-agent at a time. Wait for completion before spawning next.
  • Each agent gets a single, well-defined concern.
  • Chain results through files and commits, not through bloated task descriptions.
  • If an agent fails or produces bad output, the next agent can course-correct.

Task Description: Max 2k Tokens

  • Goal in 2-3 sentences
  • File paths to read (not file contents)
  • 1-2 specific brv queries to run
  • Reference plan files by path and section, don't repeat them

Bad: "Here's the full plan... [5000 words]... and here's the code... [3000 words]" Good: "Implement GL entry creation for AP payments. Read plans/PHASE-1.4-GL.md section 3. Query brv for 'AP payment GL integration'. Key files: app/services/gl_entry_service.ts, app/models/gl_entry.ts"

ByteRover Queries: Surgical

  • Specific narrow queries, never broad "tell me everything"
  • Max 2-3 queries per agent
  • If response is too large, extract what's needed and move on

File Reading: Sections, Not Wholes

  • offset and limit on every Read call
  • Read signatures/interfaces first (~50 lines), full impl only when modifying
  • Large files: top 30 lines (imports + exports), then targeted sections
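The same idea as a PowerShell sketch for shell-based reads; the file path and line offsets are illustrative:

Get-Content app\models\gl_entry.ts -TotalCount 30                        # imports + exports only
Get-Content app\models\gl_entry.ts | Select-Object -Skip 120 -First 40   # one targeted section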

Plans Stay in Files

  • Write plans to plans/ directory with clear section headers
  • Sub-agents read the relevant section, main agent doesn't paste it

Curate, Don't Dump

  • After brv query: extract the 3-5 facts needed
  • After file reads: note specific signatures/interfaces
  • Forward distilled context, not raw output

Git Discipline

Branch Per Task

  • Every new task starts on a fresh branch: task/<short-description> (e.g. task/gl-entry-service)
  • Branch off from main/current working branch
  • Merge back (or PR) when task is complete and verified
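The branch flow as plain git commands, using the example branch name above:

git checkout main
git checkout -b task/gl-entry-service   # fresh branch for this task
# ... work and commit ...
git checkout main
git merge task/gl-entry-service         # or open a PR once verified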

Commit Liberally

Sub-agents must commit as they work, not just at the end:

  • After creating/modifying each file — commit
  • After each logical unit of work — commit
  • Before any risky change — commit (so you can revert)
  • Commit messages should be descriptive: what changed and why

Pattern: write file → git add → git commit -m "..." → continue

Bad: Write 8 files, commit once at the end with "implement GL entries"
Good: Commit after each file/logical change — "add GL entry model", "add GL entry service with validation", "add GL entry routes", etc.
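A sketch of that rhythm as commands; the model and service paths reuse the earlier example, the routes path is an assumption:

git add app\models\gl_entry.ts
git commit -m "add GL entry model"
git add app\services\gl_entry_service.ts
git commit -m "add GL entry service with validation"
git add start\routes.ts
git commit -m "add GL entry routes"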

Why This Matters

  • If a sub-agent hits context limits mid-task, committed work is safe
  • Easy to track what happened via git log
  • Easy to revert specific changes without losing everything
  • Branch history shows the full story of each task

Command Output Discipline

Never run commands that dump large output into context. Every line of stdout/stderr counts against the token budget.

Banned in Sub-Agents (unless output is piped/redirected)

  • node ace migration:fresh / migration:run — migration output can be huge
  • npm test / node ace test — test runners dump hundreds of lines
  • npm install / npm run build — build output is noise
  • tsc --noEmit — type-check errors can be massive
  • git diff (without --stat) — full diffs eat context fast
  • Any command that lists/prints entire files or directories recursively

Safe Alternatives

  • Migrations: Run with 2>&1 | Select-Object -Last 5 to capture only the summary
  • Tests: Run with 2>&1 | Select-Object -Last 10 or redirect to a file and read the summary
  • Type checking: tsc --noEmit 2>&1 | Select-Object -First 20 to catch first errors only
  • Git: Use git diff --stat instead of git diff; use git log --oneline -N instead of git log
  • Install: npm install 2>&1 | Select-Object -Last 3
  • General rule: Pipe to | Select-Object -First N or | Select-Object -Last N or redirect to file

If You Need Full Output

Redirect to a file, then read only the relevant section:

# capture full output to a file instead of the console
node ace test 2>&1 > test-output.txt
# surface only the failing lines
Select-String -Path test-output.txt -Pattern "FAIL|ERROR"
# or read just the tail summary
Get-Content test-output.txt | Select-Object -Last 20

The goal: never let a single command consume more than ~50 lines of context.

Windows/PowerShell Rules

This system runs Windows + PowerShell. ALL task descriptions must specify:

  • Use ; not && to chain commands
  • Use Select-String not grep
  • Use Select-Object -First N not head -N
  • Use Get-ChildItem -Recurse not find
  • Use Remove-Item not rm
  • Paths use backslashes: C:\Users\...

Include "CRITICAL: This is Windows/PowerShell" in every sub-agent task description.

Task Decomposition Template

Task: [1 sentence]
Goal: [what success looks like]
Branch: task/<name>
Read: [2-5 file paths with section hints]
Query: [1-2 targeted brv queries]
Write: [files to create/modify]
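A filled-in example of the template, reusing the GL entry task from earlier (section numbers and queries are illustrative):

Task: Implement GL entry creation for AP payments.
Goal: Valid, balanced GL entries are written whenever an AP payment is recorded.
Branch: task/gl-entry-service
Read: plans/PHASE-1.4-GL.md (section 3), app/models/gl_entry.ts (signatures), app/services/gl_entry_service.ts
Query: "AP payment GL integration", "GL entry validation rules"
Write: app/services/gl_entry_service.ts, app/models/gl_entry.ts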

Main Session Protection

The main session (direct chat with Yashau) is the most expensive to blow — it holds all conversation context.

Rules for Main Agent

  • Never do heavy implementation work directly — always delegate to sub-agents
  • Check session_status before any multi-step operation — if past 50%, delegate instead of doing it yourself
  • Don't read large files — if you need to inspect code, spawn a sub-agent to summarize it
  • Don't run broad brv queries — keep main session context for conversation and coordination
  • If approaching compaction: flush important context to memory/YYYY-MM-DD.md and HANDOFF.md BEFORE it happens, not after

What Main Agent Should Do

  • Plan and coordinate
  • Spawn and monitor sub-agents
  • Communicate with Yashau
  • Light file reads (small configs, status files, TASKS.md)
  • Memory management

What Main Agent Should NOT Do

  • Read full source files
  • Run brv queries with large responses
  • Implement code directly
  • Any operation that could consume 10k+ tokens of output

Live Context Monitoring

Sub-agents should call session_status to check context usage:

  • Before heavy reads: Check remaining budget before loading large files
  • Mid-task checkpoint: If past 50% context, evaluate whether to continue or wrap up
  • At 60%+: Stop. Commit what you have. Report back with what's left to do.
  • Main agent checks session_status before spawning to ensure its own session has room for the response.

Handoff Protocol

When a sub-agent hits 60% context or finishes its slice:

  1. Commit all work
  2. brv curate + brv push what was learned/built
  3. Write/update HANDOFF.md in the project dir:
    • What's done
    • What's left
    • Any gotchas or blockers
  4. Update TASKS.md checkboxes
  5. Report back to main agent

Next agent reads HANDOFF.md first — no re-discovery needed.
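A minimal HANDOFF.md sketch following that structure (contents are illustrative):

# HANDOFF.md - GL Entry Service
Branch: task/gl-entry-service
Done: model and service layer with validation, committed
Left: routes/controller, integration with existing AP module
Gotchas: entries must balance debits and credits before save (see plans/PHASE-1.4-GL.md section 3)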

Rollback Protocol

If a sub-agent produces broken code:

  • Branch isolates the damage — main is untouched
  • git revert specific commits, or abandon the branch entirely
  • Never force-fix on a polluted branch — start fresh if needed
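As git commands (the commit hash and branch names are placeholders):

git revert a1b2c3d                        # undo one specific bad commit on the task branch
# or abandon the branch entirely and restart clean
git checkout main
git branch -D task/gl-entry-service
git checkout -b task/gl-entry-service-v2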

Task Queue

For multi-step work, maintain TASKS.md in the project dir:

# TASKS.md — [Feature Name]
Branch: task/feature-name

- [x] Create model with validations
- [x] Add service layer
- [ ] Add routes and controller
- [ ] Integration with existing AP module
- [ ] Cross-phase audit

Sub-agents check off completed items. Main agent reads this to know state at a glance.

Curate As You Go

brv curate + brv push after each meaningful milestone, not just at task end:

  • After implementing a service — curate its interface and design decisions
  • After discovering a gotcha — curate it immediately
  • If the agent dies, the knowledge is already saved

Pre-Flight Estimate

Before spawning a sub-agent, estimate input tokens:

Fixed overhead (~15k tokens):

  • System prompt + OpenClaw runtime: ~10k
  • Workspace files (AGENTS, SOUL, USER, etc.): ~5k

Variable (estimate before spawning):

  • Task description: count chars / 4
  • Files the agent will read: check sizes with (Get-Item file).Length / 4
  • ByteRover queries: assume ~2-5k tokens per query response

Quick check script:

# Estimate tokens for files the sub-agent will read
$files = @("path/to/file1.ts", "path/to/file2.ts")
$total = 15000  # fixed overhead
$files | ForEach-Object { $s = (Get-Item $_).Length; $t = [math]::Round($s/4); Write-Host "$_: ~$t tokens"; $total += $t }
$total += 5000  # brv query budget
Write-Host "Estimated total: ~$total / 80000 tokens"

If estimate exceeds 60k, split the task. Leave 20k+ headroom for reasoning and output.

Monitoring

  • If a sub-agent reports context issues, slice the task smaller
  • Track which task types consume most context in daily notes
  • Periodically trim MEMORY.md and daily files older than 7 days
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment