Dex and his co-presenter discuss context engineering for AI from both the user and the builder perspective. They cover using Claude for non-code tasks, building custom workflows, and future implications, emphasizing user-centric design.
- Context engineering is vital for all AI users and builders, enhancing agent performance.
- Becoming a better agent user improves agent builder skills by revealing UX patterns.
- Frustration arises when an LLM goes off track mid-response, forcing you to cancel and edit the message.
- LLMs are stateless functions; quality of input tokens dictates output answer quality.
- Context window includes system messages, RAG data, user messages, instructions, and memories.
- Agent builders must provide users insight into how their AI tools function.
- Hiding AI implementation details makes it harder for users to leverage the tool.
- Users gain amazing content creation skills by understanding feedback loops in social media.
- Building coding agents for technical users necessitates thoughtful context engineering strategies.
- Claude Code excels at non-coding tasks, capable of writing its own scripts effectively.
- Agent builders must carefully consider their users' technical sophistication spectrum.
- Effective UX intervenes in context building, making it seamless for end users.
- With more engineering work, users can be given full control without the experience feeling overwhelming.
- A Git repo full of markdown documents can manage non-technical things, like a CRM.
- Markdown files with front matter act as a flexible, schema-free knowledge graph (see the sketch after this list).
- Improved models and agent harnesses now enable capabilities that were previously impossible.
- Deterministic context packing scales across thousands of contacts efficiently, preventing overload.
- Front matter in markdown files provides quick, relevant information for agents, reducing noise.
- Giving the exact answer or highly relevant data to the model is optimal.
- Minimizing irrelevant noise for the model is crucial for effective context engineering.
- Claude can build programmatic ways to work with information, handing back new tools.
- SOPs (Standard Operating Procedures) written as prompts guide AI agents for daily tasks.
- Automated standup updates, backed by Git PRs, save time and ensure accurate reporting.
- Software 1.0 (pure code) and LLM-driven software (flexible, sometimes breaks) coexist.
- Tolerance for software inconsistency dictates whether to use traditional code or LLMs.
- The definition of "engineering" constantly evolves, encompassing new LLM-driven approaches.
- Just as you write Python for everyday tasks and drop to C for performance, you reach for LLM-driven behavior for flexibility and hand-written code for speed and reliability.
- Pinpointing AI-driven parts allows baking deterministic workflows into faster, reliable code.
- Consolidating multiple API calls into one script saves money and accelerates agent operations.
- Models perform better with higher-level tools like "read file," reducing mental effort.
- Agents and workflows are composable building blocks, with infinite depth depending on complexity.
- Claude Code struggles with large files; engineers should optimize codebase for smaller files.
- The hardest part of using Claude Code is efficiently utilizing its planning and research phases.
- Complex coding problems can be solved faster with Claude, despite requiring cleanup.
- Markdown files as a database offer a lossy, flexible V0 that can be optimized later.
- Deterministic context packing ensures core base context for models, streamlining workflows.
- CLAUDE.md content is injected with the caveat that it "may not be relevant," so the model can choose to ignore it.
- Understanding an agent's restrictions is crucial; Claude Code, for example, can run background tasks but cannot declare dependencies between them.
- Proxying Claude requests reveals the system prompt, tool descriptions, and sub-agent instructions.
- Debugging an agent means inspecting the underlying API traffic when its current behavior falters.
- Performance engineering involves focusing optimization efforts on critical system parts.
- Lossy AI output is acceptable for tasks like changelogs or contact history, not requiring perfection.
- Working backwards from desired workflows helps define and refine AI agent tasks.
- Software and agentic loops improve over time, extending capabilities with feedback.
- Project management software like GitHub and Linear still provide essential collaboration spaces.
- Compacting context, either manually or via tools, prevents context window overload.
- Using a file as the source of truth lets you restart the context from scratch, which is extremely powerful for steering agents.
- MCP (Model Context Protocol) servers are essentially API calls; if they are built poorly, agents won't work well.
- Completely taking over user context lifts the floor but significantly lowers the ceiling.
- Better agent users become better agent builders, understanding practical UX limitations.
- LLMs, as stateless functions, demand high-quality input tokens for optimal output generation.
- Agent builders must expose some system insights for users to effectively "hack" and improve outcomes.
- Claude Code excels beyond coding, leveraging its scripting ability for diverse non-technical tasks.
- Structured markdown files can simulate databases, offering flexible, evolving knowledge graphs for agents.
- Deterministic context packing is key for scaling, ensuring agents receive only relevant, concise information.
- Front matter in documents strategically prioritizes critical information for faster agent decision-making.
- SOPs as prompts allow systematic automation of manual tasks, becoming essential agent commands.
- Software 1.0 (code) and 3.0 (LLM-driven) are complementary, chosen based on consistency tolerance.
- Optimizing deterministic parts of AI workflows into code saves money and boosts reliability.
- Understanding an agent's internal workings, like system prompts, is vital for advanced user interaction.
- Effective context engineering focuses on optimizing critical system parts, not every single token.
- AI's lossy nature is acceptable for many tasks, prioritizing directional correctness over perfection.
- Custom context compaction is crucial; default compaction often loses the details the model most needs to attend to.
- Overriding user context improves novice experience but limits power user capabilities significantly.
- The definition of engineering is shifting; what we call AI tasks today will be engineering tomorrow.
- Composable agents and workflows provide flexible building blocks for tackling complex problems effectively.
- Prioritizing a single source of truth file enables robust agent context management and recovery.
- API wrappers improve poorly designed MCP servers, turning bad interfaces into efficient context feeders.
- Continuously improving agentic loops and tooling expands AI capabilities, even with static models.
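A minimal sketch of the markdown-plus-front-matter CRM idea referenced above: the `contacts/` directory, the field names, and the naive front-matter parsing are assumptions, not the speakers' actual setup. The point is that deterministic context packing can reduce thousands of files to one compact block.

```typescript
// Hypothetical layout: contacts/*.md, each file starting with YAML-ish front matter, e.g.
//
//   ---
//   name: Jane Doe
//   company: Acme
//   last_contact: 2025-03-01
//   status: warm
//   ---
//   Free-form notes about Jane go here...
//
// Deterministic context packing: read only the front matter of every contact and
// emit one compact line per contact, so thousands of files stay cheap to include.

import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Naive front-matter parser: grabs the block between the first two `---` lines.
function frontMatter(markdown: string): Record<string, string> {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fields: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return fields;
}

// Build the compact context block an agent receives before answering questions
// like "who should I follow up with this week?"
export function packContacts(dir = "contacts"): string {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => {
      const fm = frontMatter(readFileSync(join(dir, f), "utf8"));
      return `${fm.name ?? f} | ${fm.company ?? "?"} | last: ${fm.last_contact ?? "?"} | ${fm.status ?? "?"}`;
    })
    .join("\n");
}
```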
"Every single one of us has to think about context engineering along that loop." – Both
"I actually become a better agent builder by becoming a better agent user." – Both
"LMs are stateless functions… the quality of the tokens you put in affects the answer." – Dex
"Claude can kind of just write its own scripts." – Dex
"SOPs become your prompts and your commands." – Dex
(and more, as captured in the source text)
- Actively context engineer Claude to generate desired code output.
- Consider UX patterns when designing agents.
- Cancel and refine prompts when LLMs go off track.
- Use structured context windows for better performance.
- Maintain CRM-like data in markdown files with front matter.
- Apply deterministic context packing for large-scale use.
- Summarize long files for Claude Code’s efficiency.
- Create SOPs as prompts to automate daily tasks.
- Journal regularly so the model has memory to draw on.
- Automate standups with Git PR summarization (see the sketch after this list).
- Convert stable workflows into faster TypeScript scripts.
- Optimize codebases into smaller files for Claude Code.
- Compact context often to prevent overload.
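A hedged sketch of the standup automation item above: the `gh` invocation is real CLI usage, but the output file name and summary format are assumptions. It consolidates what would otherwise be several agent tool calls into one deterministic script, which an SOP prompt can then ask the model to polish.

```typescript
// Sketch: collect yesterday's merged PRs with the GitHub CLI and draft a standup
// update. An SOP prompt (e.g. a custom slash command) can then ask the model to
// turn this raw list into a polished message. File name and format are assumptions.

import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

interface Pr { title: string; url: string; mergedAt: string; }

function mergedPrsSince(dateIso: string): Pr[] {
  const out = execSync(
    `gh pr list --author "@me" --state merged --search "merged:>=${dateIso}" --json title,url,mergedAt`,
    { encoding: "utf8" },
  );
  return JSON.parse(out) as Pr[];
}

const since = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().slice(0, 10);
const prs = mergedPrsSince(since);

const draft = [
  `# Standup draft (${new Date().toISOString().slice(0, 10)})`,
  "",
  "## Done",
  ...prs.map((pr) => `- ${pr.title} (${pr.url})`),
  "",
  "## Next / blockers",
  "- (fill in, or let the SOP prompt ask for these)",
].join("\n");

writeFileSync("standup-draft.md", draft);
console.log(draft);
```

An SOP saved as a prompt can then simply say: run this script, then rewrite standup-draft.md in the team's reporting format.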
- LLMs are stateless functions.
- Claude Code’s system prompt is code-focused but works more broadly.
- Models improve over time, enabling emergent capabilities.
- Markdown supports deterministic metadata via front matter.
- LLM usage cost scales with the number of tokens in the context window.
- The Anthropic API supports at most four prompt-cache breakpoints per request (see the caching sketch after this list).
- Claude Code allows background tasks but not dependencies.
- Proxying Claude Code's API traffic reveals internal prompts and tool definitions (see the proxy sketch after this list).
- The definition of "engineering" has shifted over time (C → Python → LLM-driven).
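On the caching fact above, a minimal sketch with the `@anthropic-ai/sdk`: the model alias and the way the context is split are assumptions; the API itself allows up to four `cache_control` breakpoints per request, so stable, deterministically packed context is the natural thing to cache.

```typescript
// Sketch: mark stable, deterministic context (system prompt, packed CRM data) with
// cache_control breakpoints so repeated calls reuse the cached prefix. The API allows
// at most four such breakpoints per request; the model alias is an assumption.

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const packedContacts = "...deterministically packed front matter goes here...";

const response = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,
  system: [
    // Breakpoint 1: the stable system prompt / SOPs.
    { type: "text", text: "You are my CRM assistant. Follow the SOPs exactly.", cache_control: { type: "ephemeral" } },
    // Breakpoint 2: the large, rarely changing packed context.
    { type: "text", text: packedContacts, cache_control: { type: "ephemeral" } },
  ],
  messages: [{ role: "user", content: "Who should I follow up with this week?" }],
});

console.log(response.content);
```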
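A rough sketch of the proxying fact above: a pass-through server that logs the system prompt and tool names before forwarding to `api.anthropic.com`. Buffering responses instead of streaming them is a simplification for inspection only, and pointing Claude Code at the proxy via `ANTHROPIC_BASE_URL` is the assumed mechanism.

```typescript
// Sketch: a logging proxy in front of api.anthropic.com. Run it, then start Claude Code
// with ANTHROPIC_BASE_URL=http://localhost:8080 to see what the harness actually sends
// (system prompt, tool descriptions, sub-agent prompts). Responses are buffered rather
// than streamed, which is fine for inspection but not for day-to-day use.

import { createServer } from "node:http";

const UPSTREAM = "https://api.anthropic.com";

createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = Buffer.concat(chunks);

  // Log the interesting parts of each /v1/messages request.
  try {
    const parsed = JSON.parse(body.toString("utf8"));
    console.log("system:", JSON.stringify(parsed.system)?.slice(0, 400));
    console.log("tools:", (parsed.tools ?? []).map((t: { name: string }) => t.name));
  } catch {
    // Non-JSON bodies (and bodiless requests) just pass through.
  }

  // Forward upstream with the original headers, minus ones that would conflict.
  const headers = new Headers();
  for (const [key, value] of Object.entries(req.headers)) {
    if (typeof value === "string" && !["host", "content-length", "connection"].includes(key)) {
      headers.set(key, value);
    }
  }
  const upstream = await fetch(UPSTREAM + (req.url ?? "/"), {
    method: req.method,
    headers,
    body: body.length > 0 ? body : undefined,
  });

  // fetch already decompressed the body, so drop encoding/length headers before replying.
  const responseHeaders: Record<string, string> = {};
  upstream.headers.forEach((value, key) => {
    if (!["content-encoding", "content-length", "transfer-encoding"].includes(key)) {
      responseHeaders[key] = value;
    }
  });
  res.writeHead(upstream.status, responseHeaders);
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(8080, () => console.log("proxy listening on :8080"));
```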
- Boundary (YouTube channel)
- BAML (programming language)
- Human Layer (company)
- ChatGPT (LLM)
- Salesforce, Linear, Airtable (CRM and project-management tools)
- GitHub, Obsidian
- Anthropic API
- Python, TypeScript, Rust, WASM, C, C++, Assembly
- Cursor (editor), GH CLI, VS Code extension
- Superhuman (email client)
- Twitter, LinkedIn, Discord
- Claude for non-code tasks: 🦄 #20 (video)
Mastering context engineering is essential for building and effectively using advanced AI agents.
- Continuously practice context engineering for better AI results.
- Improve agent building by actively using agents.
- Expose system details so power users can optimize outcomes.
- Leverage Claude Code for both coding and non-coding tasks.
- Use markdown + Git as flexible knowledge bases.
- Build SOPs as prompts for repeatable automation.
- Write deterministic code for critical workflows.
- Break down complex problems into manageable workflows.
- Regularly compact context to save costs and improve performance.
- Maintain a single source-of-truth file for context resets (see the compaction sketch below).
- Wrap poor APIs with efficient wrappers (see the wrapper sketch below).
- Balance abstractions with user control.
- Provide free educational content to raise the bar for everyone.
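A hedged sketch of the single source-of-truth recommendation above: the `PROGRESS.md` name, the handoff prompt, and the model alias are assumptions; the idea is to compact a long session into a file that a fresh context can resume from.

```typescript
// Sketch: custom compaction into a source-of-truth file. Instead of relying on the
// harness's default compaction, ask the model for a structured handoff summary and
// write it to PROGRESS.md; a fresh session starts by reading only that file.
// File name, prompt wording, and model alias are assumptions.

import Anthropic from "@anthropic-ai/sdk";
import { writeFileSync } from "node:fs";

const client = new Anthropic();

export async function compactToFile(transcript: string, path = "PROGRESS.md"): Promise<void> {
  const response = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 2048,
    system:
      "Compact this working session into a handoff document with sections: " +
      "Goal, Decisions made, Current state, Open questions, Next steps. Be terse but lossless about decisions.",
    messages: [{ role: "user", content: transcript }],
  });

  const text = response.content
    .filter((block) => block.type === "text")
    .map((block) => (block as { type: "text"; text: string }).text)
    .join("\n");

  writeFileSync(path, text);
}
```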
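A small sketch of the "wrap poor APIs" recommendation: the tracker endpoint, auth header, and field names are hypothetical; the pattern is one call in, a few compact lines out, instead of dumping raw JSON into the agent's context.

```typescript
// Sketch: wrapping a verbose API/MCP tool with a thin function that returns only
// what the agent needs. The endpoint, auth header, and field names are hypothetical;
// the pattern is: one call in, a few compact lines out, instead of raw JSON dumps.

interface CompactIssue { id: string; title: string; state: string; }

export async function listOpenIssuesCompact(teamId: string): Promise<string> {
  const resp = await fetch(`https://api.example-tracker.dev/teams/${teamId}/issues?state=open`, {
    headers: { Authorization: `Bearer ${process.env.TRACKER_TOKEN ?? ""}` },
  });
  if (!resp.ok) throw new Error(`tracker API failed: ${resp.status}`);

  // Hypothetical response shape; keep only three fields per issue.
  const issues = (await resp.json()) as Array<{ identifier: string; title: string; state: { name: string } }>;
  const compact: CompactIssue[] = issues.map((i) => ({ id: i.identifier, title: i.title, state: i.state.name }));

  // One line per issue is cheap to drop into an agent's context.
  return compact.map((i) => `${i.id} [${i.state}] ${i.title}`).join("\n");
}
```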