@Hegghammer
Last active January 26, 2026 21:48
Work with Foam notes in Aider

Terminal-based agentic coding assistants like Claude Code, Codex, Opencode, and Aider are generally used to work with codebases, but nothing prevents us from using them on other types of content.

Here is an example of how to use Aider on a collection of Foam notes. The basic idea can be transposed to any combination of coding assistant and note taking system.

Basic setup

  1. Install Aider.

  2. Create a CONVENTIONS.md document and place it in the root of your Foam repo. Fill it with information about the types of notes you have, how you like to structure them, what styling conventions you follow, etc. Think of it as an extensive system prompt. If you're unsure what to include, describe your system in plain language to an LLM and get it to write a CONVENTIONS.md for you.

  3. Create an .aider.conf.yml file telling Aider to ingest the conventions file at startup:

     ```yaml
     read:
       - CONVENTIONS.md
     ```

  4. Optionally, add an .aiderignore file to exclude files and folders Aider doesn't need to see:

     ```
     .foam/
     .vscode/
     ```

  5. Navigate to the folder containing your Foam repo and launch Aider.
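If you're unsure what a CONVENTIONS.md might look like, here is a minimal, entirely hypothetical sketch; the note types, folder names, and rules are placeholders to replace with your own system:

```markdown
# Conventions for this Foam repo

## Note types
- Daily notes live in `journal/` and are named `YYYY-MM-DD.md`.
- Permanent notes live in the repo root, one idea per note.

## Style
- Start every note with a single H1 title.
- Put tags on the line below the title, using `#tag` syntax.
- Link related notes with `[[wikilinks]]`.
```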

Example use cases

  • Clean up or flesh out individual notes or sets of notes
  • Add tags and links to other notes
  • Create new notes from a conversation with the LLM, from a website, from a book, or something else
  • Grab statistics from sets of daily notes and get the LLM to graph them
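The last use case can be prepared with a small script. Here is a sketch that aggregates a numeric field from daily notes so it can be graphed; it assumes, hypothetically, that each daily note contains a line like `pomodoros: 3`, so adjust the pattern and folder name to whatever you actually track:

```python
# Aggregate a numeric field from Foam daily notes.
# Assumes (hypothetically) lines of the form "pomodoros: 3" in each note.
import re
from pathlib import Path

FIELD = re.compile(r"^pomodoros:\s*(\d+)", re.MULTILINE)

def collect_stats(daily_dir: str) -> dict:
    """Map each daily note's filename stem (the date) to its recorded value."""
    stats = {}
    for note in sorted(Path(daily_dir).glob("*.md")):
        match = FIELD.search(note.read_text(encoding="utf-8"))
        if match:
            stats[note.stem] = int(match.group(1))
    return stats

# Example: collect_stats("journal") -> {"2024-01-01": 3, ...}
```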

Model considerations

  • There is an Aider leaderboard that ranks models by code-editing performance; it is a useful starting point when choosing a model
  • For note manipulation you probably don't need a very powerful model; a large context window is more important
  • For personal notes, you probably want to use a locally hosted model
  • Mistral models don't work well with Aider
  • If you don't mind proprietary models, gemini-2.5-flash is fast, cheap, and has a 1M context window
  • Among open source models, qwen3:4b and qwen3:30b work well and have large context windows (256k). Note that you may need to configure the model not to use thinking mode and to override your LLM backend's token limit. You do this by creating an .aider.model.settings.yml file and adding this:

    ```yaml
    - name: ollama/qwen3:30b # or ollama/qwen3:4b
      streaming: false
      edit_format: diff
      use_repo_map: true
      weak_model_name: ollama/qwen3:4b
      examples_as_sys_msg: true
      system_prompt_prefix: "/no_think "
      extra_params:
        num_ctx: 131072
    ```
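For reference, launching Aider against a locally hosted Ollama model looks roughly like this; the repo path is a placeholder, and the default Ollama port is assumed (see the Aider docs for your backend):

```shell
# Point Aider at a locally running Ollama server (default port shown)
export OLLAMA_API_BASE=http://127.0.0.1:11434
cd ~/foam-notes            # your Foam repo
aider --model ollama/qwen3:30b
```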

Other tips

  • With luck, you might be able to fit all your notes in the model's context window. To get a rough token count for the collection (or a subfolder of it), navigate to the directory and run:
    • Mac/Linux: `echo "$(( $(find . -type f -name "*.md" -exec cat {} + | wc -c) / 4 )) tokens (approx)"`
    • Windows (PowerShell): `"$([math]::Floor((gci *.md -Recurse | gc -Raw | measure -Char).Characters / 4)) tokens (approx)"`
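Both one-liners use the same rough heuristic of four characters per token. If you'd rather have one cross-platform command, a minimal Python sketch of the same calculation:

```python
# Estimate the token count of all Markdown files under a folder,
# using the rough heuristic of ~4 characters per token.
from pathlib import Path

def approx_tokens(folder: str) -> int:
    chars = sum(len(p.read_text(encoding="utf-8", errors="ignore"))
                for p in Path(folder).rglob("*.md"))
    return chars // 4

# Example: print(f"{approx_tokens('.')} tokens (approx)")
```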