@iam-veeramalla
Last active March 16, 2026 16:17
Claude Code integration with Ollama to use local models

Run Claude with the power of Local LLMs using Ollama

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Pull the Model

ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
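A quick sanity check before moving on; a minimal sketch assuming the standard `ollama` CLI, where `ollama list` prints the locally pulled models (swap `MODEL` for `glm-4.7-flash` if you pulled that instead):

```shell
# Sketch: verify the ollama binary is on PATH and the model was pulled.
MODEL="gpt-oss:20b"
if command -v ollama >/dev/null 2>&1 && ollama list 2>/dev/null | grep -q "$MODEL"; then
  READY="yes"
else
  READY="no"
fi
echo "ollama + $MODEL ready: $READY"
```

If this reports `no`, re-run the install script and `ollama pull` steps above before continuing.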

Install Claude

curl -fsSL https://claude.ai/install.sh | bash

Run Claude with Ollama

ollama launch claude --model glm-4.7-flash # or ollama launch claude --model gpt-oss:20b
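Before launching, it can help to confirm the local Ollama server is actually reachable; a sketch assuming the default listen address `127.0.0.1:11434`:

```shell
# Sketch: probe the default Ollama endpoint; if the probe fails,
# start the server with `ollama serve` and try again.
if curl -fsS http://127.0.0.1:11434/ >/dev/null 2>&1; then
  SERVER="up"
else
  SERVER="down"
fi
echo "ollama server is $SERVER"
```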

@nipodemus

I tried it, but Claude can't see the local filesystem, not even the files in the directory I run it from.

I had the same problem. Check /permissions; by default almost all usable tools are denied. I added Bash to the Allow rules, for example, and file access started working.
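For reference, the Allow rules mentioned above can also be set in a project-level settings file; a minimal sketch assuming Claude Code's `.claude/settings.json` location and `permissions.allow` schema (run /permissions inside the session to confirm the rule names your version accepts):

```shell
# Sketch: write a project settings file that allows the Bash, Read, Write,
# and Edit tools. The path and JSON schema are assumptions based on
# Claude Code's documented settings format, not something from this gist.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": ["Bash", "Read", "Write", "Edit"]
  }
}
EOF
echo "wrote .claude/settings.json"
```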

@medake commented Mar 16, 2026

Hi @iam-veeramalla @ShubhmPatil @chinthakindi-saikumar, I am running Windows with PowerShell and set up Ollama with the local model qwen3.5:9b. I am not able to create, read, or write files in the project directory. After a lot of searching, I found that Claude's agent capabilities (filesystem tools, project scanning) exist only in the Claude Code CLI, which requires Anthropic authentication and billing. Under Ollama-only execution, Claude behaves as a plain LLM without tool access, so automated codebase analysis is not possible. How did it work in your case?
