curl -fsSL https://ollama.com/install.sh | sh
ollama pull glm-4.7-flash # or gpt-oss:20b (for better performance)
curl -fsSL https://claude.ai/install.sh | bash
ollama launch claude --model glm-4.7-flash # or: ollama launch claude --model gpt-oss:20b
Hey guys, I tried the same setup but with a different model, qwen2.5-coder:7b.
I tried it with the Continue extension in VS Code and also with Claude, but in both cases the output comes back only as JSON.
Is this something related to the model?