@thomasgroch
Last active August 31, 2025 23:17
📚 Ollama Model Library (ordered by Min RAM)

| Model | Parameters | Size | Download Command | Min RAM |
|-------|------------|------|------------------|---------|
| Gemma 3 | 1B | 815MB | `ollama run gemma3:1b` | 🧩 4 GB |
| Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2:1b` | 🧩 4 GB |
| Moondream 2 | 1.4B | 829MB | `ollama run moondream` | 🧩 4 GB |
| Gemma 3 | 4B | 3.3GB | `ollama run gemma3` | 🧩 6 GB |
| Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` | 🧩 6 GB |
| Phi 4 Mini | 3.8B | 2.5GB | `ollama run phi4-mini` | 🧩 6 GB |
| DeepSeek-R1 | 7B | 4.7GB | `ollama run deepseek-r1` | 🖥️ 8 GB |
| Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` | 🖥️ 8 GB |
| Mistral | 7B | 4.1GB | `ollama run mistral` | 🖥️ 8 GB |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` | 🖥️ 8 GB |
| Starling | 7B | 4.1GB | `ollama run starling-lm` | 🖥️ 8 GB |
| Code Llama | 7B | 3.8GB | `ollama run codellama` | 🖥️ 8 GB |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` | 🖥️ 8 GB |
| LLaVA | 7B | 4.5GB | `ollama run llava` | 🖥️ 8 GB |
| Granite-3.3 | 8B | 4.9GB | `ollama run granite3.3` | 🖥️ 8 GB |
| Gemma 3 | 12B | 8.1GB | `ollama run gemma3:12b` | 💻 16 GB |
| Llama 3.2 Vision | 11B | 7.9GB | `ollama run llama3.2-vision` | 💻 16 GB |
| Phi 4 | 14B | 9.1GB | `ollama run phi4` | 💻 16 GB |
| Gemma 3 | 27B | 17GB | `ollama run gemma3:27b` | 🚀 32 GB |
| QwQ | 32B | 20GB | `ollama run qwq` | 🚀 32 GB |
| Llama 3.3 | 70B | 43GB | `ollama run llama3.3` | 🛑 64 GB+ |
| Llama 3.2 Vision | 90B | 55GB | `ollama run llama3.2-vision:90b` | 🛑 64 GB+ |
| Llama 4 | 109B | 67GB | `ollama run llama4:scout` | 🛑 128 GB+ |
| DeepSeek-R1 | 671B | 404GB | `ollama run deepseek-r1:671b` | 🛑 512 GB+ |
| Llama 3.1 | 405B | 231GB | `ollama run llama3.1:405b` | 🛑 512 GB+ |
| Llama 4 | 400B | 245GB | `ollama run llama4:maverick` | 🛑 512 GB+ |

Rule of thumb:

- 7B models → 🖥️ 8 GB
- 13B models → 💻 16 GB
- 33B+ models → 🚀 32 GB+
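The rule-of-thumb tiers above can be sketched as a small shell helper that maps a machine's total RAM to the largest tier it can comfortably run. This is illustrative only: the `suggest_tier` function and its thresholds come from this table, not from Ollama, and the `/proc/meminfo` read assumes Linux (on macOS you would use `sysctl hw.memsize` instead).

```shell
# Hypothetical helper based on the tiers in this gist (not part of Ollama).
# Prints the largest model tier that fits in the given RAM (in GiB).
suggest_tier() {
  ram_gib=$1
  if   [ "$ram_gib" -ge 32 ]; then echo "33B+ models"
  elif [ "$ram_gib" -ge 16 ]; then echo "13B models"
  elif [ "$ram_gib" -ge 8  ]; then echo "7B models"
  else                             echo "small models (1B-4B)"
  fi
}

# Example: read total memory on Linux and suggest a tier.
ram_kib=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
suggest_tier $(( ram_kib / 1024 / 1024 ))
```

Note that these are minimums: running a model at the edge of its tier leaves little headroom for context length or other applications.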
