@fat-tire · Last active May 22, 2025 05:34
{
  "models": [
    {
      "name": "Gemma-3n-E2B-it-int4",
      "modelId": "google/gemma-3n-E2B-it-litert-preview",
      "modelFile": "gemma-3n-E2B-it-int4.task",
      "description": "Preview version of [Gemma 3n E2B](https://ai.google.dev/gemma/docs/gemma-3n) ready for deployment on Android using the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). The current checkpoint only supports text and vision input, with a 4096-token context length.",
      "sizeInBytes": 3136226711,
      "version": "20250520",
      "llmSupportImage": true,
      "defaultConfig": {
        "topK": 64,
        "topP": 0.95,
        "temperature": 1.0,
        "maxTokens": 4096,
        "accelerators": "cpu,gpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab", "llm_ask_image"]
    },
    {
      "name": "Gemma-3n-E4B-it-int4",
      "modelId": "google/gemma-3n-E4B-it-litert-preview",
      "modelFile": "gemma-3n-E4B-it-int4.task",
      "description": "Preview version of [Gemma 3n E4B](https://ai.google.dev/gemma/docs/gemma-3n) ready for deployment on Android using the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference). The current checkpoint only supports text and vision input, with a 4096-token context length.",
      "sizeInBytes": 4405655031,
      "version": "20250520",
      "llmSupportImage": true,
      "defaultConfig": {
        "topK": 64,
        "topP": 0.95,
        "temperature": 1.0,
        "maxTokens": 4096,
        "accelerators": "cpu,gpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab", "llm_ask_image"]
    },
    {
      "name": "Gemma3-1B-IT q4",
      "modelId": "litert-community/Gemma3-1B-IT",
      "modelFile": "gemma3-4b-it-int4-web.task",
      "description": "A variant of [google/Gemma-3-1B-IT](https://huggingface.co/google/Gemma-3-1B-IT) with 4-bit quantization ready for deployment on Android using the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference)",
      "sizeInBytes": 2559442944,
      "version": "d0c59ed7e8fff9122d0bff4392ed37dc6bd98067a0fb31458ee58dd7f2442510",
      "defaultConfig": {
        "topK": 64,
        "topP": 0.95,
        "temperature": 1.0,
        "maxTokens": 32768,
        "accelerators": "gpu,cpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab"]
    },
    {
      "name": "Qwen2.5-1.5B-Instruct q8",
      "modelId": "litert-community/Qwen2.5-1.5B-Instruct",
      "modelFile": "Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv1280.task",
      "description": "A variant of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) with 8-bit quantization ready for deployment on Android using the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference)",
      "sizeInBytes": 1625493432,
      "version": "20250514",
      "defaultConfig": {
        "topK": 40,
        "topP": 0.95,
        "temperature": 1.0,
        "maxTokens": 1280,
        "accelerators": "cpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab"]
    },
    {
      "name": "Hammer2.1-1.5b q8",
      "modelId": "litert-community/Hammer2.1-1.5b",
      "modelFile": "Hammer2.1-1.5b_multi-prefill-seq_q8_ekv1280.task",
      "description": "Hammer 2.1 is a model with strong function-calling capability. These models are based on the Qwen 2.5 Coder series and use function masking and other advanced techniques. The Hammer 2.1 series brings significant enhancements while maintaining Hammer 2.0's single-turn interaction and further strengthening other capabilities.",
      "sizeInBytes": 1625493432,
      "version": "4e4a594e06ead9ad93e5a09a60eeea932561136cef0edd572d258144b42de6a2",
      "defaultConfig": {
        "topK": 40,
        "topP": 0.95,
        "temperature": 0.0,
        "maxTokens": 1280,
        "accelerators": "gpu,cpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab"]
    },
    {
      "name": "Phi-4-mini-instruct q8",
      "modelId": "litert-community/Phi-4-mini-instruct",
      "modelFile": "Phi-4-mini-instruct_multi-prefill-seq_q8_ekv1280.task",
      "description": "Phi-4-mini-instruct is a lightweight open model built on synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-4 model family and supports a 128K-token context length.",
      "sizeInBytes": 3944275882,
      "version": "e494f9e827fbf47ac271d67dde77308f4a7683ad1ff630ab5bec926f17573b5f",
      "defaultConfig": {
        "topK": 40,
        "topP": 0.95,
        "temperature": 0.0,
        "maxTokens": 1280,
        "accelerators": "cpu"
      },
      "taskTypes": ["llm_chat", "llm_prompt_lab"]
    }
  ]
}
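
This file matches the model-allowlist shape used by apps built on the MediaPipe LLM Inference API: each entry pairs a Hugging Face repo (`modelId`) with one downloadable `.task` file (`modelFile`) plus default sampling settings. Below is a minimal Kotlin sketch of how such a file could be parsed and turned into download URLs. `ModelEntry`, `parseAllowlist`, and the local filename `model_allowlist.json` are hypothetical names for illustration; the `resolve/main` URL pattern is Hugging Face's standard path for fetching a file from a model repo.

```kotlin
import org.json.JSONObject
import java.io.File

// Hypothetical value type mirroring the fields in the JSON above.
data class ModelEntry(
    val name: String,
    val modelId: String,
    val modelFile: String,
    val sizeInBytes: Long,
    val maxTokens: Int,
)

fun parseAllowlist(path: String): List<ModelEntry> {
    val root = JSONObject(File(path).readText())
    val models = root.getJSONArray("models")
    return (0 until models.length()).map { i ->
        val m = models.getJSONObject(i)
        val cfg = m.getJSONObject("defaultConfig")
        ModelEntry(
            name = m.getString("name"),
            modelId = m.getString("modelId"),
            modelFile = m.getString("modelFile"),
            sizeInBytes = m.getLong("sizeInBytes"),
            maxTokens = cfg.getInt("maxTokens"),
        )
    }
}

fun main() {
    parseAllowlist("model_allowlist.json").forEach { m ->
        // Standard Hugging Face file URL: /<repo>/resolve/main/<file>
        val url = "https://huggingface.co/${m.modelId}/resolve/main/${m.modelFile}"
        println("${m.name} (%.2f GB) -> $url".format(m.sizeInBytes / 1e9))
    }
}
```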
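And here is a sketch of wiring one entry's `defaultConfig` into the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference) on Android, using the Gemma 3n E2B values above. The model path is an assumption (wherever the `.task` file was downloaded); `setModelPath`, `setMaxTokens`, `setTopK`, and `setTemperature` are documented `LlmInferenceOptions` builder methods, while `topP` and the `accelerators` field are handled differently across API versions, so they are left out here.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun createEngine(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Assumed download location for the Gemma 3n E2B task file.
        .setModelPath("/data/local/tmp/llm/gemma-3n-E2B-it-int4.task")
        .setMaxTokens(4096)   // defaultConfig.maxTokens
        .setTopK(64)          // defaultConfig.topK
        .setTemperature(1.0f) // defaultConfig.temperature
        .build()
    return LlmInference.createFromOptions(context, options)
}

// Usage: val answer = createEngine(context).generateResponse("Why is the sky blue?")
```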