@steebchen
Created January 25, 2026 17:28
Gemini 2.0 Flash with OpenCode via LLM Gateway

Model Overview

  • Model ID: gemini-2.0-flash
  • Provider: Google
  • Strengths: Fast responses, cost-effective, well suited to rapid iteration
  • Context window: 1M tokens

Configuration

Add to ~/.config/opencode/config.json:

{
  "provider": "openai-compatible",
  "apiKey": "your-llmgateway-api-key",
  "baseUrl": "https://api.llmgateway.io/v1",
  "model": "gemini-2.0-flash"
}
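Before launching OpenCode, it can help to sanity-check that the config file is well-formed JSON and actually names the model. A minimal sketch (the config path follows the instruction above; `python3 -m json.tool` is used here only because it is broadly available, not because OpenCode requires it):

```shell
# Validate the OpenCode config: parseable JSON, and the model field is set.
CONFIG="$HOME/.config/opencode/config.json"
python3 -m json.tool "$CONFIG" > /dev/null && echo "valid JSON"
grep -q '"model": "gemini-2.0-flash"' "$CONFIG" && echo "model set"
```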

Or via environment variables:

export OPENCODE_PROVIDER="openai-compatible"
export OPENCODE_API_KEY="your-llmgateway-api-key"
export OPENCODE_BASE_URL="https://api.llmgateway.io/v1"
export OPENCODE_MODEL="gemini-2.0-flash"
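If you take the environment-variable route, a quick check before starting a session can catch a missing or empty variable. A minimal sketch; the variable names are exactly those from the export lines above:

```shell
# Report which of the required OpenCode variables are set in the
# current environment. printenv returns empty output for unset names.
for var in OPENCODE_PROVIDER OPENCODE_API_KEY OPENCODE_BASE_URL OPENCODE_MODEL; do
  if [ -z "$(printenv "$var")" ]; then
    echo "missing: $var"
  else
    echo "ok: $var"
  fi
done
```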

Usage

opencode --model gemini-2.0-flash "fix the syntax error on line 42"
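Since the gateway exposes an OpenAI-compatible base URL, the same model should also be reachable directly with the standard chat-completions request shape. A sketch that builds and prints the request body; the /v1/chat/completions path is an assumption based on the OpenAI-compatible base URL above, so check the LLM Gateway docs for the exact route before relying on it:

```shell
# Build the request body an OpenAI-compatible endpoint expects.
# Only printed here; the curl line to actually send it is commented out.
MODEL="gemini-2.0-flash"
BODY=$(cat <<EOF
{
  "model": "$MODEL",
  "messages": [
    {"role": "user", "content": "fix the syntax error on line 42"}
  ]
}
EOF
)
echo "$BODY"
# To send it (requires a valid key; endpoint path assumed, see above):
# curl -s https://api.llmgateway.io/v1/chat/completions \
#   -H "Authorization: Bearer $OPENCODE_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```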

Best Use Cases

  • Rapid prototyping
  • Quick syntax questions
  • Simple code completions
  • Iterative development with fast feedback
  • Large file analysis (1M context)

Pricing Tier

Budget-friendly: ideal for high-volume, simpler tasks where per-token cost matters.
