- Model ID: gemini-2.0-flash
- Provider: Google
- Strengths: Fast responses, cost-effective, good for iterations
- Context: 1M tokens
Add to ~/.config/opencode/config.json:
```json
{
  "provider": "openai-compatible",
  "apiKey": "your-llmgateway-api-key",
  "baseUrl": "https://api.llmgateway.io/v1",
  "model": "gemini-2.0-flash"
}
```

Or via environment variables:
```shell
export OPENCODE_PROVIDER="openai-compatible"
export OPENCODE_API_KEY="your-llmgateway-api-key"
export OPENCODE_BASE_URL="https://api.llmgateway.io/v1"
export OPENCODE_MODEL="gemini-2.0-flash"
```

Then invoke opencode with the model:

```shell
opencode --model gemini-2.0-flash "fix the syntax error on line 42"
```

Well suited for:
- Rapid prototyping
- Quick syntax questions
- Simple code completions
- Iterative development with fast feedback
- Large file analysis (1M context)
Budget-friendly - ideal for high-volume, simpler tasks.
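Outside of opencode, the same credentials work with any OpenAI-compatible client. The sketch below builds a chat-completions request against the gateway; it assumes the `/v1/chat/completions` endpoint follows the OpenAI request schema (the `baseUrl` above suggests this, but the exact shape is an assumption, not confirmed by these docs):

```python
import json
import os
import urllib.request

# Base URL and key from the config above; the env var fallback mirrors
# the OPENCODE_* setup shown earlier.
BASE_URL = "https://api.llmgateway.io/v1"
API_KEY = os.environ.get("OPENCODE_API_KEY", "your-llmgateway-api-key")

# Assumed OpenAI-compatible chat-completions payload.
payload = {
    "model": "gemini-2.0-flash",
    "messages": [
        {"role": "user", "content": "fix the syntax error on line 42"}
    ],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(request) would send the call; it is left out
# here so the sketch runs without a live API key.
print(payload["model"])
```

Any OpenAI SDK pointed at the same `baseUrl` and key should behave equivalently.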