o3 with OpenCode via LLM Gateway

Model Overview

  • Model ID: o3
  • Provider: OpenAI
  • Strengths: Advanced multi-step reasoning, complex problem solving
  • Context: 200K tokens

Configuration

Add to ~/.config/opencode/config.json:

{
  "provider": "openai-compatible",
  "apiKey": "your-llmgateway-api-key",
  "baseUrl": "https://api.llmgateway.io/v1",
  "model": "o3"
}
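
To confirm the key and endpoint work before pointing OpenCode at them, you can hit the gateway directly. This is a quick check, assuming the gateway exposes the standard OpenAI-compatible chat completions route under the baseUrl above:

curl -s https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer your-llmgateway-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "o3", "messages": [{"role": "user", "content": "ping"}]}'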

Or via environment variables:

export OPENCODE_PROVIDER="openai-compatible"
export OPENCODE_API_KEY="your-llmgateway-api-key"
export OPENCODE_BASE_URL="https://api.llmgateway.io/v1"
export OPENCODE_MODEL="o3"
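
To keep the key out of shell history, it can also be read from a file; the path here is only illustrative:

export OPENCODE_API_KEY="$(cat ~/.config/llmgateway/api-key)"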

Usage

opencode --model o3 "optimize this algorithm for O(n) complexity"
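
The prompt is an ordinary shell argument, so file contents can be inlined with command substitution; sort.py below is just a stand-in for whatever you want reviewed:

opencode --model o3 "optimize this function for O(n) complexity: $(cat sort.py)"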

Best Use Cases

  • Algorithm design
  • Complex debugging with multiple dependencies
  • Mathematical/logical code problems
  • Performance optimization analysis (see the example after this list)
  • System design decisions
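
For instance, a performance-analysis prompt using the same invocation shape as in Usage (the prompt text itself is only illustrative):

opencode --model o3 "this nested loop compares every pair of records; can it be reduced to O(n log n) with sorting or a hash map?"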

Notes

  • Responses may take longer due to extended reasoning
  • Consider increasing the request timeout in the config, for example "timeout": 120000 (see the sketch below)
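
A minimal sketch of the earlier config with the timeout added; the value is assumed to be in milliseconds, so 120000 is two minutes:

{
  "provider": "openai-compatible",
  "apiKey": "your-llmgateway-api-key",
  "baseUrl": "https://api.llmgateway.io/v1",
  "model": "o3",
  "timeout": 120000
}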

Pricing Tier

Premium - use for problems requiring deep reasoning.
