MiniMax M2 with OpenCode via LLM Gateway

Model Overview

  • Model ID: minimax-m2
  • Provider: MiniMax
  • Strengths: Long context, efficient inference, strong code generation
  • Context: 1M tokens

Configuration

Add to ~/.config/opencode/config.json:

{
  "provider": "openai-compatible",
  "apiKey": "your-llmgateway-api-key",
  "baseUrl": "https://api.llmgateway.io/v1",
  "model": "minimax-m2"
}

Or via environment variables:

export OPENCODE_PROVIDER="openai-compatible"
export OPENCODE_API_KEY="your-llmgateway-api-key"
export OPENCODE_BASE_URL="https://api.llmgateway.io/v1"
export OPENCODE_MODEL="minimax-m2"
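
To confirm the key and base URL before launching OpenCode, you can send a request straight to the gateway. This is a minimal sanity check, assuming LLM Gateway exposes the standard OpenAI-compatible /chat/completions route (the path and payload follow the OpenAI API convention; they are not taken from this gist):

# One-off chat completion through the gateway (OpenAI-compatible route assumed)
curl https://api.llmgateway.io/v1/chat/completions \
  -H "Authorization: Bearer your-llmgateway-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "minimax-m2", "messages": [{"role": "user", "content": "Say hello"}]}'

A 200 response with a completion confirms both the key and the route; a 401 points at the API key, a 404 at the base URL.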

Usage

opencode --model minimax-m2 "analyze all files in this repository"
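
The same invocation pattern extends to the workflows listed below; the prompts here are illustrative, and the struct and branch names in them are hypothetical:

# Repository-wide refactor (prompt is illustrative)
opencode --model minimax-m2 "rename the legacy UserRecord struct to User across all files"

# Multi-file review (branch setup is hypothetical)
opencode --model minimax-m2 "review the changes on this branch relative to main for concurrency bugs"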

Best Use Cases

  • Large codebase analysis (1M context)
  • Repository-wide refactoring
  • Understanding legacy systems
  • Multi-file code reviews
  • Documentation generation for large projects (see the example after this list)
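
For the documentation case, the invocation looks the same as the Usage example above; the src/ path and prompt wording are hypothetical:

# Documentation pass over a large project (path and prompt are illustrative)
opencode --model minimax-m2 "generate API documentation for every public function under src/"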

Pricing Tier

Mid-tier; excellent for large-context tasks.
