Give it a job. It finds the best LLM. It runs it.
A single-file CLI tool that takes any task (generate an image, analyze a video, write code, summarize a PDF), queries the full OpenRouter model catalog in real time, picks the optimal model based on what the task actually needs, and executes it. One command, zero model selection.
There are 400+ models on OpenRouter. You shouldn't have to know which one accepts video, which one is cheapest for code, or which one can do tool calling. Describe what you want done, and the jobrunner figures out the rest.
This is different from OpenRouter's built-in `openrouter/auto`, which routes across ~6 curated models. The jobrunner searches the entire catalog and picks based on:
- Required modalities (input: text, image, video, audio, file; output: text, image, audio)
- Budget (set a max $/million tokens, or let it optimize)
- Capabilities (tool calling, structured output, reasoning)
- Context window (need 100K+? 1M? filtered automatically)
- Speed vs quality (prefer free models? cheapest? biggest context?)
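To make the filter-then-rank step concrete, here is a minimal sketch. It is not the actual `jobrunner.py` code; it assumes catalog entries follow the OpenRouter `/api/v1/models` shape, with `architecture.input_modalities`, `pricing.prompt` (in $/token), and `context_length`:

```python
def pick_model(models, need_input="video", max_input_cost=None):
    """Filter a model catalog by input modality and price, then rank.

    Cheapest model wins; ties break toward the largest context window.
    Returns the winning model id, or None if nothing qualifies.
    """
    candidates = []
    for m in models:
        mods = m.get("architecture", {}).get("input_modalities", [])
        if need_input not in mods:
            continue  # model cannot accept this input type
        cost_per_m = float(m["pricing"]["prompt"]) * 1_000_000  # $/M tokens
        if max_input_cost is not None and cost_per_m > max_input_cost:
            continue  # over budget
        candidates.append((cost_per_m, -m.get("context_length", 0), m["id"]))
    candidates.sort()  # lowest cost first, then biggest context
    return candidates[0][2] if candidates else None
```

The real ranker also scores capabilities and preferences, but the shape of the logic is the same: hard filters first, then a sort key.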
```sh
git clone https://gist.github.com/<GIST_ID>.git openrouter-jobrunner
cd openrouter-jobrunner
export OPENROUTER_API_KEY="sk-or-..."
```
```sh
# Text task → finds the best text model
./jobrunner.sh "Write a bash script that monitors disk usage and sends alerts"

# Image analysis → finds a vision model
./jobrunner.sh "Describe what's in this image" --image https://example.com/photo.jpg

# Video analysis → finds a video-capable model
./jobrunner.sh "Summarize this video" --video https://example.com/clip.mp4

# Code generation → prefers coding models, structured output
./jobrunner.sh "Write a Solidity ERC-20 token with 6 decimals" --prefer coding

# Cheapest possible → free models first
./jobrunner.sh "Translate this to French: Hello world" --budget free

# Force a specific modality filter
./jobrunner.sh "Transcribe this audio" --input-modality audio

# Max budget: $1/M input tokens
./jobrunner.sh "Analyze this codebase for security issues" --max-input-cost 1.0

# See what model it would pick without running
./jobrunner.sh "Explain quantum computing" --dry-run
```

```
┌─────────────────┐
│ Your Task       │  "Analyze this video for safety violations"
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Task Analyzer   │  Detects: needs video input, text output
│                 │  Infers: reasoning helpful, long output
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Model Catalog   │  GET /api/v1/models → 400+ models
│  (live query)   │  Filter: input_modalities includes "video"
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Ranker          │  Score by: modality fit → pricing → context
│                 │  Apply budget/preference constraints
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Execute         │  POST /api/v1/chat/completions
│                 │  model: "google/gemini-3.1-flash-lite-preview"
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Result          │  Output + metadata:
│                 │  "Used google/gemini-3.1-flash-lite-preview"
│                 │  "Cost: $0.0003 | Tokens: 847"
└─────────────────┘
```
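Stripped of the ranking logic, the two HTTP calls in the pipeline reduce to something like the sketch below (endpoints are OpenRouter's documented `GET /api/v1/models` and `POST /api/v1/chat/completions`; error handling and retries omitted):

```python
import os

import requests

API = "https://openrouter.ai/api/v1"
HEADERS = {"Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}"}


def fetch_catalog():
    """Live catalog query: returns the full model list."""
    resp = requests.get(f"{API}/models", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]


def execute(model_id, prompt):
    """Run the task on the selected model; return (content, usage)."""
    resp = requests.post(
        f"{API}/chat/completions",
        headers=HEADERS,
        json={"model": model_id, "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    resp.raise_for_status()
    body = resp.json()
    return body["choices"][0]["message"]["content"], body.get("usage", {})
```

Everything between those two calls (analysis, filtering, ranking) runs locally, so the only network cost of model selection is one catalog fetch.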
| File | What it does |
|---|---|
| `jobrunner.sh` | Entry point: parses args, orchestrates everything |
| `jobrunner.py` | Core logic: model discovery, ranking, execution |
| `requirements.txt` | Just `requests` (stdlib otherwise) |
Get one at openrouter.ai/keys. Set it:

```sh
export OPENROUTER_API_KEY="sk-or-v1-..."
```

Or create a `.env` file:

```
OPENROUTER_API_KEY=sk-or-v1-...
```
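If you go the `.env` route, a minimal loader looks like this (a sketch; the actual script may parse the file differently):

```python
import os


def load_env(path=".env"):
    """Export KEY=VALUE lines from a .env file into the process environment."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, malformed lines
                key, _, value = line.partition("=")
                # existing environment variables win over .env values
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env present; rely on the shell environment
```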
```python
from jobrunner import JobRunner

runner = JobRunner(api_key="sk-or-...")

# Find the best model without executing
match = runner.find_model(
    task="Generate a pixel art sprite sheet",
    input_modalities=["text"],
    output_modalities=["image"],
    max_input_cost=5.0,  # $/M tokens
)
print(f"Would use: {match['id']} at ${match['pricing']['prompt']}/token")

# Find and execute
result = runner.run("Write a haiku about Ethereum gas fees")
print(result.content)
print(f"Model: {result.model} | Cost: ${result.cost:.6f}")
```

Drop `SKILL.md` + `jobrunner.py` into `~/.openclaw/skills/openrouter-jobrunner/` and any agent can use it:
"Use the openrouter-jobrunner skill to find the best model for analyzing this video and run it"
| Flag | Effect |
|---|---|
| `--budget free` | Only free models (pricing = "0") |
| `--budget cheap` | Sort by lowest cost first |
| `--budget best` | Sort by capability (largest context, most features) |
| `--prefer coding` | Boost models with "code" in name/description |
| `--prefer reasoning` | Boost models that support the reasoning parameter |
| `--prefer fast` | Boost models with high `max_completion_tokens` |
| `--input-modality X` | Filter to models accepting X (text/image/video/audio/file) |
| `--output-modality X` | Filter to models outputting X (text/image/audio) |
| `--min-context N` | Minimum context window (e.g., 100000) |
| `--max-input-cost N` | Max $/M input tokens (e.g., 1.0) |
| `--dry-run` | Show the selected model + reasoning; don't execute |
| `--verbose` | Show full model-selection reasoning |
| `--json` | Output the result as JSON |
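The three `--budget` policies amount to different sort keys over the same catalog. An illustrative sketch (not the actual implementation; assumes the `/api/v1/models` schema with `pricing.prompt` in $/token and `context_length`):

```python
def budget_sort_key(model, budget="cheap"):
    """Return a sort key for one --budget policy; lower keys sort first."""
    prompt_cost = float(model["pricing"]["prompt"])
    if budget == "free":
        # free models first; paid models only as a fallback
        return (prompt_cost != 0, prompt_cost)
    if budget == "best":
        # capability-first: biggest context window wins, cost breaks ties
        return (-model.get("context_length", 0), prompt_cost)
    # default "cheap": lowest cost first
    return (prompt_cost,)
```

Keeping policies as sort keys (rather than branching logic scattered through the ranker) makes adding a new `--budget` mode a one-line change.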
- **Full catalog**: searches all 400+ OpenRouter models, not a curated subset
- **Modality-aware**: actually checks whether the model can handle your input type
- **Live pricing**: always uses current pricing from the API
- **Transparent**: tells you exactly which model it picked and why
- **Budget control**: from free to frontier, you set the ceiling
- **Zero config**: one API key, one command, done
# "I need to understand what's happening in this security camera footage"
$ ./jobrunner.sh "Analyze this security footage for unusual activity" \
--video ./footage.mp4 --prefer reasoning --verbose
π Task analysis:
Input: text + video
Output: text (analysis)
Preference: reasoning models
π Catalog: 412 models loaded
After modality filter (video input): 23 models
After preference boost (reasoning): 23 models (8 boosted)
π Selected: google/gemini-3.1-flash-lite-preview
Reason: video input β | reasoning β | $0.25/M input | 1M context
Runner-up: bytedance-seed/seed-2.0-lite ($0.25/M, 262K context)
β‘ Executing...
π Result:
[... analysis output ...]
π° Cost: $0.000847 | Tokens: 2,391 in / 1,203 out# "Generate me some code"
$ ./jobrunner.sh "Write a Python FastAPI server with JWT auth, rate limiting, and PostgreSQL" \
--prefer coding --budget cheap
π Selected: qwen/qwen3.5-9b
Reason: textβtext β | coding boost β | $0.10/M input (cheapest coding match)
β‘ Executing...
[... full FastAPI code ...]
π° Cost: $0.000312MIT β do whatever you want with it.