Last active: November 19, 2025
Proposed multi-provider capabilities and pricing API. Blog post: https://paolino.me/standard-api-llm-capabilities-pricing/
```yaml
# This is a YAML file so I can have comments, but the API should obviously return an array of models in JSON.
# Legend:
#   Required: this is important to have in v1.
#   Optional: this is still important but can wait for v2.
id: gpt-4.5-preview # Required, will match it with the OpenAI API
name: GPT-4.5 Preview # Required
provider: openai # Required
family: gpt45 # Optional, each model page is a family for OpenAI models
context_window: 128000 # Required
max_output_tokens: 16384 # Required
knowledge_cutoff: 20231001 # Optional
modalities:
  input: # Sort arrays alphabetically to make diffs consistent
    - audio
    - image
    - text
  output:
    - audio
    - embeddings
    - image
    - moderation
    - text
capabilities:
  - streaming # Optional
  - function_calling # Required
  - structured_output # Required
  - predicted_outputs # Optional
  - distillation # Optional
  - fine_tuning # Optional
  - batch # Required
  - realtime # Optional
  - image_generation # Required
  - speech_generation # Required
  - transcription # Required
  - translation # Optional
  - citations # Optional - from Anthropic
  - reasoning # Optional - called Extended Thinking in Anthropic's lingo
pricing:
  text_tokens:
    standard:
      input_per_million: 75.0 # Required
      cached_input_per_million: 37.5 # Required
      output_per_million: 150.0 # Required
      reasoning_output_per_million: 0 # Optional
    batch:
      input_per_million: 37.5 # Required
      output_per_million: 75.0 # Required
  images:
    standard:
      input: 0.0 # Optional
      output: 0.0 # Optional
    batch:
      input: 0.0 # Optional
      output: 0.0 # Optional
  audio_tokens:
    standard:
      input_per_million: 0.0 # Optional
      output_per_million: 0.0 # Optional
    batch:
      input_per_million: 0.0 # Optional
      output_per_million: 0.0 # Optional
  embeddings:
    standard:
      input_per_million: 0.0 # Required
    batch:
      input_per_million: 0.0 # Required
```
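As the comment at the top of the file notes, the API itself would return an array of models in JSON rather than YAML. A sketch of how the same entry might look as one element of that array (field values copied from the YAML above; the `capabilities` and `pricing` sections are abbreviated for brevity):

```json
[
  {
    "id": "gpt-4.5-preview",
    "name": "GPT-4.5 Preview",
    "provider": "openai",
    "family": "gpt45",
    "context_window": 128000,
    "max_output_tokens": 16384,
    "knowledge_cutoff": 20231001,
    "modalities": {
      "input": ["audio", "image", "text"],
      "output": ["audio", "embeddings", "image", "moderation", "text"]
    },
    "capabilities": ["streaming", "function_calling", "structured_output"],
    "pricing": {
      "text_tokens": {
        "standard": {
          "input_per_million": 75.0,
          "cached_input_per_million": 37.5,
          "output_per_million": 150.0
        },
        "batch": {
          "input_per_million": 37.5,
          "output_per_million": 75.0
        }
      }
    }
  }
]
```

Since JSON has no comments, the Required/Optional annotations would live in the spec document rather than in the API response.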
Author:
Thank you so much for the info @MadBomber!

Author:
Updated the spec taking inspiration from OpenRouter @raznem @MadBomber
@crmne Any updates on this? That would be a great resource for our LLM client/framework for Smalltalk.
The spec looks very helpful! Specifically for OpenAI, but perhaps generalizable, the following additions would be helpful but not required:
- cached token prices for audio/image input tokens
- availability in endpoints (eg, chat completions, responses, realtime, ...)
- default rate limits per tier (though actual rate limits can vary and are provided by responses from the official API)
- resolved model aliases (e.g., `gpt-realtime` currently points to `gpt-realtime-2025-08-28`)
- maybe also the performance/speed assessments from the official model pages
It would be great to hear whether this idea has made any progress or how I could maybe contribute!
I generally get this kind of information from OpenRouter's (OR) model API. One of the areas where I think their payload is weak is in the separation of model capabilities. I like the way you are proposing to separate capabilities into individual text, image, and audio modalities. For multi-modal models I would expect each "mode" to be true. You might consider capabilities as being bidirectional, for example text->image and/or image->text; the same goes for audio. Does it support audio in and text out (transcription) and/or text in and audio out (text-to-speech)?
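One possible encoding of that suggestion, as a sketch only — the `modality_pairs` key and the arrow notation are made up here for illustration and are not part of the proposed spec:

```yaml
# Hypothetical alternative: express capabilities as directed modality pairs,
# so transcription (audio->text) and speech generation (text->audio)
# are distinct entries rather than one ambiguous "audio" flag.
modality_pairs:
  - text->text
  - text->image
  - image->text
  - audio->text # transcription
  - text->audio # speech generation
```

This would make the capability matrix explicit for multi-modal models, at the cost of a longer list than the flat `input`/`output` arrays in the current draft.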
One of the areas where I think the OR payload is strong is in the model description. Here is an example: