Models.dev is a comprehensive open-source database of AI model specifications, pricing, and capabilities.
There's no single database with information about all the available AI models. We started Models.dev as a community-contributed project to address this. We also use it internally in opencode.
You can access this data through an API.
```bash
curl https://models.dev/api.json
```
Use the Model ID field to look up any model; it's the identifier used by the AI SDK.
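As a sketch of how a lookup might work — assuming `api.json` returns an object keyed by Provider ID, where each provider has a `models` map keyed by Model ID — the snippet below uses a small hard-coded sample instead of a live request, and the sample values are illustrative:

```python
import json

# Illustrative excerpt of the assumed api.json shape: top-level keys are
# Provider IDs, each with a "models" map keyed by Model ID.
sample = json.loads("""
{
  "anthropic": {
    "id": "anthropic",
    "models": {
      "claude-sonnet-4-20250514": {
        "name": "Claude Sonnet 4",
        "cost": {"input": 3.0, "output": 15.0},
        "limit": {"context": 200000, "output": 64000}
      }
    }
  }
}
""")

def find_model(data, model_id):
    """Return (provider_id, model_record) for the first matching Model ID, or None."""
    for provider_id, provider in data.items():
        model = provider.get("models", {}).get(model_id)
        if model is not None:
            return provider_id, model
    return None

provider_id, model = find_model(sample, "claude-sonnet-4-20250514")
print(provider_id, model["name"])  # anthropic Claude Sonnet 4
```

In practice you would fetch the full `https://models.dev/api.json` payload and run the same lookup over it.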
Provider logos are available as SVG files:
```bash
curl https://models.dev/logos/{provider}.svg
```
Replace `{provider}` with the Provider ID (e.g., `anthropic`, `openai`, `google`). If we don't have a provider's logo, a default logo is served instead.
The data is stored in the repo as TOML files, organized by provider and model. Logos are stored as SVG files. This data is used to generate this page and to power the API.
We need your help keeping the data up to date.
To add a new model, start by checking if the provider already exists in the `providers/` directory. If it doesn't:
- Create a new folder in `providers/` with the provider's ID. For example, `providers/newprovider/`.
- Add a `provider.toml` with the provider details:

  ```toml
  name = "Provider Name"
  npm = "@ai-sdk/provider"                # AI SDK package name
  env = ["PROVIDER_API_KEY"]              # Environment variable keys used for auth
  doc = "https://example.com/docs/models" # Link to provider's documentation
  ```
If the provider doesn't publish an npm package but exposes an OpenAI-compatible endpoint, set the `npm` field accordingly and include the base URL:

```toml
npm = "@ai-sdk/openai-compatible"  # Use the OpenAI-compatible SDK
api = "https://api.example.com/v1" # Required with openai-compatible
```
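Putting the pieces together, a complete `provider.toml` for a hypothetical OpenAI-compatible provider might look like this (all names and URLs are illustrative):

```toml
name = "New Provider"
npm = "@ai-sdk/openai-compatible"       # No first-party SDK; use the OpenAI-compatible one
api = "https://api.newprovider.com/v1"  # Base URL, required with openai-compatible
env = ["NEWPROVIDER_API_KEY"]
doc = "https://docs.newprovider.com/models"
```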
To add a logo for the provider:

- Add a `logo.svg` file to the provider's directory (e.g., `providers/newprovider/logo.svg`)
- Use SVG format with no fixed size or colors; use `currentColor` for fills and strokes
Example SVG structure:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor">
  <!-- Logo paths here -->
</svg>
```
Create a new TOML file in the provider's `models/` directory, where the filename is the model ID:
```toml
name = "Model Display Name"
attachment = true  # or false - supports file attachments
reasoning = false  # or true - supports reasoning / chain-of-thought
tool_call = true   # or false - supports tool calling
temperature = true # or false - supports temperature control
knowledge = "2024-04"        # Knowledge-cutoff date
release_date = "2025-02-19"  # First public release date
last_updated = "2025-02-19"  # Most recent update date

[cost]
input = 3.00       # Cost per million input tokens (USD)
output = 15.00     # Cost per million output tokens (USD)
cache_read = 0.30  # Cost per million cached read tokens (USD)
cache_write = 3.75 # Cost per million cached write tokens (USD)

[limit]
context = 200_000 # Maximum context window (tokens)
output = 8_192    # Maximum output tokens

[modalities]
input = ["text", "image"] # Supported input modalities
output = ["text"]         # Supported output modalities
```
- Fork this repo
- Create a new branch with your changes
- Add your provider and/or model files
- Open a PR with a clear description
There's a GitHub Action that will automatically validate your submission against our schema to ensure:
- All required fields are present
- Data types are correct
- Values are within acceptable ranges
- TOML syntax is valid
Models must conform to the following schema, as defined in `app/schemas.ts`.
Provider Schema:

- `name`: String - Display name of the provider
- `npm`: String - AI SDK package name
- `env`: String[] - Environment variable keys used for auth
- `doc`: String - Link to the provider's documentation
- `api` (optional): String - OpenAI-compatible API endpoint. Required when using `@ai-sdk/openai-compatible` as the npm package.
Model Schema:

- `name`: String - Display name of the model
- `attachment`: Boolean - Supports file attachments
- `reasoning`: Boolean - Supports reasoning / chain-of-thought
- `tool_call`: Boolean - Supports tool calling
- `temperature`: Boolean - Supports temperature control
- `knowledge` (optional): String - Knowledge-cutoff date in `YYYY-MM` or `YYYY-MM-DD` format
- `release_date`: String - First public release date in `YYYY-MM` or `YYYY-MM-DD` format
- `last_updated`: String - Most recent update date in `YYYY-MM` or `YYYY-MM-DD` format
- `cost.input` (optional): Number - Cost per million input tokens (USD)
- `cost.output` (optional): Number - Cost per million output tokens (USD)
- `cost.cache_read` (optional): Number - Cost per million cached read tokens (USD)
- `cost.cache_write` (optional): Number - Cost per million cached write tokens (USD)
- `limit.context`: Number - Maximum context window (tokens)
- `limit.output`: Number - Maximum output tokens
- `modalities.input`: Array of strings - Supported input modalities (e.g., `["text", "image", "audio", "video", "pdf"]`)
- `modalities.output`: Array of strings - Supported output modalities (e.g., `["text"]`)
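Since the cost fields are expressed per million tokens, pricing a single request is a matter of scaling. A minimal sketch (the function name is ours; the token counts and rates are illustrative, matching the example values above):

```python
def request_cost_usd(input_tokens, output_tokens, cost_input, cost_output):
    """Cost of one request given per-million-token rates in USD."""
    return (input_tokens * cost_input + output_tokens * cost_output) / 1_000_000

# e.g., 10,000 input tokens at $3.00/M plus 2,000 output tokens at $15.00/M:
print(request_cost_usd(10_000, 2_000, 3.00, 15.00))  # 0.06
```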
See existing providers in the `providers/` directory for reference:

- `providers/anthropic/` - Anthropic Claude models
- `providers/openai/` - OpenAI GPT models
- `providers/google/` - Google Gemini models
Make sure you have Bun installed.
```bash
$ bun install
$ cd packages/web
$ bun run dev
```
This starts the frontend at `http://localhost:3000`.
Open an issue if you need help or have questions about contributing.
Models.dev is created by the maintainers of SST.