Summary
opencode currently supports Vercel as a provider but lacks support for Vercel AI Gateway's advanced routing features, specifically the `only` and `order` provider options. These features are essential for controlling provider fallback behavior and optimizing cost and performance when using Vercel AI Gateway.
This limitation blocks the following use cases:
- Geographic restrictions for data privacy: EU users needing to exclude US/Asia providers for GDPR compliance, or restricting to specific regions for data sovereignty
- Performance optimization: Limiting requests to ultra-fast providers like Cerebras or Groq for real-time applications, or using only US providers for models like Kimi K2 that have region-specific performance or privacy characteristics
- Cost control: Routing through preferred providers first while maintaining fallback options
- Compliance requirements: Restricting to certified providers for regulated industries (healthcare, finance)
Current Behavior
When using Vercel AI Gateway in opencode:
- ✅ Authentication with Vercel API key works
- ✅ Models can be accessed through the gateway
- ❌ Cannot specify provider routing order
- ❌ Cannot restrict providers with the `only` filter
- ❌ No access to gateway-specific provider metadata
Expected Behavior
Users should be able to configure Vercel AI Gateway routing per model through `opencode.json`, since different models have different provider availability. The configuration should support both global defaults and per-model overrides:
```jsonc
{
  "provider": {
    "vercel": {
      "options": {
        "baseURL": "https://gateway.vercel.sh/v1",
        "gateway": {
          // Global defaults apply to ALL models unless overridden
          "order": ["cerebras", "groq", "fireworks", "chutes"],
          "only": ["cerebras", "groq", "fireworks", "chutes", "novita", "parasail", "moonshot"]
        }
      },
      "models": {
        "moonshotai/kimi-k2": {
          "name": "Kimi K2 (US Only)",
          "gateway": {
            // Override: geography preference for data/privacy - add additional provider
            "order": ["groq", "fireworks", "deepinfra"],
            "only": ["groq", "fireworks", "deepinfra"] // Removes moonshotai, etc.
          }
        },
        "openai/gpt-oss-120b": {
          "name": "gpt-oss-120b (Vercel Fast)",
          "gateway": {
            // Override: different order but inherits global 'only' filter
            "order": ["groq", "cerebras"]
            // No 'only' specified - inherits global 'only' list
          }
        }
      }
    }
  }
}
```
Users can look up available providers for each model at https://vercel.com/ai-gateway/models and configure routing accordingly.
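The shape of a `gateway` entry is small enough to check with a few lines. A dependency-free sketch of such a runtime check, assuming only the two proposed fields (the function name and error behavior are illustrative, not part of opencode):

```typescript
// Hypothetical runtime check for the proposed `gateway` object:
// an optional `order` and an optional `only`, both arrays of provider IDs.
interface GatewayConfig {
  order?: string[];
  only?: string[];
}

function isGatewayConfig(value: unknown): value is GatewayConfig {
  if (typeof value !== "object" || value === null) return false;
  const obj = value as Record<string, unknown>;
  for (const key of Object.keys(obj)) {
    // Reject unknown keys so typos like "oder" fail loudly.
    if (key !== "order" && key !== "only") return false;
    const field = obj[key];
    if (!Array.isArray(field) || !field.every((p) => typeof p === "string")) {
      return false;
    }
  }
  return true;
}

console.log(isGatewayConfig({ order: ["groq", "cerebras"] })); // true
console.log(isGatewayConfig({ order: "groq" }));               // false
```

opencode's actual config schema layer would likely express this differently; the point is only that both fields are optional and unknown keys should be rejected rather than silently ignored.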
Technical Details
Required Changes
- Configuration Schema (`/packages/opencode/src/config/config.ts`)
  - Add a `gateway` object to the model configuration within the Vercel provider
  - Support `order` (string array) and `only` (string array) fields per model
  - The schema should validate provider names against known Vercel AI Gateway providers
- Provider Options Structure (`/packages/opencode/src/session/index.ts`)
  - Current structure: `providerOptions: { [providerID]: options }`
  - Required structure: `providerOptions: { gateway: { order, only }, [providerID]: options }`
  - Gateway options should be extracted from the specific model configuration
  - The gateway config for the active model should be passed at request time
- Provider Implementation (`/packages/opencode/src/provider/provider.ts`)
  - Update the Vercel provider loader to handle per-model gateway options
  - Extract the gateway configuration based on the model ID
  - Pass the model-specific gateway configuration to the AI SDK
- Model Resolution & Merging Strategy
  - When a Vercel gateway model is selected, extract its specific gateway configuration
  - Merge model-level gateway options with provider-level defaults (model config takes precedence)
  - Support both global defaults (in `provider.options.gateway`) and model-specific overrides (in `models.[id].gateway`)
  - If no model-specific config exists, fall back to the global defaults
  - If neither exists, use Vercel AI Gateway's default behavior
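The merging strategy above fits in a few lines; `resolveGatewayOptions` is a hypothetical name, not an existing opencode function:

```typescript
interface GatewayOptions {
  order?: string[];
  only?: string[];
}

// Model-level options win field by field; fields the model omits are
// inherited from the provider-level defaults. If neither level is
// configured, return undefined so the gateway's default routing applies.
function resolveGatewayOptions(
  providerDefaults?: GatewayOptions,
  modelOverride?: GatewayOptions,
): GatewayOptions | undefined {
  if (!providerDefaults && !modelOverride) return undefined;
  return { ...providerDefaults, ...modelOverride };
}

// The gpt-oss-120b case from the example config: custom `order`, inherited `only`.
const resolved = resolveGatewayOptions(
  { order: ["cerebras", "groq"], only: ["cerebras", "groq", "fireworks"] },
  { order: ["groq", "cerebras"] },
);
// resolved: { order: ["groq", "cerebras"], only: ["cerebras", "groq", "fireworks"] }
```

An object spread gives the field-by-field precedence for free; a deep merge is unnecessary since both fields are flat arrays that should replace, not concatenate.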
Vercel AI SDK Documentation Reference
According to Vercel AI Gateway documentation:
With the Gateway Provider Options, you can control the routing order and fallback behavior of the models.
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'anthropic/claude-3-5-sonnet',
  prompt: 'Hello',
  providerOptions: {
    gateway: {
      order: ['bedrock', 'anthropic'], // Try Bedrock first, then Anthropic
      only: ['bedrock', 'anthropic'],  // Only use these providers
    },
  },
});
```
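In opencode's session layer, the per-model routing would be threaded into that same `providerOptions.gateway` slot. The sketch below only constructs the payload (no request is made); the table and function names are illustrative:

```typescript
interface GatewayRouting {
  order?: string[];
  only?: string[];
}

// Per-model routing table mirroring the opencode.json example earlier.
const modelRouting: Record<string, GatewayRouting> = {
  "moonshotai/kimi-k2": { order: ["groq", "fireworks", "deepinfra"] },
  "openai/gpt-oss-120b": { order: ["groq", "cerebras"] },
};

// Build the providerOptions object for a request against `modelID`.
// Models without an entry get empty providerOptions, i.e. the
// gateway's default routing.
function providerOptionsFor(modelID: string): { gateway?: GatewayRouting } {
  const routing = modelRouting[modelID];
  return routing ? { gateway: routing } : {};
}

providerOptionsFor("openai/gpt-oss-120b");
// → { gateway: { order: ["groq", "cerebras"] } }
```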
Impact Analysis
Benefits
- Model-Specific Control: Different routing strategies for different models based on their provider availability
- Enhanced Control: Users can optimize for cost, performance, or compliance per model
- Better Fallback: Automatic provider fallback with model-aware priorities
- Cost Visibility: Gateway metadata shows actual provider used and costs
- Enterprise Ready: Supports compliance requirements for provider restrictions
- Regional Optimization: Route models through region-specific providers for optimal performance
Compatibility
- ✅ Backward compatible - existing configs continue working
- ✅ Optional feature - only affects users explicitly configuring gateway options
- ✅ No performance impact for non-gateway users
Proposed Implementation
Phase 1: Core Support (MVP)
- Add gateway options to model configuration schema
- Update provider options structure to support per-model gateway config
- Basic `order` and `only` support per model
- Model ID to gateway config mapping
Phase 2: Enhanced Features (Optional)
- Provider metadata in responses
- Cost tracking per provider
- Fallback statistics
Phase 3: Documentation
- Update provider documentation
- Add gateway configuration examples
- Migration guide for existing users
Workaround
Currently, users must use Vercel AI Gateway without routing control, which means:
- Cannot optimize provider selection per model
- Cannot enforce provider restrictions based on model requirements
- Missing visibility into actual provider used for each request
- All models use the same routing strategy regardless of their unique provider availability
Additional Context
- Vercel AI Gateway has been officially released by Vercel
- Costs can be optimized by prioritizing certain providers based on price or performance preferences
- Enterprise and privacy-conscious users specifically need the `only` filter for compliance
Environment
- opencode Version: 0.5.13
- OS: All Currently Supported
- Affects: Vercel AI Gateway users
Acceptance Criteria
- Can configure an `order` array per model in `opencode.json`
- Can configure an `only` array per model in `opencode.json`
- Provider routing respects model-specific configuration
- Different models can have different routing strategies
- (Optional) Provider metadata shows actual provider used for each model
- Documentation updated with per-model examples
- User can reference Vercel AI Gateway model page for available providers
- No breaking changes to existing configurations
Note: This feature would significantly improve opencode's support for enterprise AI deployments using Vercel AI Gateway, enabling model-specific cost optimization, regional compliance, and performance tuning through granular provider routing control. The per-model configuration approach aligns with Vercel AI Gateway's architecture where different models have different provider availability.