Feat: Perplexity support #42
Conversation
Please let me know if there is anything missing in this first implementation. Question: how can I run tests locally?
@joaoGabriel55 I started a pull request here. Do you want to either tweak what I have, or incorporate my changes into your branch?
@adenta let me know if there is something to improve.
@gquaresma-godaddy we need tests! Want to take a crack at it?
Sure! I will work on it. |
Looks very good. It looks like sonar has support for vision, so it would need to be added to the chat_content_spec vision test, and the chat_streaming_spec.
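For illustration, a vision example along those lines might look roughly like this (sketch only: the exact chat/attachment API, model id, and fixture path are assumptions, not copied from chat_content_spec):

```ruby
# Hypothetical sketch of a vision example for the Perplexity provider.
# Assumes RubyLLM.chat accepts model:/provider: and ask accepts a `with:` attachment.
it 'perplexity/sonar can describe images' do
  chat = RubyLLM.chat(model: 'sonar', provider: :perplexity)
  response = chat.ask('What is in this image?', with: { image: 'spec/fixtures/ruby.png' })

  expect(response.content).not_to be_empty
end
```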
Two new features to consider in your provider implementation:
Added configuration requirements handling in 75f99a1. Each provider now specifies what configuration is required via a simple

Example of the new error messages:

```
RubyLLM::ConfigurationError: anthropic provider is not configured. Add this to your initialization:

RubyLLM.configure do |config|
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
end
```
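The truncated sentence above refers to a per-provider declaration; here is a minimal sketch of the idea (the `configuration_requirements` name and shape are assumptions, since the actual identifier is cut off above):

```ruby
# Hypothetical sketch only -- the real method lives in commit 75f99a1.
module RubyLLM
  module Providers
    module Perplexity
      module_function

      # Config keys that must be set via RubyLLM.configure before this provider is usable.
      def configuration_requirements
        %i[perplexity_api_key]
      end
    end
  end
end
```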
@joaoGabriel55 is this still on your radar? I'd love to merge Perplexity support soon. Whenever you're ready, could you resolve the conflicts and request a review?
Thank you for your work!
It looks generally good; I just need some changes and tests. Please pick the cheapest model to test with.
Hi @joaoGabriel55, are you able to provide some VCRs for this?
…aoGabriel55/ruby_llm into feat/add-perplexity-provider
```
5) RubyLLM::Chat function calling perplexity/sonar can use tools with multi-turn streaming conversations
   Failure/Error: raise UnsupportedFunctionsError, "Model #{@model.id} doesn't support function calling"

   RubyLLM::UnsupportedFunctionsError:
     Model sonar doesn't support function calling
   # ./lib/ruby_llm/chat.rb:50:in 'RubyLLM::Chat#with_tool'
   # ./spec/ruby_llm/chat_tools_spec.rb:110:in 'block (4 levels) in <top (required)>'
   # ./spec/spec_helper.rb:86:in 'block (3 levels) in <top (required)>'
   # /Users/quaresma/.rvm/gems/ruby-3.4.1/gems/vcr-6.3.1/lib/vcr/util/variable_args_block_caller.rb:9:in 'VCR::VariableArgsBlockCaller#call_block'
   # /Users/quaresma/.rvm/gems/ruby-3.4.1/gems/vcr-6.3.1/lib/vcr.rb:194:in 'VCR#use_cassette'
   # ./spec/spec_helper.rb:85:in 'block (2 levels) in <top (required)>'
   # /Users/quaresma/.rvm/gems/ruby-3.4.1/gems/webmock-3.25.1/lib/webmock/rspec.rb:39:in 'block (2 levels) in <top (required)>'
```

Any ideas about that 😅? This is the sonar model config from the

```json
{
"id": "sonar",
"created_at": null,
"display_name": "Sonar",
"provider": "perplexity",
"context_window": 128000,
"max_tokens": 4096,
"type": "chat",
"family": "sonar",
"supports_vision": true,
"supports_functions": false,
"supports_json_mode": true,
"input_price_per_million": 1.0,
"output_price_per_million": 1.0,
"metadata": {
"description": "Lightweight offering with search grounding, quicker and cheaper than Sonar Pro."
}
},
```
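The `"supports_functions": false` flag above is what trips that error; roughly speaking (paraphrased from the backtrace, not the literal source of lib/ruby_llm/chat.rb):

```ruby
# Paraphrased guard around lib/ruby_llm/chat.rb:50 (assumed shape, not copied from the repo).
def with_tool(tool)
  unless @model.supports_functions
    raise UnsupportedFunctionsError, "Model #{@model.id} doesn't support function calling"
  end
  # ... register the tool and return self ...
end
```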
spec/spec_helper.rb (outdated)

```diff
@@ -118,7 +119,8 @@
   { provider: :deepseek, model: 'deepseek-chat' },
   { provider: :openai, model: 'gpt-4.1-nano' },
   { provider: :openrouter, model: 'anthropic/claude-3.5-haiku' },
-  { provider: :ollama, model: 'mistral-small3.1' }
+  { provider: :ollama, model: 'mistral-small3.1' },
+  { provider: :perplexity, model: 'gpt-4.1-nano', }
```
@joaoGabriel55 I think if you remove perplexity from this array it shouldn't be considered a CHAT_MODEL and won't cause the spec failures. The errors are coming because the model doesn't support function calling, and right now the tests assume everything does.
@adenta no. It is a chat model, so it stays in there. Plenty of models and providers have quirks that we already skip; skipping means that when they fix their quirks we simply remove the skip statement. On top of that, if you take this specific model out of the CHAT_MODELS array, you need to duplicate all the tests just for Perplexity.
Hi @joaoGabriel55, simply skip the test in that case. There are plenty of examples of skipped tests because of quirks with specific models.
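A minimal sketch of that kind of skip inside the shared tool-calling examples (the spec structure here is assumed, based on the CHAT_MODELS tuples shown above):

```ruby
# Hypothetical sketch: skip tool-calling examples for models without function support,
# mirroring the existing model-specific skips in the suite.
CHAT_MODELS.each do |model_info|
  provider = model_info[:provider]
  model = model_info[:model]

  it "#{provider}/#{model} can use tools" do
    skip 'Perplexity sonar does not support function calling' if provider == :perplexity

    # ... existing tool-calling assertions ...
  end
end
```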
Hey @joaoGabriel55, I feel like we're really close to the finish line here! There are only some failing tests (please make sure you have done ...). Really looking forward to having this ship with 1.4!
Issue
#20
Description
This PR adds Perplexity API support.