Replies: 2 comments
-
Hey @flefevre - litellm maintainer here -
Does your proxy require an API key? It might be getting rejected because of that.
P.S.: Would love to learn how you're using LiteLLM internally. Attaching my Calendly if that helps - https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
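For reference, a quick way to check is to call the proxy directly with an Authorization header (a sketch using the host and the placeholder key mentioned later in this thread; adjust to your setup):

```bash
# If the LiteLLM master key is enabled, the same request WITHOUT this header
# should return the 401 that gpt-pilot reports.
curl -X POST https://litellm.dev.localhost/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{"model": "gemma-2b", "messages": [{"role": "user", "content": "ping"}]}'
```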
-
I think you only need the base endpoint URL, without the trailing `/chat/completions`.
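Applied to the `.env` from the question below, the presumed suggestion would be (a sketch):

```
OPENAI_ENDPOINT=https://litellm.dev.localhost
```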
-
Dear Pythagora team,
thanks for the amazing work.
I want to bring your tools to our French public laboratory by installing GPT Pilot inside Visual Studio Code and pointing it at our internal LLM, which is served by LiteLLM.
I get the following error:

```
There was a problem with request to openai API: API responded with status code: 401. Request token size: 23 tokens. Response text: {"error":{"message":"Missing Authentication header or invalid API key","code":401}}
```
Setup
I have tried the following setup without success.
My `.env` file:
```
ENDPOINT=OPENROUTER
OPENAI_ENDPOINT=https://litellm.dev.localhost/chat/completions
OPENAI_API_KEY=dummy
OPENROUTER_API_KEY=dummy
MODEL_NAME=openai/gemma-2b
MAX_TOKENS=8192
```
I have tried both `MODEL_NAME=openai/gemma-2b` and `MODEL_NAME=gemma-2b`.
When I start VS Code and create a new Pythagora project, I get the following exception:

```
There was a problem with request to openai API: API responded with status code: 401. Request token size: 23 tokens. Response text: {"error":{"message":"Missing Authentication header or invalid API key","code":401}}
```
I do not know how to enable more detailed logging to debug this.
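For reference, LiteLLM itself can log every incoming request; a minimal sketch, assuming the standard LiteLLM proxy CLI (`config.yaml` is a placeholder for your config file, and the flag name should be checked against your installed version):

```bash
# Start the LiteLLM proxy with verbose request/response logging.
litellm --config config.yaml --detailed_debug
```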
I have deactivated the LiteLLM master key, which is only needed if you want to require all calls to include that key (Authorization: Bearer sk-1234). I should reactivate it eventually.
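For context, a sketch of where that key lives in the LiteLLM proxy configuration (assuming the usual `config.yaml` layout; `sk-1234` is a placeholder):

```yaml
general_settings:
  master_key: sk-1234  # when set, every call must send "Authorization: Bearer sk-1234"
```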
I am able to reach https://litellm.dev.localhost/chat/completions and make a curl request, so the problem does not seem to come from LiteLLM, which appears to be well configured.
```bash
curl -X POST https://litellm.dev.localhost/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{ "model": "gemma-2b", "stream": false, "messages": [{ "content": "Hello, how are you?", "role": "user" }]}'
```

Response:

```json
{"id":"cmpl-c8f808e876684d62bf0dcb83e694eebf","choices":[{"finish_reason":"stop","index":0,"message":{"content":"I am an AI language model, so I do not have personal feelings or the capacity to experience emotions. I am functioning well and am here to assist you with any questions or tasks you may have. How can I help you today?","role":"assistant"}}],"created":1714913340,"model":"google/gemma-1.1-2b-it","object":"chat.completion","system_fingerprint":null,"usage":{"completion_tokens":48,"prompt_tokens":16,"total_tokens":64}}
```