Alternatively, a more general solution would be to add the option to register a function that overrides the virtual text displayed. In this case, the pricing computation and total price would be up to the user's configuration; it would be less specific but could support different use cases. Simple example:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      virtual_text_fn = function(chat, req_info) -- will need a better name :)
        -- Per-token prices ($) maintained by the user; values here are illustrative.
        local PROMPT_TOKEN_PRICE = 2.50 / 1e6
        local RES_TOKEN_PRICE = 10.00 / 1e6
        local price = (req_info.tokens.response * RES_TOKEN_PRICE)
          + (req_info.tokens.prompt * PROMPT_TOKEN_PRICE)
        local price_str = string.format("%.3f", price)
        local tokens = req_info.tokens.response + req_info.tokens.prompt
        return "( " .. tostring(tokens) .. " tokens | price $" .. price_str .. " )"
      end,
    },
  },
})
```

More work for the user, but less coupling in the CC codebase.
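And since the function is user-defined, it could also cover the running-total use case from the original post. A sketch, reusing the same hypothetical `virtual_text_fn` option and illustrative per-token prices:

```lua
local total = 0 -- accumulates cost across the whole conversation

require("codecompanion").setup({
  strategies = {
    chat = {
      virtual_text_fn = function(chat, req_info)
        -- Illustrative per-token prices ($); the user keeps these up to date.
        local price = req_info.tokens.prompt * (2.50 / 1e6)
          + req_info.tokens.response * (10.00 / 1e6)
        total = total + price
        return string.format("( $%.3f | $%.3f total )", price, total)
      end,
    },
  },
})
```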
-
Before using CodeCompanion I used gpt-cli.
CodeCompanion is better for me in every single way; however, the one QoL touch I miss from gpt-cli is request cost information.
See this screenshot:
[screenshot: gpt-cli displaying per-query cost information]
where the cost of each query is displayed, showing three pieces of information:
Tokens | Cost of last request | Total cost since start of conversation
These cost values are quite useful, especially as a chat grows: even small queries can end up costing a lot, since all previous messages are included in the context.
I'm not familiar with the non-OpenAI models, but I know the OpenAI API does not provide query cost. gpt-cli achieves this by keeping a table of per-token pricing in its source and calculating the query cost from the tokens spent (see example in source here).
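In Lua terms, that approach is just a lookup table and a multiply. A minimal sketch, with illustrative prices in $ per 1M tokens (not pulled from any real pricing table):

```lua
-- Per-token pricing table the user maintains ($ per 1M tokens; illustrative values).
local PRICES = {
  ["gpt-4o"] = { prompt = 2.50, response = 10.00 },
}

-- Cost of one query: tokens spent on each side times the per-token rate.
local function query_cost(model, prompt_tokens, response_tokens)
  local p = PRICES[model]
  return (prompt_tokens * p.prompt + response_tokens * p.response) / 1e6
end

print(query_cost("gpt-4o", 8000, 500)) --> 0.025
```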
The goal of this post is to start a discussion; I will begin with what I think is a sensible suggestion:
My Suggestion
It should not be the responsibility of CodeCompanion or its author(s) to keep up with and maintain tables of LLM costs across multiple providers. This would be a tedious chore and would impede progress on other features.
It would be great if users could optionally provide pricing information to CodeCompanion, which would then add monetary cost displays.
E.g. the config could look like this:
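Something along these lines (the `pricing` option name is hypothetical, and the per-token prices are illustrative):

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      -- Hypothetical option: user-supplied per-token prices in USD.
      pricing = {
        ["gpt-4o"] = {
          prompt = 2.50 / 1e6,    -- $ per prompt token
          response = 10.00 / 1e6, -- $ per response token
        },
      },
    },
  },
})
```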
With this, the default behavior for any other model remains the same as it is today, but for "gpt-4o" the chat buffer could look like this:
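For example, the virtual text might read (illustrative numbers):

`( 8.5k tokens | $0.025 | $0.145 total )`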