The new chatbot is very expensive, please allow it to use GPT-3.5 Turbo

The new chatbot is very expensive since it still uses GPT-4.
It costs me between €10 and €20 per day when I actively use it to work with my Fibery content.

Please make it run on GPT-3.5 Turbo, which is much cheaper.

Wow, how much do you use it? We will definitely add an option to select the model in future versions; just curious about your use case.

My usage: mostly large pieces of code submitted as input for analysis.


According to the OpenAI pricing page for GPT-4:
$0.03 per 1K context tokens
$0.06 per 1K generated tokens

On 2 Feb I used:
124 API requests to GPT-4-0613
Context tokens: 456K × $0.03/1K ≈ $13.67
Generated tokens: 39K × $0.06/1K ≈ $2.34
Total: $16.03
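The arithmetic above can be sketched in Python (a rough check, using the rounded token totals from the usage page, so the result differs from the exact $16.03 by a cent or two):

```python
# GPT-4 pricing quoted above, in dollars per 1K tokens
PRICE_CONTEXT_PER_1K = 0.03
PRICE_GENERATED_PER_1K = 0.06

# Usage on 2 Feb, in thousands of tokens (rounded)
context_k = 456
generated_k = 39

context_cost = context_k * PRICE_CONTEXT_PER_1K      # ≈ $13.68
generated_cost = generated_k * PRICE_GENERATED_PER_1K  # ≈ $2.34
total = context_cost + generated_cost

print(f"${total:.2f}")  # → $16.02 (rounding of token counts explains the gap to $16.03)
```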

On February 2:
The average number of input (context) tokens per request was approx. 3,677.
The average number of output (generated) tokens per request was approx. 315.
The combined average was approximately 3,992 tokens.
The combined average price per request was approx. $0.13.
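The per-request averages follow directly from the daily totals (a sketch, assuming 456K context and 39K generated tokens across 124 requests, as above):

```python
requests = 124
context_tokens = 456_000    # rounded daily total
generated_tokens = 39_000   # rounded daily total
total_cost = 16.03          # dollars, from the usage page

avg_context = context_tokens / requests        # ≈ 3,677 tokens/request
avg_generated = generated_tokens / requests    # ≈ 315 tokens/request
avg_combined = avg_context + avg_generated     # ≈ 3,992 tokens/request
avg_cost = total_cost / requests               # ≈ $0.13/request

print(round(avg_context), round(avg_generated), round(avg_combined), round(avg_cost, 2))
```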

Do you find that the Fibery Chatbot makes your code analysis more efficient than using the native ChatGPT interface?
