The new Chatbot is very expensive since it still uses GPT-4.
It costs me between 10 and 20 euro per day when I actively use it to work with my Fibery content.
Please make it run on GPT-3.5 Turbo, which is much cheaper.
Wow, how much do you use it? We will definitely add an option to select the model in future versions; just curious about your use case.
My usage: mostly large pieces of code input to analyze.
–
According to the OpenAI pricing page for GPT-4:
$0.03 per 1K context tokens
$0.06 per 1K generated tokens
On 2 Feb I used:
124 API requests (GPT-4-0613)
Context tokens: ~456K × $0.03/1K = $13.67
Generated tokens: ~39K × $0.06/1K = $2.34
Total: $16.03
On February 2:
The average number of input (context) tokens per request was approx. 3,677.
The average number of output (generated) tokens per request was approx. 315.
The combined average was approximately 3,992 tokens per request.
The combined average price per request was approx. $0.13.
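The arithmetic above can be sketched as follows (token counts are rounded to the nearest thousand, so the computed total differs by a cent from the billed $16.03, which reflects the exact token counts):

```python
# Rough cost estimate for GPT-4 (gpt-4-0613) API usage, using the
# pricing quoted above: $0.03 per 1K context tokens, $0.06 per 1K
# generated tokens. Token counts are the approximate 2 Feb figures.

INPUT_PRICE_PER_1K = 0.03   # USD per 1K context (input) tokens
OUTPUT_PRICE_PER_1K = 0.06  # USD per 1K generated (output) tokens

def gpt4_cost(context_tokens: int, generated_tokens: int) -> float:
    """Return the USD cost for a batch of GPT-4 usage."""
    return (context_tokens / 1000 * INPUT_PRICE_PER_1K
            + generated_tokens / 1000 * OUTPUT_PRICE_PER_1K)

requests = 124
context_tokens = 456_000    # ~456K context tokens
generated_tokens = 39_000   # ~39K generated tokens

total = gpt4_cost(context_tokens, generated_tokens)
avg_tokens = (context_tokens + generated_tokens) / requests
avg_cost = total / requests

print(f"Total: ${total:.2f}")            # ≈ $16.02
print(f"Avg tokens/request: {avg_tokens:.0f}")   # ≈ 3992
print(f"Avg cost/request: ${avg_cost:.2f}")      # ≈ $0.13
```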
Do you find that the Fibery Chatbot makes your code analysis more efficient than the native ChatGPT interface?
Good question!
The Fibery Chatbot (in my workspace) is very useful for me, mainly because of what I have in my Fibery workspace:
See: https://the.fibery.io/@public/ai-assistant/guide
It appears not very smart: it mostly dodges answers because it can't find them in the User Guide, and when it does have an answer, it is very short. Because the user guide is relatively simple, the User Guide Assistant is not helpful for me. It has potential, though, if the Fibery team decides to hook it up to this Discourse platform or makes more code examples public.
With both my own Fibery Chatbot and the User Guide Assistant I have issues with freezing screens when inserting long inputs (see also the 'Page Unresponsive' issue that often happens with the AI Assistant chatbot).
Although ChatGPT Plus also uses GPT-4, in my experience it does much better than GPT-4 via the OpenAI API: answers are deeper and context is kept far better.
ChatGPT Plus is therefore still the best of all in:
I use the OpenAI API in Fibery via scripts, which is the most promising and useful option for now, because I can automate chats and answers in entities and perform operations on them. This surpasses any of the above in usefulness when conversations relate directly to my Fibery content.
Its greatest drawback is its underdeveloped internet search. New regulations also prevent it from giving details of website content.
Gemini Pro is the best of all in real-time internet search, and its ability to create Google spreadsheets in one button click is amazing.
Its intelligence is still mediocre: it loses context and can't prioritize. But in tandem with ChatGPT Plus it works well.
Currently I'm learning to work with HuggingFace.
I'm creating a system that guides AI in conversations. It's in development but already very helpful in maintaining context and priorities, reusing solutions, and creating insights beyond its standard token window. You can also do this simply by telling the model to keep a simulated database of operations that need to be repeated and tracked, including version naming, etc.
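The "simulated database" trick above can be sketched roughly like this (the prompt wording, state shape, and entry names are all hypothetical; the idea is just that the tracked state is re-sent each turn so the model can update it alongside its answer):

```python
# A minimal sketch of instructing an LLM to maintain a "simulated
# database" of tracked operations across a conversation. The state dict
# is embedded in the system prompt on every turn; the model is asked to
# emit the updated state so it can be stored and passed back next turn.

import json

def build_system_prompt(state: dict) -> str:
    """Embed the tracked state in the system prompt for the next turn."""
    return (
        "You maintain a simulated database of repeated operations.\n"
        "Current state (JSON): " + json.dumps(state) + "\n"
        "After answering, output the updated state as JSON so it can be "
        "stored and sent back on the next turn. Increment the version "
        "name (e.g. v1 -> v2) whenever you change an entry."
    )

# Hypothetical example state with one tracked operation.
state = {"version": "v1",
         "operations": [{"name": "refactor-script", "status": "open"}]}
prompt = build_system_prompt(state)
```

The design choice here is to keep the state outside the model (in your own storage) and treat the model as a pure transformer of it, which is what makes the approach survive beyond the token window.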
P.S. Don't use the free versions of any LLM; most of them are not useful for coding or deeper conversations.
Agree that this is a pretty useful case, especially since it's likely that AI-assisted script creation in Fibery will not be implemented.
Overall for us, I’ve found that AI in Fibery is most useful for:
I think we’ll use it a lot more once it can really understand all Fibery content.
It would be convenient to have an option with OpenRouter.ai that allows us to select any compatible model.
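For reference, OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a model-selection option could be a thin wrapper around it. A minimal stdlib sketch (the model names and API-key placeholder are illustrative; the request is built but not sent):

```python
# Sketch of a request against OpenRouter's OpenAI-compatible API:
# switching models is just a matter of changing the "model" field.

import json
import urllib.request

def build_chat_request(model: str, user_message: str,
                       api_key: str) -> urllib.request.Request:
    """Build (but don't send) a chat-completions request to OpenRouter."""
    payload = {
        "model": model,  # e.g. "openai/gpt-3.5-turbo" or "openai/gpt-4"
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("openai/gpt-3.5-turbo",
                         "Summarize my Fibery entity", "YOUR_KEY")
# urllib.request.urlopen(req) would send it; not executed here.
```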