Yes, Fibery’s AI automation uses chatgpt-4o-latest, which is configured with a total context window of ~16,384 tokens.
But this still runs into issues in most of my cases, e.g. when analyzing large transcripts, because that window has to hold all of the following:
- My input text (e.g., the transcript),
- The markdown template content,
- The system prompt (added behind the scenes),
- And the model’s response (output tokens)!
This means:
If I give it a very long transcript (e.g. 13,000 tokens), the model has very little room left to generate output (~3,000 tokens or less, once the template and system prompt are counted). If the output is complex or lengthy (e.g. summaries, structured extraction), it consistently runs into issues, as the sketch below illustrates.
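To make the arithmetic concrete, here is a minimal sketch in Python using OpenAI’s tiktoken library. The file names and the system-prompt placeholder are my assumptions, since Fibery’s actual prompt is hidden:

```python
# A rough budget check, not Fibery's actual logic: count the tokens my
# inputs consume and see what is left of the ~16,384-token window for
# the model's response. File names and the system prompt are placeholders.
import tiktoken

CONTEXT_WINDOW = 16_384  # total window, shared by input AND output

enc = tiktoken.encoding_for_model("gpt-4o")

transcript    = open("transcript.txt").read()   # my input text
template      = open("template.md").read()      # the markdown template content
system_prompt = "..."                           # added behind the scenes; real size unknown

used = sum(len(enc.encode(part)) for part in (transcript, template, system_prompt))
print(f"Inputs use {used} tokens; {CONTEXT_WINDOW - used} remain for the response.")
```

With a 13,000-token transcript plus the template and the hidden prompt, the remainder drops to roughly 3,000 tokens or less, which is not enough for a long structured summary.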
I have spent weeks trying to work around this issue. Now I’m fed up and suggest Fibery solves it by offering a choice of model, because it’s such a common need (for example, also when analyzing AI chats).
Given the problems I still encounter with GPT-4o, the issue is not just the size of the token window but the model’s overall capacity to handle complex input and output.
When using Grok or Claude Sonnet the issues don’t appear, but that requires manual handling and copy-pasting on my part.
Anyhow, I hope Fibery will allow users to specify the model everywhere AI is used.
Also, I hope Fibery will add support for GPT‑4.1 (plus its mini and nano variants), which supports up to 1 million tokens of context.