I'm posting this as an issue, not because it's a bug, but because there is a likely unintended 'gap' in how Fibery provides its AI services and models, especially in automations.
A simple, real use case that I run regularly:
An audio transcript of a one-hour meeting.
I have an automation that creates a semantic summary from it, meaning all topics are extracted and analyzed.
When I use the built-in GPT-4o-latest in an automation action, the output is truncated most of the time; it literally chops off part of the output, likely because it runs out of output tokens.
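To illustrate what I suspect is happening (purely an assumption on my side, I have no visibility into how Fibery calls the model): if the automation uses an OpenAI-style chat completions API with a low output-token cap, the model simply stops mid-sentence and the API flags it. A minimal sketch, with an illustrative token limit and file name:

```python
# Hypothetical sketch of how an output-token cap produces truncated summaries.
# Assumes an OpenAI-style chat completions API; the cap and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = open("meeting_transcript.txt").read()  # ~1 hour meeting, transcribed

response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[
        {"role": "system", "content": "Extract and analyze all topics from the transcript."},
        {"role": "user", "content": transcript},
    ],
    max_tokens=1024,  # if this cap is too low for a long semantic summary...
)

choice = response.choices[0]
print(choice.message.content)
# ...the text stops abruptly and the API reports why:
if choice.finish_reason == "length":
    print("Output was truncated by the max_tokens limit.")
```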
When I use the new automation action ‘Overwrite Output via Smart Agent’, there is no problem, and the output is much better.
(Although the Smart Agent action is slow, it is unclear whether the Smart Agent actually does more than it should for a particular automation action, such as checking Fibery databases, since there is no Smart Agent log available; we need one.)
Sadly, Fibery Smart Agent is expensive and limited in runs per month.
Thus, I can use neither GPT-4o-latest nor the Fibery Smart Agent for this.
Can you confirm this will get some attention from the Fibery team?
I just want to know whether we will see more choices of LLM models to use in automations, or in any AI use in Fibery overall.