I’ve been loving the AI features in Fibery, but it would be great if we could choose from different AI models.
I feel like this could really level up our AI game. Different models have different strengths, so we could pick the best one for whatever we’re working on. Plus, it’d be awesome to have options and not be locked into just one AI.
I don’t have a lot of specifics to share for use cases, but I have definitely found some important differences between AI companies/models. For example, Anthropic’s Claude is much better than ChatGPT at some things, like summarizing articles and other large texts (at least with basic prompting, without spending a lot of time trying to make it do what I want). So I can’t help but wonder if having Claude available in Fibery might give better results. I don’t know which models are currently being used; it’s a bit vague now that the owned-API setting is gone.
While I appreciate the integration of AI features into the platform, I find myself rarely using them.
After testing similar prompts both in Fibery and external tools like ChatGPT and Claude, I consistently get more useful and comprehensive results from the external options.
This experience has gradually pushed me away from Fibery’s built-in AI tools. Instead of staying within your ecosystem, I’m increasingly switching to external AI assistants for tasks I’d prefer to complete inside Fibery.
I’m not sure which specific AI model powers Fibery’s features, but there seems to be a noticeable gap in quality and capability compared to standalone AI tools. If there’s a way to bridge this performance gap or integrate more powerful AI options, it would significantly improve my workflow and keep me within the Fibery environment.
Just wanted to share this perspective as you continue developing your AI features.
Could you provide 3-5 examples of prompts you find work well in these services? Overall, we are working on a new AI Agent now that can answer quite complex questions. If you want to try the very early prototype, ping us in Intercom.
About the examples… I’ll gather some and get back to you later.
But what I usually do is:
I have projects on Claude, inside Claude they already have knowledge, important information and general instructions. For example “B2B SALES ADVISOR”
Then I mix in a specific prompt for the need at hand, plus data about the current challenge I need to solve.
Because of this combination, and because I’m using the latest Claude model, it becomes very powerful. That’s something you can’t reach right now with the AI inside Fibery.
Imagine if we could have that inside Fibery: for example, an agent that already writes following the company’s brand and communication guidelines, alongside other specialized agents. That would be really awesome.
These guys seem to be on the right path implementing AI inside their service.
I’d love to see deeper AI integration within Fibery, especially support for external AI models that retain context across runs.
Right now, using Fibery’s AI for something like drafting social media posts means repeatedly providing the same instructions, which wastes tokens and still doesn’t deliver the same context-aware results as a dedicated AI model.
A tighter integration—whether with OpenAI’s Responses API or another approach—could eliminate this redundancy and ensure more accurate and on-brand responses.
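As an illustration of the kind of integration meant here, this is a minimal Python sketch of how OpenAI’s Responses API can carry context across runs via `previous_response_id`, so brand/style instructions are sent once rather than repeated in every prompt. The endpoint and field names follow OpenAI’s public Responses API docs; the prompts, the model name, and the `resp_123` id are placeholders, and how Fibery would wire this up is purely hypothetical:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/responses"  # OpenAI Responses API endpoint


def build_payload(prompt, previous_response_id=None, model="gpt-4o"):
    """Build a Responses API request body. Passing previous_response_id
    chains the request to an earlier response, so the server keeps the
    conversation state and instructions don't need to be resent."""
    payload = {"model": model, "input": prompt}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    return payload


def send(payload, api_key):
    """POST the payload to the Responses API (requires a real API key)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# First run: send the full brand/style instructions once.
first = build_payload(
    "Write a LinkedIn post about our Q3 release. Tone: friendly, concise."
)
# Follow-up runs only reference the prior response id instead of
# repeating the instructions (resp_123 is a placeholder for the id
# returned in the first response).
followup = build_payload(
    "Now adapt that post for Twitter.", previous_response_id="resp_123"
)
```

With this pattern, only the first run pays the token cost of the full instructions; later drafts reuse the stored context on the server side.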
Currently Fibery automations are stuck with gpt-4-latest, which is not useful at all for tasks that require content analysis and larger input/output sizes. Please allow us to choose better models.
I’d love to know people’s experiences/recommendations on what models are best.
I mostly use AI in Fibery to:
Create written summaries/overviews of collections of tasks (entities) that were completed.
Create scripts for automations, since I’m not an actual developer. They rarely work right away, but are usually a decent base that gets me 75% towards what I’m trying to do and I can tweak from there. Maybe if I was using a better model I would get better results though.
Models change rapidly, but after a bit of research:
OpenAI’s o1 offers the highest quality but at a premium cost, making Gemini 2.5 Pro and Claude 3.5 Sonnet more cost-effective alternatives for top-tier performance.
For price/quality balance, Gemini 2.0 Flash and GPT-4o mini excel among proprietary models, while Mistral Medium 3 and Llama 3.1 are ideal for open-source users with the hardware to run them.
Grok 3 is competitive but less cost-effective due to higher pricing. The choice depends on budget, infrastructure, and task complexity.