It would be very useful to have search within the Fibery AI chat history.
As conversations with the AI assistant grow over time, it becomes harder to quickly find previous answers or topics that were already discussed.
Often the information we need was already generated earlier, but without search it is difficult to locate it, which leads to asking the same questions again.
Suggestion:
Add the ability to search inside past AI conversations, for example by:
- keywords
- topics
- previous prompts or answers
This would make it much easier to reuse previous insights and avoid repeating the same queries.
For teams actively using Fibery AI, this could significantly improve the usability of AI conversations and turn them into a more useful knowledge resource over time.
P.S. It would also be useful if we could rename chats.
Though I use the Fibery AI chat/build less, in favour of external chats connected via MCP, I still manually go through the history to find the correct session to resume, which is easier in my case since I have fewer chats in the history.
Also, I agree that being able to search chat history/sessions would be very helpful for those users of ours that would opt for the internal Fibery AI more often.
I still remember how annoying it was to NOT have a chat history search function in the older ChatGPT desktop app.
Support this too… it seems to be a limitation of just about all LLMs / AI tools; I have never found one that supports this. Fibery does keep all the chats with no deletion, which is great. I trialed Notion and it deletes chats automatically after 30 days, so you lose a lot…
In general I think good search with good indexing at the forefront is going to become a bigger need, both here in Fibery and in the connected AI tools. With AI there is now a huge proliferation of content, often stuff you don’t want to lose track of but have no obvious or time-efficient way to dump somewhere, since AI chats can get fragmented in the subject matter discussed…
In our case we also have Claude connected via MCP, and our architects tend to use it more actively for deeper analysis and external workflows.
However, I think search inside the Fibery AI chat history might actually be even more important for regular workspace members.
Many users in our team will interact with Fibery AI simply while working with data, numbers, and entities inside the workspace. They ask quick questions about the database, fields, entities, or reports during their daily work.
For these users, AI chats will become a kind of working conversation history, and being able to quickly search previous answers would be extremely helpful.
Without search, people (I think) will end up asking the same questions again because it’s easier than manually scrolling through past chats.
Yes, we also noticed that the limits can be reached quite quickly. From our observations, the limits seem to be consumed much faster in Build mode, so our current plan is roughly the following:
- Encourage non-power users to primarily use the Ask mode for quick questions about data, entities, fields, etc., instead of Build.
- For more complex creation or structural work, we plan to use Claude via MCP, which will likely be used mostly by architects or more technical users.
I also suggested a feature idea to the Fibery team to allow selecting the AI model in chat.
For many simple questions, a lighter and faster model would be more than enough.
Together with chat history search, this could also reduce repeated questions and unnecessary load on the AI.
And as another possible option: if AI becomes a critical daily workflow and the limits remain restrictive, it might make sense to consider a workspace-level option to purchase additional AI credits when needed.
We are still experimenting with the best setup, but this seems like a reasonable direction so far.
WHEN YOU HIT THAT LIMIT
You can no longer use the limited AI features. You either upgrade or simply wait until the reset. Paying to go over the limit isn’t possible as of now.
CHECK STATUS
You can check how much you’ve used under Help & Support → Workspace status. You can see me almost reaching that limit in one of my workspaces (I mostly used Build, btw).
Perhaps @dmytro can help shed some light on whether this has changed, or if the chat mode deducts fractions of questions versus the build mode?
Thanks a lot, this makes the limits much clearer now!
It also makes me wonder whether Ask mode and Build mode might internally have different “weights” in terms of system load (and maybe API pricing), which would actually make sense.
If thatâs the case, it might be interesting to consider adding a workspace setting that allows restricting available AI modes by user group.
For example:
- Non-power users → Ask mode only
- Architects / power users → Ask + Build
This could help teams manage AI usage more efficiently and prevent heavy Build operations from being used unintentionally when a simple Ask query would be enough.
And as mentioned in your message above, since purchasing additional usage beyond the limit isn’t available yet, this kind of control could be a valuable addition in the future [since AI seems to be a major focus area for Fibery right now].
Yes, agreed, but Fibery has also stated on the roadmap that they will remove the separate Ask/Build modes, so they will need to come up with something else.
And critically, AI is now so integral to the system that I think a proper credit system (with a choice of models) is necessary. It cannot be allowed to run out during the month with no way to “top up”, otherwise the work stops. I also don’t want people thinking too much about “should I make this request?”; it’s hard enough to get them to even use the features in the first place.
The “Workspace status” page I didn’t previously know about, and it is exactly what I was looking for, thanks! The Automations and Syncs should be broken down by which steps/dbs are actually consuming them, so we can optimise and avoid hitting limits.
Separating AI usage type, and therefore limits, by whether Architect mode is enabled or not
This would allow:
- architects to have heavy, build-mode-style AI sessions to modify the schema, build views, etc. by enabling Architect mode. This could have lower limits.
- both regular users and architects (with Architect mode off) to have lighter, non-build-mode-style AI sessions. This could have higher limits.
I do not blame you. It makes sense for the end-user, and you get much better control over the AI feature. The problem I see is how to price it so that it does not make clients think “For that price, I will just have my users use Claude, etc. via Slack, where they have access to our other tools (MCP) and skills.”
For example, thinking out loud here:
In Claude, the user could say “Book an urgent appointment for the team to discuss topic x”, and thanks to existing SOP skills the system would ask them which of the next few days works.
But in Fibery Chat the user would have to say “Create a meeting request in Meetings, with timing = urgent, invitees = the stakeholders field of the entity from Projects where Name looks like ‘topic x’.”
How do we solve this lack of SOPs/Skills in the Fibery AI?
Do we even need to solve it? Or it is best to leave that kind of usage to external AI, and keep bolstering the Fibery Remote MCP server?
I see. I cannot say that I disagree. And you have done an amazing job of improving the Fibery MCP server, and I am sure there is even more to come. Thanks team.