Search within the Fibery AI chat history

Hi Fibery team :slightly_smiling_face:

It would be very useful to have search within the Fibery AI chat history.

As conversations with the AI assistant grow over time, it becomes harder to quickly find previous answers or topics that were already discussed.

Often the information we need was already generated earlier, but without search it is difficult to locate it, which leads to asking the same questions again.

Suggestion:

Add the ability to search inside past AI conversations, for example by:

  • keywords

  • topics

  • previous prompts or answers

This would make it much easier to reuse previous insights and avoid repeating the same queries.

For teams actively using Fibery AI, this could significantly improve the usability of AI conversations and turn them into a more useful knowledge resource over time.
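To make the idea concrete: even the simplest version, plain keyword matching over saved sessions, would already help. A minimal sketch, assuming a hypothetical ChatSession shape (this is not Fibery's actual data model):

```typescript
// Hypothetical shapes, purely for illustration.
interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

interface ChatSession {
  id: string;
  title: string;
  messages: ChatMessage[];
}

// Return every session whose title, prompt, or answer contains the keyword,
// case-insensitively (the most basic form of the requested search).
function searchChats(sessions: ChatSession[], keyword: string): ChatSession[] {
  const needle = keyword.toLowerCase();
  return sessions.filter(
    (s) =>
      s.title.toLowerCase().includes(needle) ||
      s.messages.some((m) => m.text.toLowerCase().includes(needle))
  );
}
```

Searching by topic would need something smarter (embeddings, like the existing semantic search feature), but keyword matching alone would cover most day-to-day lookups.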

P.S. It would also be useful if we could rename chats.

3 Likes

Though I use the Fibery AI chat/build less, in favour of external chats connected via MCP, I still do manually go through history to find the correct session to resume, which is easier in my case due to having fewer chats in the history.

Also, I agree that being able to search chat history/sessions would be very helpful for those users of ours that would opt for the internal Fibery AI more often.

I still remember how annoying it was to NOT have a chat history search function in the older ChatGPT desktop app.

1 Like

Support this too.
This seems to be a limitation of just about all LLMs / AI tools; I have never found one that supports it. Fibery does keep all the chats with no deletion, which is great. I trialed Notion and it deletes chats automatically after 30 days, so you lose a lot.


In general, I think good search with good indexing at the forefront is going to become a bigger need, both here in Fibery and in the connected AI tools. With AI there is now a huge proliferation of content, often stuff you don’t want to lose track of but have no obvious or time-efficient way to file anywhere, since AI chats can get fragmented across the subjects discussed.


1 Like

Thanks for sharing this perspective, Sev! :+1:

In our case we also have Claude connected via MCP, and our architects tend to use it more actively for deeper analysis and external workflows.

However, I think search inside the Fibery AI chat history might actually be even more important for regular workspace members. :thinking:

Many users in our team will interact with Fibery AI simply while working with data, numbers, and entities inside the workspace. They ask quick questions about the database, fields, entities, or reports during their daily work.

For these users, AI chats will become a kind of working conversation history, and being able to quickly search previous answers would be extremely helpful.

Without search, people (I think) will end up asking the same questions again because it’s easier than manually scrolling through past chats. :grinning_face:

Hi, I agree with you, but I wonder how your users are not hitting the low limits of Fibery AI?

1 Like

Yes, we also noticed that the limits can be reached quite quickly :thinking: . From our observations, the limits seem to be consumed much faster in Build mode, so our current plan is roughly the following:

  1. Encourage non-power users to primarily use the Ask mode for quick questions about data, entities, fields, etc., instead of Build.

  2. For more complex creation or structural work, we plan to use Claude via MCP, which will likely be used mostly by architects or more technical users.

  3. I also suggested a feature idea to the Fibery team to allow selecting the AI model in chat (see the sketch below).
    For many simple questions, a lighter and faster model would be more than enough.
    Together with chat history search, this could also reduce repeated questions and unnecessary load on the AI.

  4. And as another possible option — if AI becomes a critical daily workflow and the limits remain restrictive — it might make sense to consider a workspace-level option to purchase additional AI credits when needed.

We are still experimenting with the best setup, but this seems like a reasonable direction so far :slightly_smiling_face: .
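To make item 3 concrete, here is a rough sketch of what per-request model selection could look like. The model names and task kinds are placeholders, not anything Fibery actually exposes:

```typescript
// Hypothetical model routing: light model for quick questions, strong model
// for heavy Build-style work. All names here are invented for illustration.
type TaskKind = "quick-question" | "build";

const modelForTask: Record<TaskKind, string> = {
  "quick-question": "light-fast-model", // plenty for simple data/field questions
  "build": "strong-model",              // schema changes, view building, etc.
};

function pickModel(task: TaskKind): string {
  return modelForTask[task];
}

console.log(pickModel("quick-question")); // "light-fast-model"
```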

:thinking: That sounds great, but it goes against what I was told by the support team last month (see the quoted text below):

Under the PRO plan you get limited and unlimited AI features.

UNLIMITED
⏩ Text assistant – Helps you write and edit in rich text fields
⏩ Semantic search – Search with AI that understands meaning, not just keywords
⏩ Find highlights – Automatically extracts key points from text
⏩ AI in Automations – AI steps in automation rules for processing and generating content
⏩ Video & audio transcription (almost unlimited) – Converts recordings to text (50 hours/month + 5 hours per paid seat on Pro)

This is “free”. No limits.

LIMITED
⏩ AI Smart agent – left panel (Ask, Build) AI agent. Can answer anything about your space + build stuff for you. You get 100 “questions” per month, plus 25 more calls for each paid seat.
⏩ Ask AI to Configure – also counts towards the 100/mo total

WHEN YOU HIT THAT LIMIT
You can’t use the limited AI features anymore. You either upgrade or simply wait until the reset. You can’t pay over the limit as of now.

CHECK STATUS
You can check how much you’ve used here. Help & Support → Workspace status. You can see me almost reaching that limit in one of my workspaces (I mostly used Build, btw).

Perhaps @dmytro can help shed some light on whether this has changed, or if the chat mode deducts fractions of questions versus the build mode?
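For anyone doing the math on seats, the quoted limit reduces to a one-line formula; a quick sketch using only the Pro-plan numbers from the quote above:

```typescript
// Monthly Smart-agent call limit on the Pro plan, per the support quote:
// 100 base "questions" plus 25 extra calls per paid seat.
function monthlyAiCallLimit(paidSeats: number): number {
  return 100 + 25 * paidSeats;
}

console.log(monthlyAiCallLimit(10)); // 350 calls/month for a 10-seat workspace
```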

Thanks a lot — this makes the limits much clearer now! :+1: :grinning_face:

It also makes me wonder whether Ask mode and Build mode might internally have different “weights” in terms of system load (and maybe API prices :grinning_face_with_smiling_eyes:), which would actually make sense.
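Purely as a thought experiment, different “weights” could mean that each call deducts a different fraction of the monthly quota. A sketch with invented numbers (only the guess that the modes might be priced differently comes from this thread):

```typescript
// Hypothetical per-mode quota weights; the values are made up.
const modeWeight = { ask: 0.5, build: 1.0 } as const;

function deductQuota(remaining: number, mode: keyof typeof modeWeight): number {
  return Math.max(0, remaining - modeWeight[mode]);
}

console.log(deductQuota(100, "ask"));   // 99.5
console.log(deductQuota(100, "build")); // 99
```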

If that’s the case, it might be interesting to consider adding a workspace setting that allows restricting available AI modes by user group.

For example:

  • Non-power users → Ask mode only

  • Architects / power users → Ask + Build

This could help teams manage AI usage more efficiently and prevent heavy Build operations from being used unintentionally when a simple Ask query would be enough.
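In implementation terms, the setting could be as simple as a per-group allowlist of modes. A hypothetical sketch, not an existing Fibery setting:

```typescript
// Hypothetical workspace setting: which AI modes each user group may use.
type AiMode = "ask" | "build";

const allowedModes: Record<string, AiMode[]> = {
  member: ["ask"],             // non-power users: Ask only
  architect: ["ask", "build"], // architects / power users: Ask + Build
};

function canUseMode(group: string, mode: AiMode): boolean {
  return allowedModes[group]?.includes(mode) ?? false;
}

console.log(canUseMode("member", "build")); // false: Build is blocked for members
```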

And since, as mentioned in your message above, purchasing additional usage beyond the limit isn’t available yet, this kind of control could well appear in the future (AI seems to be a major focus area for Fibery right now :thinking:).

1 Like

Yes, agreed, but Fibery has also stated on the roadmap that they will remove the separate Ask/Build modes, so they will need to come up with something else.

And critically, AI is now so integral to the system that I think a proper credit system (with a choice of models) is necessary. It cannot be allowed to run out during the month with no way to “top up”, otherwise the work stops. I also don’t want people thinking too much about “should I make this request?”; it’s hard enough to get them to even use the features in the first place.

The “Workspace status” page I didn’t previously know about, and it is exactly what I was looking for, thanks! The Automations and Syncs usage should be broken down by which steps/DBs are actually consuming it, so we can optimise and avoid hitting the limits.

1 Like

We will work on new pricing for AI very soon

3 Likes

I just had an idea that may be worth considering.

Separating the AI usage type, and therefore the limits, by whether Architect mode is enabled or not.

This would allow:

  • architects to have heavy, build-mode-style AI sessions to modify the schema, build views, etc. by enabling Architect mode. This could have lower limits.
  • both regular users and architects (with architect mode off) to have lighter, non-build-mode-style AI sessions. This could have higher limits.

Ironically, Ask mode consumes way more tokens than Build mode. But we are working on this problem now

As I was writing that previous message, I asked myself that! And would you not consider BYOK (Bring Your Own Key), at least as an option?

Unlikely. But most likely we will start to sell credits

I do not blame you. It makes sense for the end user, and you can have much better control over the AI feature. The problem I see is how to price it so that it does not make clients think, “For that price, I will just have my users use Claude etc. via Slack, where they have access to our other tools (MCP) and skills.”

For example, thinking out loud here:

  • If in Claude the user can say “Book an urgent appointment for the team to discuss topic x” thanks to the existence of SOP skills, the system would simply ask which of the next few days works.
  • But in Fibery Chat the user would have to say “Create a meeting request in Meetings, with timing=urgent, invitees = the stakeholders field of the entity from Projects where Name looks like ‘topic x’.”
  • How do we solve this lack of SOPs/Skills in the Fibery AI?
  • Do we even need to solve it? Or is it best to leave that kind of usage to external AI and keep bolstering the Fibery Remote MCP server?

Exactly what we want in general :slight_smile:

I think this is what many tools are converging around. The core LLM is our primary interface and the others are tools that connect to it.

2 Likes

I see. I cannot say that I disagree. And you have done an amazing job of improving the Fibery MCP server, and I am sure there is even more to come. Thanks team.