April 3, 2023 / 👽 Fibery AI (Create Space with AI help, Text Assistant, AI in Automations)

Today we’ve released the first iteration of Fibery AI.

We experimented with the new GPT technology and discovered that it can already radically change how you interact with Fibery, including workspace creation, information organization, and overall productivity. Here is the blog post that reveals our approach and future plans; it contains all the important details, so check it out.

In this first iteration we’ve released several things.

AI Space Creation

Create new workspaces in minutes, without the need for extensive knowledge of building blocks. Fibery AI Assistant can create a new space with databases, relations, fields and views based on a brief domain problem explanation.

NOTE: Space creation takes time, up to several minutes in some cases. Here is the guide with some prompt examples.

AI Text Assistant

In the previous experimental release the Text Assistant was not great, but now we’ve added a chat-like interface that works nicely with our panels.

Prompt templates (commands) are a good way to handle repetitive actions. Note that you can create your own private commands or make them public for the whole team. Check the AI Text Assistant user guide.

AI in Automations

Automations are great for handling cases like automatic summarization, information extraction, automatic translation, etc. Check some examples here:

The Call AI action is supported for Text, URL, Phone, and Number fields.

Here is the AI in Automations user guide. Note that you need your own OpenAI API key to make automations work.


Amazing, as usual. Looking forward to testing :fire:


So glad to see it happening!
It wasn’t long ago that I posted a bunch of links about possibilities, and now they’re here; it’s like magic.


It would be cool to see your use cases and get any other feedback!


Awesome! :grinning:

Looking forward to AI-supported Vizydrop Report generation and Knowledge Assistant!


Yes, all of this stuff is awesome and really great to see so relatively quickly (AI space creation in particular; I will have to give this a workout ASAP!). But I am especially looking forward to AI-driven visualization since, try as I might, I often can’t manage anything more than fairly simple visualizations, despite the power and relative ease of use (e.g. drag-and-drop, etc.) of Vizydrop.

I am hopeful that, once you have AI-driven assistance for these normally complex areas of friction (Space setup and visualization), Fibery will be much easier for people to get started with and use to its fullest, and thus adoption will soar. Nobody else seems to be doing things quite this cool with AI for use cases beyond simple text manipulation, and it is :fire: IMO!


The recent AI focus has been timely for us and I’ve been working on a number of applications of fibery to assist with the messy frontend of our content production process.

I had been doing some experimentation with openai’s API directly to demonstrate some discrete examples before trying out what has been integrated in fibery. I think what has been implemented has been fantastic, but I wanted to highlight some areas where fibery’s structure around the API is somewhat limiting.

  • Model configuration. Fibery really should allow specifying the model manually, because OpenAI is constantly iterating on the models. Right now we can’t specify the model used for AI actions in automations, and we can no longer do so in rich text either, though we could before. This limitation also blocks us from using fine-tuned models. We had been using fine-tuned models to get more predictable results in some cases, but found that the new ChatGPT (turbo) API with examples could mitigate the need for them. However, that isn’t possible either…
  • Full chat API support. I assume you technically support this API behind the scenes, but in my experience with more nuanced transformations it is sometimes important to provide a couple of prompt/completion pairs before the final prompt, so there is a clearer set of examples for how you want the completion formatted (the system, user, and assistant roles can be used for this). I’ve done some extensive testing and found this approach to be much, much more effective than trying to pack it all into a single prompt. I typically have a system-role message to provide context or the mission of the assistant, then a couple of user/assistant prompt/response pairs, then a final user-role message with the prompt I want completed.
  • Update field from rich text. At the moment I don’t see an easy way to highlight something in rich text, choose an AI prompt, and have the output saved into a specific field. The capability as is seems so well suited to processing messy, unstructured data, but it is just missing the piece to quickly update a field. For example, you might have tons of text extracted from a webpage. You might want the user to highlight a subset of that text, tell it to summarize in some style, and then update the page summary field from the highlighted text.
  • Action buttons associated with fields. With content production combined with AI/automation, you sometimes want to err on the side of caution early on and make the content team feel in control. So, instead of always generating text via the API when the input data changes, I often want a button next to a field. A field like “Title” might have an action button of “Update Title”, or you might even want three different ways to update the title. This pattern is very unwieldy in Fibery right now and isn’t as intuitive as it could be.

Thank you for sharing your approach, very interesting. I’m going through a prompting guide myself and have discovered that crafting the right prompt is not as easy as it seems at first.

Here is a quick code snippet of how it looks in practice. I’m excluding the string variables, but you can see the basic structure. I forgot that I was using only a single prompt/completion example pair in this case.

    res = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": context},
            {"role": "user", "content": example_input},
            {"role": "assistant", "content": example_output},
            {"role": "user", "content": content},
        ],
    )

It will be relatively hard to expose these options to users. The design won’t be easy, and we don’t want to complicate things for now. But we have this in mind for the future.


This is awesome! The possibilities are endless! I’m currently designing prompts for content creation purposes. So far I’ve created a table with content ideas based on selected input, and I’ve created prompts for metaphors and analogies based on the selected input. Works great!

Feedback so far

  • It would be nice if there were an easier way to edit existing prompts. My prompt for the content calendar is quite large. As far as I know, I can only edit it after running it first.
  • Is it possible to let the user ‘visually know’ that the AI is processing a (quite large) prompt? It takes a while. That is not a problem in itself, but as a user you don’t really know whether it’s running or not.
  • The output of my prompt is a table with content ideas. The table has 2 columns (title and description) and 10 rows (each row is a content idea). It would be awesome if I could somehow select that table and create content entities for each row.
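As a rough illustration of the post-processing this use case would need, here is a minimal sketch (a hypothetical helper, not a Fibery feature) that parses a two-column markdown table like the AI output into (title, description) pairs, which could then be turned into entities:

```python
def parse_markdown_table(text):
    """Parse a simple two-column markdown table into (title, description) tuples."""
    rows = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the header row and the |---|---| separator row
        if len(cells) < 2 or set(cells[0]) <= set("-: ") or cells[0].lower() == "title":
            continue
        rows.append((cells[0], cells[1]))
    return rows

table = """
| Title | Description |
| --- | --- |
| Idea 1 | First content idea |
| Idea 2 | Second content idea |
"""
print(parse_markdown_table(table))
# [('Idea 1', 'First content idea'), ('Idea 2', 'Second content idea')]
```

Each tuple could then be passed to an entity-creation API call or automation step.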

I get that it is somewhat complicated, but the example is the most important part. I would also push back and say that your existing support for using OpenAI in templates is far more complicated than what I’m proposing.

What I’m talking about is: when you save a prompt to reuse, you could provide an example and its completion as part of that, which gets passed into the query as I outlined. Even a single prompt/completion example is far, far more useful than a long initial prompt. Having a single prompt that both explains what you want and provides the input is very ambiguous to the model.

I had a good bit of trouble getting outputs even remotely close to usable with the current capability. I know what I demonstrated is somewhat technical, but I think you could easily make it very intuitive by allowing the option to provide a single example input and output, rather than relying on each user to do that inconsistently within a single prompt.


@rothnic I do wonder if there are two different approaches: a) user education to write better prompts, or b) restricting users to a specific prompt structure. I find examples don’t always work well, or are sometimes not needed if the question is a simple one suited to zero-shot.
I feel that pushing users into a specific prompt structure will ultimately limit the flexibility of other users. WDYT?

Though maybe with a bit of magic (GitHub - ucinlp/autoprompt: AutoPrompt: Automatic Prompt Construction for Masked Language Models) both problems can be solved.

In my personal experience, I was able to get much more accurate responses from the LLM after changing the structure of the initial prompt alone.
And I believe langchain only works on the initial prompt, doesn’t it? I might be wrong there, as I haven’t run traces with gpt-3.5; it could be doing something extra when it’s using a chat model.


You are right; here is an example of the ReAct pattern with a single prompt that does magic with some Python help.
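For readers unfamiliar with the pattern, a ReAct loop can be sketched roughly like this. The model call is stubbed out with a scripted function and the tool names are illustrative; a real version would send the growing transcript to the API on each turn:

```python
import re

# Toy "tools" the model can invoke via "Action: name[arg]" lines (hypothetical names).
TOOLS = {
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_llm(transcript):
    """Stand-in for a real model call; scripted just to show the loop shape."""
    if "Observation:" not in transcript:
        return "Thought: I need to compute this.\nAction: calculate[2 + 3 * 4]"
    return "Thought: I have the result.\nFinal Answer: 14"

def react(question, max_turns=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_turns):
        reply = fake_llm(transcript)
        transcript += reply + "\n"
        # Stop when the model declares a final answer
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Otherwise, run the requested tool and feed back an Observation
        match = re.search(r"Action: (\w+)\[(.+?)\]", reply)
        if match:
            tool, arg = match.groups()
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return None

print(react("What is 2 + 3 * 4?"))  # 14
```

The "Python help" in the linked example plays the role of the tool calls here: the model reasons in text, and a small script executes the actions it requests.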

Yes, so far you have to run the prompt to edit it. We are collecting feedback here and will improve this in the future; it is far from ideal now.

Hmm, there should be a loader while the prompt is processing. Maybe I missed the problem?

Can you show an example? I don’t quite understand the problem yet.

@mdubakov In this video I explain both situations :slight_smile:


Thanks a lot! We will improve the visual indicator. As for the rows in a table, that is more challenging; we will think about this use case.

Great thanks!

Another solution is also fine, of course. The goal is to create entities with Name + Description based on the AI output :smile:

The particular use case for me is matching a particular writing style, which previously I could only get good results on using fine-tuned models.

Essentially, we are taking unstructured product details and transforming them into a succinct summary. This involves a combination of requirements on the output: what kinds of things are important, formatting, and, the more difficult part, what not to include. I discussed a few different approaches in the ChatGPT hacking Discord and referenced OpenAI’s docs on prompting, and the approach I described is what produced the most predictable output.

If you are doing simple summarization, extraction of structured data, or transformation without strict output requirements, a single prompt works well. However, when you have a more specific output format, style, or things you don’t want included (the opposite of extraction), I found examples to be the most straightforward and intuitive way to get that, and it is what OpenAI suggests. It really comes down to cases where in the past you’d need to fine-tune a model.
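The example-based setup described here can be sketched as a small helper that packs a system message, few-shot example pairs, and the final prompt into the chat-completion message format (the function name and strings are illustrative, not a Fibery or OpenAI API):

```python
def build_messages(system, examples, prompt):
    """Pack a system message, few-shot (input, output) example pairs,
    and the final prompt into the chat-completion message format."""
    messages = [{"role": "system", "content": system}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages(
    "Summarize product details in our house style.",
    [("Raw details for product A...", "Polished summary of product A.")],
    "Raw details for product B...",
)
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```

A UI could expose just three fields (instructions, one example pair, input) and build this list behind the scenes, rather than relying on each user to encode examples inconsistently within a single prompt.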
