Q: What did you do with Fibery GraphQL API? 👂

We released the GraphQL API last week and are curious to learn what you did with it.
Please share your use cases and code samples!


I am very new to using GraphQL, so it has been a lot of trial and error.

I’ve developed a fault report system for mobile phones using Google AppSheet, Google Sheets, Fibery, Slack, and our Mobile Device Management system.

I’m using Fibery as the backend, with all our phones and all the information that comes with them, contact information and more, as well as charts and boards calculating how many days a unit has been deployed before it’s discarded, etc.

In the middle of this I’m using Integromat to connect it all, and when it comes to GraphQL, it has saved me so many operations: I can pull out all the information I need from different databases with just 1 API call instead of 5 different ones.

Same goes for attaching files: instead of using 3-4 different calls, I can attach a file from a URL in just 1.
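For anyone curious what "1 call instead of 5" looks like, here is a rough sketch. The database and field names (`findPhones`, `assignedTo`, etc.) are made up for illustration; Fibery generates `find<Type>` queries from your actual database names.

```javascript
// Hypothetical schema: a Phones database linked to a Contacts database.
// One GraphQL call returns each phone plus its linked contact's details,
// which previously took several separate REST calls.
const query = `
  query {
    findPhones {
      id
      name
      assignedTo { name email }
      deploymentDate
    }
  }
`;

// The POST body that Integromat/Make sends to Fibery's GraphQL endpoint.
function toGraphqlBody(q) {
  return JSON.stringify({ query: q });
}
```

The nested `assignedTo { name email }` selection is what collapses the extra calls: linked databases come back in the same response.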

I would love to see what other people have done and the possibilities of GraphQL; as I wrote before, I’m very new to using it.


I had a need for creating outputs from the data stored in Fibery. Initially, I used the make.com integration with Documentum to build the outputs. These were incredibly fragile and time-consuming to create. Not an ideal solution.

I took the time to learn rudimentary JavaScript and GraphQL and created my own set of functions within the Fibery automation environment that work much like an MVC framework. It is about 1,000 lines of code at this point. I personally think the code is ugly, so I don’t plan on posting it, but I’m happy to share if you drop me a message. It is very specific to my Fibery design but might be useful to others in concept.

I build the queries through GraphQL, and the outputs are combined markdown and JavaScript. Once I learned how to properly use GraphiQL and the script output for Fibery, I found the process straightforward enough.

The benefit of this unified approach is that I can use the same script every time and only call the outputs that are pertinent for the data type in question. This one script can be kept in version control and copied and pasted into each automation needed, then modified to call the queries and produce the output types desired. The script works with the PDF, Slack, and Email outputs as well, as both a script and a rich-text replacement automation, by uncommenting or commenting two lines of code when pasting. It’s not ideal, but it’s the best development scenario within Fibery I’ve figured out so far, and it builds on a common set of code for continual improvements.

There are a number of helper functions too. For example, I created one function that looks for a variable that is set in the script that defines the output type. If the output is of type “fibery,” for example, any data is output in the Fibery active bidirectional link format. If the output is otherwise defined, it will output the name or title of the data instead. Formatting on output is as beautiful as Markdown gets. If the images in the markdown are publicly available (not stored in Fibery), this works well. If the images are stored in Fibery, some additional work needs to be done to get the images to appear in the output. I haven’t spent time on this yet since most of our images in the markdown come from a public share.
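As an illustration only (not the actual code described above), the output-type switch could look something like this. The `#[[...]]` link syntax is a placeholder, since the real Fibery mention format depends on how the rich text is written back.

```javascript
// OUTPUT_TYPE is the variable set once at the top of the pasted script.
const OUTPUT_TYPE = 'fibery'; // or 'pdf', 'slack', 'email', ...

// Render an entity either as a Fibery bidirectional link (placeholder
// syntax here) or as plain text, depending on the configured output type.
function renderEntity(entity, outputType) {
  if (outputType === 'fibery') {
    return `#[[${entity.name}]]`; // placeholder for Fibery's mention format
  }
  return entity.name;
}
```

Every helper that emits entity data funnels through one function like this, so switching an automation's output type is a one-line change.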

I want to extend this to the Make.com integration I was working with prior. I haven’t figured out how to successfully make a query through the API integration yet. If anybody knows good instructions or helpful steps for this setup, please share. I haven’t found one yet.

What I would love to understand is how to make a GraphQL query through the make.com integration. Can somebody show a very quick example of how that would work, please?

I love what Fibery makes possible. Thank you so much!


For making a GraphQL request through Make, you can just use the GraphQL helper if you even need it: https://INSERT-YOUR-WORKSPACE.fibery.io/api/graphql/

To get your API token, just use this documentation: https://api.fibery.io/#authentication

For the request itself, it’s pretty much just like this with an HTTP module:
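Since the original screenshot isn’t reproduced here, this is roughly the request the HTTP module needs to send, expressed as code. Workspace, space, and token values are placeholders; the space-scoped endpoint shown is an assumption based on what the GraphiQL helper uses.

```javascript
// Sketch of the Make HTTP module settings. All values are placeholders.
function buildFiberyGraphqlRequest(workspace, space, token, query) {
  return {
    url: `https://${workspace}.fibery.io/api/graphql/space/${space}`,
    method: 'POST',
    headers: {
      Authorization: `Token ${token}`, // Fibery API token auth
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query }),
  };
}
```

In the Make HTTP module these map one-to-one: the URL field, method POST, an Authorization header, and a JSON body with a `query` field.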

I think the Fibery API module in Make is based on Fibery’s old API, from before they released GraphQL.
So I don’t think you can use that module to make requests anymore, unless you use the old API requests: https://api.fibery.io/#type. Use the HTTP module instead.

The GraphQL update is one of the best ones yet imo.


Thank you so much! This is perfect and exactly what I needed. I can’t believe I kept overlooking the simple HTTP connector on Make. It’s one of those moments where one can feel so silly and yet so grateful! Thank you! The instructions are perfect!


@mdubakov we’ve started today :smiley: We will use it to:

  • Send Calendly calls into Fibery (and let Fibery do some magic, like linking the call to the contact and triggering some automations)
  • Send hot leads in Active Campaign to Fibery (and then also a lot of magic)
  • Send WooCommerce orders to Fibery
  • Send proposals of our own software SimplyRight (sign & pay custom offers) to Fibery

So what we basically do is create records (via N8N instead of Zapier/Make etc.) and then let Fibery do all the magic.

Before, we did all the magic in N8N (= open source) because ClickUp can’t do shit :sweat_smile: But with all the formulas and automations, and a little help from @Chr1sG, we can create everything in Fibery. Which is really awesome and a USP for us!


Using Fibery as our semi-structured knowledge base for retrieval-augmented generation (RAG) with large language models (LLMs).

I run an agency that’s mostly focused on building apps powered by generative AI. We’ve found that as good as base models like GPT-4 are, their real potential only shines when you use a sequence of carefully constructed prompts supplemented with contextually relevant information.

That is to say, maxing out the 8k token window with very detailed and very relevant information to answer a specific type of question or perform a specific task can get amazing results.

We’re doing this with a combination of Fibery (via the GraphQL API) to curate and maintain databases of various information (SOPs, FAQs, meeting transcripts, etc.), Qdrant as our vector database / semantic search, and OpenAI’s GPT-4 as the language model to put it all together.


It would be interesting to read some high-level flow how it works :slight_smile:

We’re curating text with a specific intent in Fibery. Contextual data suitable for entities (like Q&As, FAQs, service definitions) is stored as entities in databases. Meanwhile, unstructured long-form data such as SOPs, long-form methodologies, and our “About Us” are kept in a Doc view. This serves as the primary information source for the LLM.

We also gather supplemental context data. This data may not have a specific purpose but is relevant to our agency’s operations and aids in bridging the LLM’s “knowledge cutoff date” gap. We’re considering a 6-month expiration date for this data since its role is to provide recent context. Examples include meeting transcripts, podcast and YouTube video transcripts, research papers, news articles, blog posts, etc.

We maintain a sync process that parses, embeds, and saves the text to a vector database. But for those with a Postgres database handling around 100k vectors, the pg_vector extension will likely offer satisfactory performance.
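A minimal sketch of the chunking step in that sync process, before embedding. The sizes and overlap here are arbitrary assumptions, real token counting would use a tokenizer, and the embed/upsert calls to the vector database are omitted.

```javascript
// Split a long document into overlapping chunks so each embedding covers
// a bounded span of text while preserving context across boundaries.
function chunkText(text, maxChars = 2000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += maxChars - overlap) {
    chunks.push(text.slice(start, start + maxChars));
    if (start + maxChars >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and upserted with its source metadata (doc ID, database, expiration date), so stale supplemental context can be purged on schedule.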

For retrieval-augmented generation, we use a custom SvelteKit + Flask app for complex tasks like generating proposals. We also have specialized/experimental apps developed with Python Streamlit for tasks such as producing meeting summaries, social media content, creating Trello cards for issues/bug reports, and more that I am probably forgetting.
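The retrieval-augmented step itself is conceptually simple; here is a toy version of packing ranked chunks into a prompt, with a character budget standing in for the token window. Nothing here is from the actual apps, it just illustrates the flow.

```javascript
// Given chunks already ranked by semantic similarity, pack as many as fit
// under the budget into a context block, then append the user's question.
function buildPrompt(question, rankedChunks, budget = 6000) {
  const parts = [];
  let used = 0;
  for (const chunk of rankedChunks) {
    if (used + chunk.length > budget) break; // window is full
    parts.push(chunk);
    used += chunk.length;
  }
  return `Context:\n${parts.join('\n---\n')}\n\nQuestion: ${question}`;
}
```

The resulting string is what gets sent to the model; because the chunks are ranked first, truncation always drops the least relevant material.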

Looks something like this: