Names vs IDs reasoning

What’s the reasoning behind using Space names, field names, etc. in Fibery URLs and within the API? That was a new experience for me, and it means that we need to be dead set on our space and field names before we put any APIs and 3rd party automations into production. I’d love to hear the team’s thoughts behind that. I will say that it makes writing API calls much easier than in other products.


Hey @Chr1sG, sorry to ping you directly; do you have any insights on this? As I move more and more of my systems over to Fibery, I’d like to understand the reasoning behind this better, so I can make my external automations more robust to naming changes and ID reference conventions.

Hey there,
Sorry that no one has offered any reply before now.
In terms of the formation of the URLs, I can’t give a useful answer as to why that construction (using names) was chosen, but I am interested to hear why you have concerns regarding APIs and 3rd party automations. In general, these can be designed to be independent of the space/database names. If you have concrete examples of where things will break when space/database names change, I’ll be interested to hear them, and can perhaps offer some suggestions to make them more ‘change-proof’.


Hey @Chr1sG! No worries about the delay, thanks for engaging!

Haha, yeah, no idea about the URLs either, those things are wild.

For the API, I’m using n8n (workflow automations) and their HTTP module (basically, just sending HTTP requests in a GUI). In the POST body pasted below, you can see that if I rename any of the Space, Database/Type, or Field names, this API call (and every other automation that uses those Spaces/Databases/Fields) will stop working.

[
  {
    "command": "fibery.entity/query",
    "args": {
      "query": {
        "q/from": "FranDev/Discovery Process",
        "q/where": ["=", ["fibery/id"], "$discoveryProcessId"],
        "q/limit": 1,
        "q/select": [
          "fibery/id",
          {
            "Tallyfy/Tallyfy Processes": {
              "q/where": [
                "=",
                [
                  "Tallyfy/Process Template",
                  "Tallyfy/Feature Set",
                  "fibery/id"
                ],
                "$franDevProcessFeatureEntityId"
              ],
              "q/limit": 1,
              "q/select": [
                "fibery/id",
                "Tallyfy/Process ID",
                {
                  "Tallyfy/Process Template": [
                    "fibery/id",
                    {
                      "Tallyfy/Template Tasks": {
                        "q/where": [
                          "=",
                          ["Tallyfy/FranDev Process Feature", "fibery/id"],
                          "$franDevProcessFeatureEntityId"
                        ],
                        "q/select": [
                          "fibery/id",
                          "Tallyfy/Task ID",
                          "Tallyfy/Task Alias"
                        ],
                        "q/limit": 1
                      }
                    }
                  ]
                }
              ]
            }
          }
        ]
      },
      "params": {
        "$discoveryProcessId": "{{ $('Webhook').item.json.body.fiberyProcessId }}",
        "$franDevProcessFeatureEntityId": "{{ $('Webhook').item.json.body.franDevProcessFeatureEntityId }}"
      }
    }
  }
]

Is your workaround to use the schema.query API function and, like, iterate through the data types there for every workflow I create, to dynamically find and use whatever the entity name is at that point? I can’t think of what else I’d do.

Is there not a publicly accessible, static, unique ID for every Space/Database/Field in Fibery? I haven’t been able to find one, but if there is, then this is no longer an issue.

Each ‘object’ (space, database, entity, view, etc.) does indeed have a unique ID. In the case of databases, this can be found via a schema query.
You wouldn’t need to iterate through the schema on every execution, but rather you could query once and generate a ‘dictionary’ which you can then make use of when coding each workflow.
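To make that concrete, here’s a minimal sketch of the one-time dictionary build in JavaScript. The `buildIdDictionary` helper name and the variable shapes are my own invention, not official Fibery client code; I’m only assuming the usual schema result layout (a `fibery/types` array whose entries carry `fibery/id`, `fibery/name`, and `fibery/fields`):

```javascript
// Sketch: turn one schema query result into a lookup 'dictionary'
// keyed both by database UUID and by current database name.
// ASSUMPTION: schemaResult is the "result" object returned by the
// fibery.schema/query command.
function buildIdDictionary(schemaResult) {
  const byId = {};
  const byName = {};
  for (const type of schemaResult["fibery/types"]) {
    const entry = {
      id: type["fibery/id"],
      name: type["fibery/name"], // e.g. "FranDev/Discovery Process"
      fieldIdsByName: {},
    };
    for (const field of type["fibery/fields"] || []) {
      entry.fieldIdsByName[field["fibery/name"]] = field["fibery/id"];
    }
    byId[entry.id] = entry;
    byName[entry.name] = entry;
  }
  return { byId, byName };
}
```

You’d fetch the schema once (a POST to `/api/commands` with `[{"command": "fibery.schema/query"}]`), run it through something like this, and stash the result wherever your n8n workflows can read it.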

Out of interest, what are your reasons for choosing n8n over Zapier/Make?

Oh my HECK :man_facepalming: How did I not see those??? OK. Those ‘object’ fibery/ids can’t be used directly in the API, right? But like you say, I can do a schema query and build a dictionary on the fly, which will totally work for me. No, wait, your message says I wouldn’t need to query on each execution. Can the IDs be used instead of the public field names in the API?

So, now, a question of caching and invalidation (just to avoid having to call or at least parse the Schema on every request). How can I know if the schema has changed?

I see a couple of ambiguous fields on the Schema API response, like below. What are these?

{
  "success": true,
  "result": {
    "fibery/version": 2290,
    "fibery/types": [...],
    "fibery/meta": {
      "fibery/version": "1.0.79",
      "fibery/rel-version": "1.0.8"
    },
    "fibery/id": "UUID"
  }
}
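For the caching question, if that top-level `fibery/version` number is a schema revision counter (it appears to increase when the schema changes, but that’s my assumption, not something I’ve seen documented), then it makes a handy cache key. A sketch of what invalidation could look like:

```javascript
// Sketch of version-based schema cache invalidation.
// ASSUMPTION: the top-level "fibery/version" in the schema response
// changes whenever the schema changes; if that holds, comparing it
// is enough to know whether the cached dictionary is stale.
function refreshSchemaCache(cache, schemaResponse) {
  const version = schemaResponse.result["fibery/version"];
  if (cache && cache.version === version) {
    return cache; // schema unchanged: reuse the cached parse
  }
  return {
    version,
    types: schemaResponse.result["fibery/types"],
  };
}
```

You’d still make the schema request each run, but you’d skip re-parsing (and re-building the dictionary) whenever the version matches.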

Outside of n8n, I have lots of experience with Make.com and not as much with Zapier. Any time I’ve used Zapier, I haven’t liked how limited it is, especially its branching abilities, which I perceived to be VERY limited.

Make.com is great - I really like their editor and their error/warning resolution features. However, it doesn’t have great support for looping, and it can’t do multiple triggers or run actual JavaScript code.

n8n does all these things pretty well: looping and multiple triggers both work. The exception is error handling, which you have to build out more yourself, but it’s then a lot more capable of dealing with exceptions on its own. Being able to run JS code in a node is also a HUGE plus. I originally started using n8n for my home projects because I was running into the usage limits on Make and didn’t want to pay. n8n is open source and self-hostable, and the community self-hosted version gives you unlimited workflows, nodes, and operations for free.

n8n has been adding a number of really cool Enterprise-only features to their cloud and self-hosted offerings, which is too bad for hobbyists, but it’s really nice to see those features implemented.

I just meant that you could do an initial schema query and save the results (with pen and paper if you wanted!) so you know the UUIDs for databases X, Y, and Z.
Then, in any execution which affects db ‘X’, you can query the schema and filter-match on its UUID to get the current space/database name (no need to iterate), so that it will still work even if X is renamed to A.
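In code, that UUID-to-current-name step might look like this (the `currentNameForId` helper name is mine, and I’m again assuming the standard `fibery/types` / `fibery/id` / `fibery/name` shape of the schema result):

```javascript
// Sketch: given a database UUID recorded once up front, find that
// database's *current* name in a fresh schema result, so a rename
// in the Fibery UI doesn't break the workflow.
function currentNameForId(schemaResult, dbId) {
  const type = schemaResult["fibery/types"]
    .find((t) => t["fibery/id"] === dbId);
  return type ? type["fibery/name"] : null;
}
```

The returned name is what you’d then splice into `q/from` (and similarly for field names in `q/select`) when building the entity query.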
