Is there a reason that Fibery can't be used by big organizations? I mean a fundamental one, like "making it scalable to thousands of users would require a different architecture, or too much code would have to be rewritten". Not saying that is the case, just thinking of possible reasons; I hope the answer is "Fibery is easy to scale".
The reason I am asking is that I think it makes sense for Fibery to support use in big organizations at some point. Small and medium-sized companies can become large ones. What if you have gotten spoiled by all the beautiful features of Fibery, and then you grow too big for Fibery and have to go back to Jira?! A very sad day.
I have to work with Jira every day for my freelance work (a very painful experience), and I know it could be so much better with Fibery! If you could offer Fibery as a Jira alternative for big organizations, it would reduce so much suffering and create so many opportunities! People could really customize their tools around their processes, even in big organizations.
Our largest usage is ~700 users now in a single company. In general, Fibery is ready for 1-2K+ users in a single instance for sure; the problem is organizational complexity: how you configure it and how you support changes. It takes effort, but it is the same with tools like Jira; in large companies you usually have a dedicated team doing this.
In the past I have encountered problems with the maximum number of records in a DB before things would start to break (like getting timeout messages instead of seeing a table, or formula fields that stopped working). There are probably many factors that determine where these limits lie for a particular DB.
It is also possible that this situation has improved - I know that many changes have been made to the Fibery back-end to handle such limits more gracefully.
Another example of a Fibery issue that makes it unsuitable for large organizations, again stemming from its not being designed to handle large datasets: a scheduled Rule that has to process the entire dataset at once simply errors out, so the only workaround is to shard the work:
Add a Rule that automatically sets a serial field in newly created entities to something like [Public Id] % 10 (JavaScript is needed because there's no modulo operator in formulas); see the sketch after these steps.
Duplicate the original errored Rule TEN TIMES and make each one filter for a specific value of serial.
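For reference, here is roughly what that serial-setting script could look like. This is a minimal sketch, assuming the DB has a Number field named "Serial" (a hypothetical name), and using Fibery's automation-scripting API (`context.getService('fibery')`, `args.currentEntities`, `fibery.updateEntity`):

```javascript
// Fibery automation script (Rule action: "Script").
// Assumes a Number field named "Serial" exists on this DB - hypothetical name.
const fibery = context.getService('fibery');

for (const entity of args.currentEntities) {
    // Shard entities into 10 buckets so each of the duplicated Rules
    // only has to process ~1/10th of the dataset.
    await fibery.updateEntity(entity.type, entity.id, {
        'Serial': Number(entity['Public Id']) % 10,
    });
}
```

Each of the ten duplicated Rules would then filter on Serial = 0 through Serial = 9, so no single run touches the whole dataset.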
Now I have a giant mess of new Rules to maintain - multiplied by all the different places where problems like this crop up.
Probably need to move to the extravagantly expensive Enterprise plan to handle all the additional Rule executions (minimum USD $1000/$1250 per month, billed annually/monthly).
Do you have a specific use case in mind (that is particular to a large organisation) for which a repeating scheduled automation that processes 200k entities at one time is essential?
For what it's worth, I can possibly think of some workarounds that don't require you to maintain 10 automations, but it depends upon how frequently the entire dataset (200k entities) needs to be processed.
One full cycle per month? per week?
Due to multiple issues with large datasets, I was forced to take a "monthly batch update" approach for syncing my custom integration, because so many manual steps and external scripted API operations are required to make things work.
This results in a large number of records (a month's worth) subject to "cleanup operations" like this one.
When syncing was done daily, this cleanup was simple, because only a day's worth of outdated records needed to be cleaned up at a time.
But now I have to sync and then clean up an entire month's worth of data, which can easily include more than 20k old records to delete in a single DB. So it becomes just one more step that needs to be done manually/externally by scripting the API (GraphQL); a sketch of what that can look like follows.
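For context, the external cleanup step looks roughly like this. A minimal sketch, assuming Node 18+ (built-in `fetch`), a hypothetical account/Space/DB ("myaccount", "Sync", "records"), and a hypothetical "Is Stale" checkbox field; the endpoint pattern and the bulk `delete` mutation follow Fibery's GraphQL API, but the exact filter syntax here is an assumption:

```javascript
// Node 18+ (built-in fetch): bulk-delete outdated records via Fibery's GraphQL API.
// The account ("myaccount"), Space ("Sync"), DB ("records"), the "isStale"
// checkbox field, and its filter syntax are all hypothetical placeholders.
const FIBERY_TOKEN = process.env.FIBERY_TOKEN;
const url = 'https://myaccount.fibery.io/api/graphql/space/Sync';

const mutation = `
  mutation {
    records(isStale: { is: true }) {
      delete { message }
    }
  }
`;

async function cleanup() {
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': `Token ${FIBERY_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query: mutation }),
  });
  console.log(JSON.stringify(await res.json(), null, 2));
}

cleanup();
```

The useful part is that one filtered mutation can delete many records in a single call, which is what a 20k-record monthly cleanup needs.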
And now even that system is broken due to the new automation caps, unless my client decides to pony up $1250 per month for the starship Enterprise plan.
Thanks for your response @mdubakov - good to read that it's being used by such big organizations already and that it can work with 1-2K+ users!
@Matt_Blais - I remember you telling me about this earlier, but I am really curious how you ended up with so many automations. I hope you can work something out! @Chr1sG always has good ideas on that!