Hi, is it possible to create a new space/app programmatically?
No, but creating new databases and fields is possible.
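For reference, a minimal sketch of what database-and-field creation through the API could look like. The endpoint style follows Fibery's commands API, but the exact command names here (`fibery.schema/batch`, `schema.type/create`, `schema.field/create`) and field type names are assumptions and should be checked against the official API docs before use:

```python
import json

def build_create_database_command(space, db_name, fields):
    """Build a schema-batch payload that creates one database with the
    given fields. Command names are assumptions -- verify them against
    the Fibery API documentation."""
    return {
        "command": "fibery.schema/batch",  # assumed command name
        "args": {
            "commands": [
                # create the database (type) itself
                {
                    "command": "schema.type/create",  # assumed
                    "args": {"name": f"{space}/{db_name}"},
                },
                # then one command per field
                *[
                    {
                        "command": "schema.field/create",  # assumed
                        "args": {
                            "holder-type": f"{space}/{db_name}",
                            "name": f"{space}/{field}",
                            "type": ftype,
                        },
                    }
                    for field, ftype in fields.items()
                ],
            ]
        },
    }

payload = build_create_database_command(
    "Clients", "Projects",
    {"Name": "fibery/text", "Budget": "fibery/decimal"},  # assumed type names
)
print(json.dumps(payload, indent=2))
```

A real client would POST this payload (as JSON) to the workspace's `/api/commands` endpoint with a token in the `Authorization` header.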
Thank you! That’s helpful.
Where/how can I submit this as a feature request?
Well, I think you can just convert this topic into the ‘Ideas & features’ category, and it will become ‘votable’.
Having said that, it would be really useful if you could describe the use case you have in mind, so that if/when the feature gets delivered, it provides you with something that truly meets your needs.
I have a dataset that lives outside of Fibery. I want my co-workers to be able to export that data, or a subset of it, into an ad-hoc temporary Fibery space, via a tool that performs the export by creating and populating the databases, and also creates some predefined views for working with the data. I don’t want the users of the tool to have to manually create a new space every time; I’d like the tool to do that for them automatically as part of the export process, and to write out the link to the space as its final step.
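To make the desired flow concrete, here is an illustrative sketch. The space-creation step is exactly the missing piece (there is no public API for it today), so the client here is an in-memory stand-in, not a real Fibery client; all method names are hypothetical:

```python
class FakeClient:
    """In-memory stand-in for a Fibery API client, used only to
    illustrate the flow; a real client would issue HTTP calls."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.log = []  # record of operations, in order

    def create_space(self, name):  # hypothetical: no public API today
        self.log.append(("space", name))
        return name

    def create_database(self, space, db, fields):
        self.log.append(("db", space, db))

    def create_entity(self, space, db, values):
        self.log.append(("entity", space, db))

    def create_view(self, space, view):
        self.log.append(("view", space, view))

    def space_url(self, space):
        return f"{self.base_url}/{space}"


def export_batch(client, batch_name, schema, records, views):
    """Create an ad-hoc space, recreate the schema from scratch,
    populate it, add predefined views, and return the space link."""
    space = client.create_space(batch_name)
    for db_name, fields in schema.items():
        client.create_database(space, db_name, fields)
    for rec in records:
        client.create_entity(space, rec["db"], rec["values"])
    for view in views:
        client.create_view(space, view)
    return client.space_url(space)  # final step: write out the link


client = FakeClient("https://example.fibery.io")
url = export_batch(
    client,
    "Acme-2024-06",
    {"Tasks": {"Name": "text"}},
    [{"db": "Tasks", "values": {"Name": "Kickoff"}}],
    ["Tasks board"],
)
print(url)  # -> https://example.fibery.io/Acme-2024-06
```

Because the script recreates the schema on every run, upstream schema changes only require changing the `schema` argument, with no downstream migration.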
Does that make sense?
Actually, I don’t see a way to do this. Where can I find it?
Maybe you don’t have permission - I’ve done it for you.
What is the reason for wanting to programmatically create the space/databases/views as opposed to just manually creating them once? Is the data structure different with each import batch?
Could you allow your co-workers to import into a single Fibery space/database, but add a field that allows users to uniquely identify what data was imported in which batch? Then you could create a smart folder based on this identifier, so that you could review/analyse/edit data on a batch-by-batch basis.
Finally, why do you consider the created space to be ‘ad-hoc temporary’? Do you plan on doing stuff in Fibery and then deleting the data immediately afterwards?
Because I want them to be short-lived, for temporary usage.
It will be, sometimes; the “source of truth” is an external/upstream dataset and its schema will evolve over time. The export script will therefore need to evolve along with it. And it’ll be easier to have the script recreate the schema from scratch every time, rather than try to propagate the changes downstream to one or more Fibery spaces.
I suppose I could, but this wouldn’t help with schema changes. And anyway I prefer the different “batches” to be in discrete spaces, because we’ll be using some of these views to present some information to clients, via screen-sharing, and we don’t want any clients to accidentally see any other client’s names, and those would be included in the names of the smart folders.
Yes, that’s what I’d like to do. As I said, the source of truth — the place where we’ll be creating and maintaining the data over time — will be external and upstream. And rather than try to sync changes to that data downstream to Fibery, I think it’ll be easier and faster, at least in terms of implementing the data pipeline, to just wipe out the downstream data and re-publish it.
It might help to frame it this way: for this particular use case, I want to use Fibery as a presentation layer. I want to find the fastest way to experiment with that. Implementing export without sync seems to me to be the fastest way to get started. But if you have other ideas I’m all ears!
OK got it.
Makes sense, but might they not also see the other named spaces?
That’s interesting to hear. It’s not often that users want Fibery to be the presentation layer for external data.
With this in mind, it’ll be interesting to hear what views you think/discover are missing (but that’s a story for another day I guess!)
Until we can support space creation via API, I’m not sure what to recommend I’m afraid, but I’ll mull it over and get back to you if I have any bright ideas.
Not if they’re ephemeral!
And/or maybe we could use the hide space feature to keep most of the spaces hidden most of the time.
I guess you mean, what views I think are missing, such that I can’t use Fibery as the “source of truth” for the data.
If so, it’s not really so much about missing views, as it is about my org having a very specific and somewhat esoteric philosophy about data. We’re led by software developers, and we deeply value having a managed change process for our code (branches and pull requests) that reifies changes and supports discussion and go/no-go decisions for them. We value this so deeply that we’ve come to believe we would benefit greatly from a similar managed change process for our data. So we’re experimenting with maintaining our data in text files in GitHub repositories.
I understand. Thank you!