This may be solved already, but I’m watching Claude waste compute guessing its way toward which field names will return the desired info, and that reminded me that fibery.schema/query returns the whole workspace.
Can we allow query parameters so we only get a specific space or database (type)?
Plus, a parameter to only return the spaces and databases the user/AI agent has some form of access to would be nice. I think it’s related to this request, but I can split it into a separate one if you think it’s different.
Right now, if you just need one space or one type, fibery.schema/query makes you pull the whole workspace schema and then filter it client-side. That is fine once, but it gets wasteful fast in MCP/codegen flows where the model keeps re-reading the schema just to resolve a couple of field names.
Even a narrow filter like space, type, or a small include/exclude list would be enough. It would cut token usage, reduce round-trips, and make schema diffing a lot cleaner.
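To make the suggestion concrete, here is a minimal sketch of what such a filtered call could look like. Only the command name `fibery.schema/query` is real; the `space` and `types` args are hypothetical and do not exist in the current API:

```python
# Hypothetical request body for a filtered schema query.
# The "args" keys ("space", "types") are an illustration only --
# today's fibery.schema/query accepts no such parameters.
def build_schema_query(space=None, types=None):
    args = {}
    if space:
        args["space"] = space    # e.g. "Compliance"
    if types:
        args["types"] = types    # e.g. ["Compliance/Policy"]
    return [{"command": "fibery.schema/query", "args": args}]

payload = build_schema_query(space="Compliance")
```

The point is just that a couple of optional args on the existing command would be enough; no new endpoint required.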
The schema is public, so this is not really possible - there’s a difference between which entities the AI (or any user) is permitted to see, and knowing what fields each of the databases has or what space they live in.
If you have a lot of spaces but only give an AI editor access to one of them, it wastes a lot of tokens. So the schema could still stay public, but allow a parameter for “only the ones I have access to” or something similar.
If you are suggesting that the getSchema query accept a parameter identifying a specific user, with the aim of limiting the response, that seems to imply that behind the scenes Fibery would have to dynamically query every database in every space to determine whether the user has access to one or more entities in it. That would mean a massive set of subqueries, so although it would shrink the schema info returned to the AI, it would put a serious load on Fibery.
I think @helloitse’s suggestion of parameterising the query to only return specific spaces/dbs is plausible, but a per-user schema response is very unlikely.
As it happens, I suspect that the cost of AI tokens will go down, and context windows will go up, such that returning the whole schema becomes not such a big deal rather soon. Plus, the models will become ‘smarter’ at not re-requesting it.
I think it makes little sense for Fibery devs to work on optimisations when AI improvements will likely have a greater effect, sooner.
AI is not the best use case for this request, though it’s a good and popular one. The request exists because I’ve spent years lightly irritated at having to sort through the large payload of a whole workspace when I only need the configuration of one database or space. Most of us have built our own parsing process for our dev environments; AI hasn’t.
It’s preferable not to have to cache the workspace config, or to download the whole thing each time just to parse it afterwards. That’s the request.
AI compute/costs were just a reminder of this longstanding issue.
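For now, the workaround is exactly the client-side filtering described above: pull the whole schema once and keep only the types you need. A rough sketch, assuming the response uses Fibery’s namespaced keys (`fibery/types`, `fibery/name`); verify the key names against your own workspace’s response:

```python
def filter_schema(schema, space=None, type_names=None):
    """Keep only the types for one space (or an explicit list of
    'Space/Type' names) from a full fibery.schema/query result.
    Key names follow Fibery's namespaced convention, but treat
    them as an assumption and check your actual payload."""
    kept = []
    for t in schema.get("fibery/types", []):
        name = t.get("fibery/name", "")
        if type_names is not None and name in type_names:
            kept.append(t)
        elif space is not None and name.startswith(space + "/"):
            kept.append(t)
    return {"fibery/types": kept}
```

This is the helper everyone ends up writing themselves; the feature request is essentially to move it server-side.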
This would be really useful for our compliance tracking workflows in Fibery. We use the API heavily to pull schema definitions dynamically as part of an automation pipeline, and right now having to make two calls (one to get schema, one to apply parameters) adds latency and complexity.
If parameters were supported directly in the Get Schema command, we could simplify the pipeline significantly - especially for multi-workspace setups where the schema varies by workspace configuration.
One specific use case: we have a webhook listener that reads the Fibery schema before deciding how to route incoming data. Currently that requires a static schema cache that gets stale. With parameterized schema queries we could do it live without the overhead.
Would love to see this on the roadmap, even if it is a minor API version bump.
@Chr1sG I agree with your angle about what the fibery.schema/query is.
But @helloitse’s and @RonMakesSystems’s request is not invalid. The command containers already exist for Type (Database) and Field queries, but from what I can see they only cover the C, U, and D of CRUD at the type and field level, leaving Read at the entire-schema level, as shown below.
So maybe fibery.schema/query does not need to change, given its pivotal place in the ecosystem, but granular read operations could both increase efficiency and reduce context overload/cost in the case of AI non-MCP* API use.
Create:
schema.type/create
schema.field/create
Read:
fibery.schema/query
maybe add more granular queries to return one or more sub-schemas just as fibery.entity/query returns one or more entities:
schema.type/query
schema.field/query
Update:
schema.type/rename
schema.field/rename
schema.field/set-meta
Delete:
schema.type/delete
schema.field/delete
* The MCP schema and schema_detailed tools already do this to avoid context overload: the first returns just the spaces and DBs, the second the Fields and their Field Types, but nothing returns the full schema of a Field or DB, as far as I know.
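Such a granular read might mirror the shape of fibery.entity/query (whose `q/from`/`q/select` keys are real). Everything else below is speculative: neither `schema.type/query` nor its args exist today.

```python
# Speculative payload: a schema.type/query command that selects one
# database's sub-schema, shaped like a fibery.entity/query call.
# The command name and args are hypothetical, not part of the current API.
def build_type_query(type_name, fields=None):
    q = {"q/from": type_name}
    if fields:
        q["q/select"] = fields   # e.g. ["fibery/name", "fibery/fields"]
    return [{"command": "schema.type/query", "args": {"query": q}}]
```

Reusing the entity-query grammar would keep the mental model consistent for anyone (human or AI) already writing entity queries.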
explicitly limit which spaces/dbs to return schema info about
limit schema info based on the access levels of the querying user
In case it wasn’t clear, I think the former is a good idea that we could (and ideally will) implement, but the latter would imply significant changes to Fibery’s technical foundation and is therefore highly unlikely to happen.
As for how we might implement it, maybe @Sev’s suggestion makes sense, but that’s something for our devs to look into.
First of all, we are currently working on moving the API docs to a separate resource (Readme, Mintlify, or something like that). We will try to improve the content as well.
We are also working on introducing a better API for working with the schema (database creation, field creation, etc.). The end goal is to make the schema API easy to use for both people and AI.
We will definitely take your requests here into account. Our own AI agent and MCP server actually use some internal libs to retrieve only the needed parts of the schema in a more token-efficient way. We will think about how to expose such capabilities publicly.
Thanks @Chr1sG for the clarification, I agree with you on both counts.
Thanks @Sergey_Truhtanov, and good luck! We look forward to it. For now, I will use the MCP endpoints in some cases instead; after all, they’re just API endpoints anyway.