backstage: [Search] Be less restrictive with unknown keys on query endpoint
Feature Suggestion
Enable additional/custom params on the search query at GET /query by implementing a minor change in:
/plugins/search-backend/src/service/router.ts#L75-L82
Possible Implementation
It could work by adding an additional property to requestSchema so that people can pass custom/additional params to the search query object, for example:
const requestSchema = z.object({
  term: z.string().default(''),
  filters: jsonObjectSchema.optional(),
  types: z
    .array(z.string().refine(type => Object.keys(types).includes(type)))
    .optional(),
  pageCursor: z.string().optional(),
  customQueryParams: z.array(z.record(z.string(), z.any())).optional(),
});
on /plugins/search-backend/src/service/router.ts#L75-L82
and on /plugins/search-common/src/types.ts
We could define the interface as
interface SearchQuery {
  term: string;
  filters?: JsonObject;
  types?: string[];
  pageCursor?: string;
  customQueryParams?: Record<string, any>[];
}
Context
At Expedia Group, we recently upgraded our Backstage instance to a more recent version, in which the added requestSchema validator is causing some conflicts in our implementation. In our particular case, we make use of the ElasticSearch aggregations feature to return some statistics about the query results, apart from a few other queries. Moreover, as many people use ElasticSearch as their preferred search engine, adding an extra param to the requestSchema, such as customQueryParams: Record<string, any>[], would allow everyone to keep sending the extra custom queries that were previously passed through and did not go through any schema validation in earlier versions. Nowadays, with the current version, if we send any other necessary param, it is automatically deleted by the requestSchema.
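To make the stripping behavior concrete, here is a minimal sketch of zod's default behavior (field names mirror the schema above; the aggs key stands in for any custom param):
import { z } from 'zod';

// By default, z.object() strips unrecognized keys when parsing, which is
// why any extra query param silently disappears before reaching the engine.
const requestSchema = z.object({
  term: z.string().default(''),
  pageCursor: z.string().optional(),
});

const parsed = requestSchema.parse({
  term: 'kubernetes',
  aggs: { kinds: { terms: { field: 'kind' } } }, // custom param
});
// parsed is { term: 'kubernetes' }; the aggs key has been dropped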
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 15 (15 by maintainers)
Hey again @pedronastasi! Thanks for illustrating in more depth. What a great user experience!
This conversation happened to align with an architecture ritual we have internally, and we were able to discuss this in a little more depth. The main takeaway from the conversation is that, while we would like to support aggregations (and many other things) in the search query and result interfaces, we don’t feel we (as maintainers) have the capacity to help shape / drive that conversation just now. …We also don’t feel like it’s fair to push the process of developing and proposing a solution onto members of the community either. …We have a sense that making this generic across search engines will be challenging, and a variety of alternatives have pros and cons that need to be weighed.
…With that in mind: our primary concern right now is to unblock y’all!
So (and sorry for the whiplash), we think the best approach for now would be to make that zod check less restrictive. For context: that schema check was put in place at the same time as permission support was added to search, and as far as I know, it’s there primarily as a guard against unauthorized access to various document types.
I can’t think of any harm in allowing unknown keys to pass through (other than the risk that anyone passing such keys would be going outside the bounds of the defined interface, and that any custom functionality built on those keys would not be guaranteed to continue working in the future, e.g. if we added and started using such a key on the interface). (/cc @mtlewis, keep me honest here).
Would you be willing to take a crack at updating that schema validation to allow unknown keys to pass through, @pedronastasi?
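For reference, a minimal sketch of what "less restrictive" could look like, using zod's .passthrough() so unknown keys survive validation; this only illustrates the suggested direction, not the actual patch:
// Sketch only: same fields as the existing requestSchema in router.ts,
// with .passthrough() added so unknown keys are kept instead of stripped.
// jsonObjectSchema and types come from the surrounding router.ts context.
const requestSchema = z
  .object({
    term: z.string().default(''),
    filters: jsonObjectSchema.optional(),
    types: z
      .array(z.string().refine(type => Object.keys(types).includes(type)))
      .optional(),
    pageCursor: z.string().optional(),
  })
  .passthrough();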