graphql-ruby: Configure complexity on connection fields
We’re exploring rate limiting our API, similar to how it’s done at Shopify, using a complexity-based approach (sketched just after this list):
- Throttle the request if the requested cost is greater than the app’s currently available complexity
- Reduce the app’s available complexity for a resolved query based on the actual query cost
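A minimal sketch of that check, assuming a hypothetical per-app `available_complexity` counter kept in something like Redis (these names are illustrative, not Shopify’s actual implementation):

```ruby
# Hypothetical leaky-bucket-style check; `app.available_complexity` is an
# illustrative per-app counter, not a real API.
def throttle?(app, requested_cost)
  requested_cost > app.available_complexity
end

def settle!(app, actual_cost)
  # Once the query resolves, reduce the app's available complexity by the
  # actual query cost; a separate process would restore capacity over time.
  app.available_complexity -= actual_cost
end
```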
With every field having a default complexity of 1, the edges and node fields contribute to the cost calculated by GraphQL::Analysis::AST::QueryComplexity. For example, it gives me a total complexity of 15 for the following query, when I was only expecting 5 (5 client nodes, each with a single id scalar):
```graphql
{
  clients(first: 5) {
    edges {
      node {
        id
      }
    }
  }
}
```
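For reference, a sketch of how that total can be reproduced with the built-in analyzer; `MySchema` and `query_string` here stand in for the actual schema and the query above:

```ruby
query = GraphQL::Query.new(MySchema, query_string)

complexity = GraphQL::Analysis::AST.analyze_query(
  query,
  [GraphQL::Analysis::AST::QueryComplexity]
).first

complexity # => 15 in this setup, even though only 5 id scalars are selected
```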
Is there a way to set the complexity of the connection-related fields to 0, other than redefining them in our class that extends GraphQL::Types::Relay::BaseConnection?
As for calculating the actual query cost, I’ve done this by adding a field extension to all of our fields that accumulates the total complexity of every resolved field. This approach doesn’t include the complexity cost of edges or node, which is why I’m exploring setting them to 0. It also seems to be what Shopify does, judging by the values I get from their throttle cost extension when experimenting with their API.
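Roughly, that accumulating extension could look like the sketch below; `ActualCostExtension` and the `:actual_query_cost` context key are made-up names, and proc-based complexities are ignored for simplicity:

```ruby
# Sketch only: sums each resolved field's static complexity into the query context.
class ActualCostExtension < GraphQL::Schema::FieldExtension
  def resolve(object:, arguments:, context:)
    cost = field.complexity.is_a?(Numeric) ? field.complexity : 1
    context[:actual_query_cost] = (context[:actual_query_cost] || 0) + cost
    yield(object, arguments)
  end
end

# Attached through a base field class so every field picks it up:
class Types::BaseField < GraphQL::Schema::Field
  def initialize(*args, **kwargs, &block)
    super
    extension(ActualCostExtension)
  end
end
```

Since edges and node come from the library’s Relay base classes rather than from `Types::BaseField`, they presumably never pick up such an extension, which matches the gap described above.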
Any help would be very much appreciated!
About this issue
- State: closed
- Created 3 years ago
- Comments: 15 (7 by maintainers)
I took a try at the change described above, but found it was too hard to reliably implement.
`nodes` and `edges` might have different subselections, and it’s hard to efficiently implement a reliable check for whether their selections match exactly or not. For clients who are pushing complexity limits, they can always use `edges` for everything that `nodes` would have provided.

The change above will be in 1.13.0 👍!
@crpahl yep that’s basically it 👍
For complexity procs, you can do this:
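Roughly, a complexity proc receives the query context, the field’s arguments, and the accumulated complexity of the child selections; the `clients` field and the fallback page size of 20 below are illustrative:

```ruby
field :clients, Types::ClientType.connection_type, null: false,
  complexity: ->(_ctx, args, child_complexity) {
    # Multiply the children's cost by the requested page size,
    # plus 1 for the connection field itself.
    page_size = args[:first] || args[:last] || 20
    page_size * child_complexity + 1
  }
```

A plain number also works (e.g. `complexity: 0`), which is one way to zero out individual fields.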
I found myself needing to break queries down to understand where they’re growing too complex, and in the process found it very useful to separate the page-multiplied scopes from the non-multiplied ones, rather than trying to reverse that after the fact.
I’ve modified the QueryComplexity analyzer to track complexity for paged child keys (`nodes` and `edges` by default, though it’s configurable) separately from other keys; those complexities then get passed to `calculate_complexity`, which makes the actual complexity calculations quite a lot simpler.

The changes are here, for those who might find them useful: https://gist.github.com/cheald/8c9be84ad0dbc80fb41a194f83e23b33. It breaks the `complexity_for` and `field_complexity` interfaces, so it wouldn’t be super easy to pull back upstream, but it does work. There are probably more intelligent ways to do the paginated-fields split, but nodes/edges works for my purposes.
This gives me reliable calculations for tree costs and makes it very easy to reason about the complexity breakdowns.
`QueryComplexityWithFieldDetails#report` gives me a nice, readable output for the query. So, for example here, I can see that `accountGrants` has a cost of 1907, which breaks down to a page size of 100 times a paged complexity of 19, plus a non-paged complexity of 6, plus the `accountGrants` complexity of 1.
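As a quick sanity check on those numbers (the variable names below are just labels for the breakdown):

```ruby
page_size            = 100
paged_complexity     = 19  # selections under nodes/edges, multiplied by page size
non_paged_complexity = 6   # selections outside nodes/edges, counted once
field_complexity     = 1   # the accountGrants field itself

page_size * paged_complexity + non_paged_complexity + field_complexity
# => 1907
```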
(Oh, `nodes` and `edges` should probably count for 1 each, not 1 * page_size.)

I took a try at it in https://github.com/rmosolgo/graphql-ruby/pull/3609 – could you take a quick look and see if that looks like what you had in mind?
I may have to touch it up in order to merge it to 1.12.x – I don’t want to change the default behavior on that until 1.13.0.
I’m open to it, though personally I’m not that convinced by complexity analysis. In practice, I find it hard to guess exactly which fields will be resource-intensive, and under which conditions. And when implementations change, how do you keep the complexity configuration up to date? We have a server process timeout that ends up terminating long-running queries 🙈…
Anyhow, I’ll take a look soon and follow up here 👍