graphql-dotnet: wrong depth calculation with fragments
This issue participates in bounty program, see https://github.com/graphql-dotnet/graphql-dotnet/blob/master/BOUNTY.md
bounty amount: $100
Tasks to be done:
- Complexity analyzer should be fully documented (xml comments).
- Add a documentation page about the complexity analyzer that explains how the calculation is performed, including setup instructions and examples of common configurations.
- Update sitemap.yaml and include the page in the solution tree.
[ORIGINAL POST BELOW]
I think the depth calculation has a bug: it adds every used fragment to the total sum.
I have a query with a depth of only 4 levels, but I get: "Query is too nested to execute. Depth is 12 levels, maximum allowed on this endpoint is 10."
Fragment 'X' is used 5 times with a depth of 1; fragment 'Y' is used twice with a depth of 2.
Example:
```graphql
query myQuery($id: Int) {
  data(id: $id) {
    a1
    a2 {
      b1 {
        c1
      }
    }
    a3 {
      ...Y
    }
    a4 {
      ...Y
    }
  }
  some1: someList(byType: 1) {
    ...X
  }
  some2: someList(byType: 2) {
    ...X
  }
  some3: someList(byType: 3) {
    ...X
  }
  some4: someList(byType: 4) {
    ...X
  }
  some5: someList(byType: 5) {
    ...X
  }
}

fragment X on FirstGraphType {
  a1
}

fragment Y on SecondGraphType {
  b1 {
    c1
    c2
  }
}
```
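The mismatch between the two notions of "depth" can be sketched with a toy walker. This is a hedged illustration in Python, not graphql-dotnet's actual implementation; the tree below mirrors only the `data` branch of the query above, with fragment Y inlined at each use:

```python
# Toy sketch (hedged illustration, not graphql-dotnet's actual code) contrasting
# two "depth" metrics over a selection tree: the maximum nesting a reader
# expects, vs. a counter bumped on every field visit, which matches the
# behavior described in this issue.

def max_depth(fields):
    """Deepest nesting level in the selection tree."""
    if not fields:
        return 0
    return 1 + max(max_depth(children) for children in fields.values())

def total_visits(fields):
    """One count per visited field, with fragments inlined at each use."""
    return sum(1 + total_visits(children) for children in fields.values())

# Mirrors only the `data` branch of the query above; fragment Y (depth 2)
# is inlined under both a3 and a4, so its fields are counted twice.
query = {
    "data": {
        "a1": {},
        "a2": {"b1": {"c1": {}}},
        "a3": {"b1": {"c1": {}, "c2": {}}},  # ...Y inlined
        "a4": {"b1": {"c1": {}, "c2": {}}},  # ...Y inlined
    }
}

print(max_depth(query))     # 4  - the apparent depth
print(total_visits(query))  # 13 - every field visit counted
```

Adding the five `someN` selections (each inlining fragment X) pushes the visit counter even higher while the apparent nesting stays at 4, which is consistent with the reported error.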
About this issue
- Original URL
- State: open
- Created 7 years ago
- Reactions: 1
- Comments: 17 (12 by maintainers)
Nope, not the deepest; it's the sum of all depths. Each field (as-is or from a fragment) counts: `TotalQueryDepth` is incremented by 1 every time the `ComplexityVisitor` visits a field. It's just a naming problem. I agree that "total nodes" sounds clearer. Most likely the original author considered the number of nodes in the context of the depth. We can make the naming even simpler:
#298 / #1254
My intuition tells me that this task should be postponed, because ComplexityAnalyzer may change a lot. Even if we rename the properties now, it will not change anything fundamentally, and the comments can be added after the v4 release. I am not familiar enough with ComplexityAnalyzer to write the comments and documentation with 100% confidence. I do not want to delay the v4 release because of this, so I removed this task from the 4.0 milestone.
I'm having the same (or a similar) issue: the calculated depth increases as I add more fields, even though there are no recursive relationships and the apparent depth remains constant.
Did this ever get resolved? Or am I just misunderstanding what “depth” means in this context?
The following returns a depth of 38. (`count` and `year` are integers.)

Yes. This is precisely the sum of all the depths in terms of the AST, not the maximum depth of the most nested branch of the AST. I rewrote the complexity analyzer in #2863. Honestly, I forgot about this issue. Regarding my https://github.com/graphql-dotnet/graphql-dotnet/issues/344#issuecomment-797060698 - now I understand all this well enough to make proper changes in the comments and/or rename the properties. The word "Total" may be confusing. I mean that both the complexity and depth terms work as total numbers (see the `three_depth_query_wide` test), but only one (`TotalQueryDepth`) has "Total" in its name. I think I would remove the word "Total" from the `TotalQueryDepth` property entirely (and I feel the word "Query" could be removed as well). So I suggest this:
At the same time, the "Total" meaning should be described in the xml comments for both the `Complexity` and `Depth` properties, since they both accumulate total metrics. Also ping @Shane32 for comments.

I'm in the same boat.
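The point that both metrics "work as total numbers" (as in the wide-query test mentioned above) can be shown with a small sketch. This assumes the per-visit counting semantics described in this thread, not the library's actual code:

```python
# Hedged sketch (assumed semantics, not the library's code): if "depth" is a
# counter incremented on every field visit, then adding sibling fields at a
# fixed nesting level still inflates it, as reported in this thread.

def total_visits(fields):
    # one count per field, plus all of its children
    return sum(1 + total_visits(children) for children in fields.values())

narrow = {"a": {"b": {"c": {}}}}                      # nesting 3, 3 fields
wide = {f"f{i}": {"b": {"c": {}}} for i in range(5)}  # nesting 3, 15 fields

print(total_visits(narrow))  # 3
print(total_visits(wide))    # 15
```

Both trees have the same maximum nesting, yet the "total" figure grows fivefold, which is why naming it plain "depth" is misleading.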
The xml comments are extremely basic; indeed, I wrote them myself, just so there was more than nothing. But I have not used the complexity analyzer, and I don't fully understand what the properties mean.
I suggest we add significantly more text to the xml comment, and then assess if the properties need to be renamed and what would be proper.
At first glance, I would think the names in the configuration should match the names in the response, so, for example only, `MaxTotalDepth` and `TotalDepth`. Or `MaxDepth` and `Depth`. Or `MaxTotalQueryDepth` and `TotalQueryDepth`. The xml comment should make it very clear how these numbers are calculated.