graphene: Error handling
We have decided to approach error handling as suggested in many blog posts: defining errors as fields rather than raising exceptions.
The catch is that we need to intercept every exception we want to handle and keep track of them somewhere so we can report them back to the client.
So we came up with:
```python
import graphene


class BaseObjectType(graphene.ObjectType):
    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(
        cls,
        interfaces=(),
        possible_types=(),
        default_resolver=None,
        _meta=None,
        **options
    ):
        super().__init_subclass_with_meta__(
            interfaces, possible_types, default_resolver, _meta, **options)
        for f in cls._meta.fields:
            if f in ['errors']:
                continue
            resolver_name = "resolve_{}".format(f)
            if hasattr(cls, resolver_name):
                # catch_errors (defined elsewhere) wraps the resolver and
                # records exceptions in info.context.errors
                setattr(cls, resolver_name, catch_errors(getattr(cls, resolver_name)))


class BaseResponse(BaseObjectType):
    class Meta:
        abstract = True

    errors = graphene.String()

    @staticmethod
    def resolve_errors(root, info, **kwargs):
        operation_name = info.path[0]
        error_key = f"{operation_name}_errors"
        if not root.errors and error_key in info.context.errors:
            root.errors = ",".join(info.context.errors[error_key])
        return root.errors
```
`catch_errors` just populates `info.context.errors` for each operation.
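The body of `catch_errors` isn't included in the issue; a minimal sketch consistent with the code above (the `<operation>_errors` key and the `info.context.errors` dict come from that code, everything else is an assumption) could be:

```python
import functools


def catch_errors(resolver):
    # Hedged sketch: wrap a resolver so any exception is recorded under
    # "<operation>_errors" in info.context.errors instead of propagating.
    @functools.wraps(resolver)
    def wrapper(root, info, **kwargs):
        try:
            return resolver(root, info, **kwargs)
        except Exception as exc:
            operation_name = info.path[0]  # same lookup resolve_errors uses above
            if not hasattr(info.context, "errors"):
                info.context.errors = {}
            info.context.errors.setdefault(f"{operation_name}_errors", []).append(str(exc))
            return None
    return wrapper
```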
While implementing this approach for error handling I also found an issue when resolving the errors field. I have a base class that extends ObjectType by iterating over the resolvers and wrapping them to catch any errors and set an errors dict on info.context.
The problem is that when resolving a type field that throws an exception, I do catch it and record the error, but the errors field has already been processed/resolved. Is there a way to force a particular field to be processed at the end, or to force it to update? I need to make sure every resolver runs before the errors field is resolved, so that all errors are caught.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 4
- Comments: 31 (7 by maintainers)
Bump on this issue.
Error handling is a big deal when implementing any API and graphene’s middleware doesn’t offer any obvious way to do this.
A very common use case is catching all exceptions and stripping any internal error details. This is crucial for security purposes: it’s not uncommon for stringified exceptions to contain PII and other sensitive information.
The only way to do this in graphene is to decorate every resolver with error-handling functionality or to use the metaclass approach described above.
Ariadne, by contrast, treats this as a first class use case:
https://ariadnegraphql.org/docs/error-messaging
Is proper error handling somewhere on the roadmap?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I gave up on graphql python, error handling is a basic feature
no
I know error handling is a bit complicated in graphql in general right now. For the purposes of this discussion, I think it’s useful to differentiate between expected error conditions and unexpected error conditions.
While it might make sense to swallow and format expected error conditions, not differentiating between something like an explicitly raised `GraphQLError` and an unexpected `Exception`/`ValueError`/`KeyError`/etc. is unfavorable for a few reasons:
- It becomes hard to tell explicitly raised `GraphQLError`s apart from `GraphQLError`s that were coerced from a different underlying exception.
- `pytest --pdb` is a fantastic tool for understanding and debugging issues. When unexpected exceptions are swallowed, `pdb` becomes significantly less useful. While you still have a reference to the `original_error` buried in the execution result's error list, there's a lot of friction in actually getting at any useful data from it: at a minimum you have to dig it out of that list by hand, and after the 50th time typing variations of that line it gets a little old. Even with this boilerplate, you don't get a nice `pdb` shell with frame navigation, variable inspection, etc.
One common approach for error handling in GraphQL is to make expected errors part of the schema design. Even doing that, though, there's a handful of "expected" errors which are (correctly) raised by `graphene`/`graphql-core` and don't neatly fall into that bucket.
While not as verbose, a very similar issue was already raised on `graphql-core`. The proposed resolution was optionally raising exceptions out of the library resolvers, but `original_error` was implemented instead. Given the constraint of `graphql-core` needing to closely align with `graphql-js`, I understand it might never make sense to make that change there. Instead, I do think it might be worth making that change here.
One way `graphql-core` deviates from `graphql-js` is that it allows for a custom execution context. From what I can tell, the default execution context is what's responsible for swallowing exceptions. `graphene`'s `Schema.execute` calls `graphql_sync`, which accepts the optional `execution_context_class` kwarg. `graphene` could extend the default execution context to remove the exception-swallowing code. This new execution context could either be used as the default `execution_context_class` (with the option of manually specifying the `graphql-core` error-swallowing context as a kwarg), or it could even just be provided as an optional class which can easily be added to `Schema.execute` calls and referenced in docs.
With exceptions not being swallowed at the lower level, it becomes much more obvious how to catch and handle them at the higher levels (which expose views), etc. For example, stripping internal errors can then be done the same way it's done for other views served by whatever web framework exposes the GraphQL API.
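As a rough sketch of the idea (not graphene's actual API; the hook names on `ExecutionContext` differ between graphql-core versions, so treat the `handle_field_error` override as an assumption):

```python
import graphene
from graphql.execution import ExecutionContext


class RaisingExecutionContext(ExecutionContext):
    # Assumed hook: in recent graphql-core versions, handle_field_error is
    # where a resolver error gets appended to the result's errors list.
    # Re-raising the original exception surfaces it to the caller instead.
    def handle_field_error(self, error, return_type):
        raise error.original_error or error


class Query(graphene.ObjectType):
    boom = graphene.String()

    def resolve_boom(root, info):
        raise ValueError("this is no longer swallowed")


schema = graphene.Schema(query=Query)

# The ValueError now reaches the caller instead of being collected into
# result.errors, so the web framework's normal error handling applies.
schema.execute("{ boom }", execution_context_class=RaisingExecutionContext)
```

A class along these lines (`UnforgivingExecutionContext`, mentioned further down this thread) did eventually land in graphene itself.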
The way all these libraries interact is a bit complicated so it’s hard to tell if I’m on the right track here. If this does seem like the right approach I’d be happy to file a PR with this change.
@jkimbo What do you think?
Indeed, coming from `graphql-js` it’s hard to believe there is no proper solution for customizing error handling.
This is possible! You need to wrap the resolver like so:
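The original snippet isn't reproduced in this thread; a minimal sketch of a base class along the lines described (the `CustomMutation` name comes from the comment below, the rest is an assumption) might look like:

```python
import graphene


class CustomMutation(graphene.Mutation):
    # Hedged sketch: wrap the subclass's mutate() before graphene registers
    # it as the mutation's resolver, so any exception becomes an errors
    # payload instead of bubbling up to the client.
    class Meta:
        abstract = True

    errors = graphene.String()

    @classmethod
    def __init_subclass_with_meta__(cls, **options):
        original_mutate = getattr(cls, "mutate", None)
        if original_mutate is not None:
            def mutate(root, info, **kwargs):
                try:
                    return original_mutate(root, info, **kwargs)
                except Exception as exc:  # swap in your own exception class here
                    return cls(errors=str(exc))
            cls.mutate = mutate
        super().__init_subclass_with_meta__(**options)
```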
Then you use `CustomMutation` as the class that all your mutations subclass. It wraps the resolver and catches any exceptions. In my own implementation I look for a custom exception class, but that’s up to you to decide.
It is actually possible to catch and handle resolver errors without needing to decorate or use a custom subclass for each and every query / mutation. You do still need a custom metaclass similar to the other suggestions, but it only needs to be used when you build your schema. E.g.
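The referenced snippet isn't included here; as a sketch of the idea, using a shared base ObjectType via `__init_subclass_with_meta__` (like the other suggestions in this thread) rather than a literal metaclass, with all names being assumptions:

```python
import graphene
from graphql import GraphQLError


class ErrorHandlingType(graphene.ObjectType):
    # Hedged sketch: wrap every resolve_* defined on the root type so
    # unexpected exceptions are replaced with a generic GraphQLError
    # instead of leaking internal details.
    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(cls, **options):
        for name in list(vars(cls)):
            if name.startswith("resolve_"):
                setattr(cls, name, cls._wrap(getattr(cls, name)))
        super().__init_subclass_with_meta__(**options)

    @staticmethod
    def _wrap(resolver):
        def wrapped(root, info, **kwargs):
            try:
                return resolver(root, info, **kwargs)
            except GraphQLError:
                raise  # already safe to show to the client
            except Exception:
                raise GraphQLError("Internal server error")
        return wrapped
```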
Then use this class to build your query and mutation objects. E.g.
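Continuing the sketch above (field names are made up):

```python
class Query(ErrorHandlingType):
    greeting = graphene.String()

    def resolve_greeting(root, info):
        # the client only ever sees "Internal server error"
        raise ValueError("contains internal details")


schema = graphene.Schema(query=Query)
```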
This clearly isn’t an ideal solution, but it does obviate the need to replace every ObjectType and Mutation individually.
Sorry for the delay. #1255 has been released in v3.0.0b6
Still an issue
@clin88 I agree that error handling is not handled (pun not intended!) well at the moment in graphene. There are no plans as far as I’m aware to improve it though because it feels like something that could devolve into configuration soup trying to cover everyone’s particular use case.
I am very open to hearing suggestions on how it could work though. Do you have any ideas?
@dspacejs You should be able to set up your Python logging config to silence the errors in testing and development if you wish.
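For example (the logger name is a guess: graphql-core logs these under the `graphql` namespace in the versions this thread is about, so check your console output for the exact logger):

```python
import logging

# Assumed logger name; adjust to whatever logger actually appears in your output.
logging.getLogger("graphql").setLevel(logging.CRITICAL)
```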
Does anyone have a workaround for this in the meantime? All exceptions raised are printed to the console during testing/development.
Some packages such as `django-graphql-jwt` raise exceptions for things like user authentication. Those kinds of exceptions are just for the API user to see, so it’d be nice to be able to raise exceptions without them polluting the console output.
@BossGrand yes, it works for mutations, and I solved it the same way, but it doesn’t work for queries. That’s the main problem: having to check errors differently for mutations and queries is not a good user experience.
So to keep them consistent we need an errors field for queries as well; if the root response cannot be extended, then we subclass ObjectType, add an errors field, and use the custom ObjectType as the base type for every query result.
Queries may fail, but the resolvers of each field in the query response may also fail, so you need a way of catching any of those errors and somehow populating the errors field with them. The problem is that you don’t control when fields are resolved, so you cannot force the errors field to be resolved at the very end to make sure all the other resolvers have executed and their errors have been caught.
It’s a bit confusing, but hopefully you get the idea.
I’ve been looking at this issue for my current predicament. Instead of dropping the catch-all Exception handler the way UnforgivingExecutionContext does, would it make sense to allow passing in a custom exception handler, so a developer could at least intercept the exception for logging purposes, or just set a breakpoint in their own code?
I posted this in another thread but you might be interested as well: it’s a small validation library on top of Graphene that follows Django Rest Framework’s structure (it does not require Django, of course 😃): https://github.com/chpmrc/graphene-validator. It’s probably not ready for production use, but I think it only needs some polishing. It does exactly what some of you have been trying to do (field-level validation, even with nested inputs).
Let me know what you think!
So, just for the record, @bclermont I went with the decorator approach, catching every possible error on both queries and mutations (using a base class for both that wraps every resolver). In the error-handler decorator I just catch, format and re-raise errors, and finally in the global error handler I decide what data to output in the `errors` payload, depending on the type. I gave up on the errors-field approach as it’s not possible to use it for queries.
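A rough sketch of that decorator pattern (all names are made up; the key point is catching, normalising and re-raising so a single global error formatter decides what reaches the client):

```python
import functools


class ClientVisibleError(Exception):
    """Hypothetical error type whose message is safe to expose to clients."""


def handle_resolver_errors(resolver):
    # Hedged sketch: catch, format and re-raise; the global error handler
    # then decides what data ends up in the response's errors payload.
    @functools.wraps(resolver)
    def wrapper(root, info, **kwargs):
        try:
            return resolver(root, info, **kwargs)
        except ClientVisibleError:
            raise  # already formatted for the client
        except Exception as exc:
            raise ClientVisibleError("Internal error") from exc
    return wrapper
```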