hydra: Duplicate key value violates unique constraint

Describe the bug

For our project we are using Hydra with a PostgreSQL database. Sometimes, when requesting an access token via the /oauth2/token endpoint, Hydra returns a 500 Internal Server Error and logs the following error message:

level=error msg="An error occurred" debug="pq: duplicate key value violates unique constraint \"hydra_oauth2_access_request_id_idx\": Unable to insert or update resource because a resource with that value exists already" description="The authorization server encountered an unexpected condition that prevented it from fulfilling the request" error=server_error

Hydra apparently tries to insert an access token request into the database with a request id that already exists. What we observed is that every time this happened, multiple (2-3) token requests had been received within a very short time period (<100ms).

Did you experience this at some point, or do you have any hint as to what the root cause could be? We are using an older version (v1.0.0), but I couldn’t find anything regarding this in the changelog.

Reproducing the bug

We could not reproduce the bug, not even when sending ~500 token requests (almost) simultaneously.
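For reference, this is roughly the kind of burst we generated. The sketch below is illustrative only, assuming a confidential client using the client_credentials grant; the endpoint URL and client credentials are placeholders, not values from our actual setup.

```go
// Sketch of the reproduction attempt: fire ~500 token requests at the
// /oauth2/token endpoint (almost) simultaneously and watch for 500s.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"sync"
)

func main() {
	const n = 500
	var wg sync.WaitGroup
	client := &http.Client{}

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()

			form := url.Values{"grant_type": {"client_credentials"}}
			req, err := http.NewRequest(
				"POST",
				"http://localhost:4444/oauth2/token", // placeholder public endpoint
				strings.NewReader(form.Encode()),
			)
			if err != nil {
				fmt.Println("build request:", err)
				return
			}
			req.SetBasicAuth("my-client", "my-secret") // placeholder credentials
			req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

			resp, err := client.Do(req)
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusInternalServerError {
				fmt.Println("got 500 - possible duplicate key error")
			}
		}()
	}
	wg.Wait()
}
```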

Environment

  • Version: v1.0.0
  • Environment: Docker, PostgreSQL DB

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 22 (17 by maintainers)

Most upvoted comments

That’s an interesting approach, although fosite currently has no notion of what an isolation level is. Maybe it should? 😀

Yeah, I think that’s OK. Honestly, there is already so much implicit knowledge in the storage implementation that I gave up on imposing it in fosite.

I think setting the appropriate isolation level in BeginTx in Hydra would be a good solution, plus having a failing test that verifies the implementation.
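As a rough sketch of what that could look like with Go’s database/sql (Hydra’s actual storage layer wraps transactions through its own helpers, so the function name here is hypothetical):

```go
package storage

import (
	"context"
	"database/sql"
)

// beginRepeatableRead is a hypothetical helper showing how a transaction
// can request Repeatable Read isolation via database/sql. Hydra's real
// transaction handling lives in its persistence layer and may differ.
func beginRepeatableRead(ctx context.Context, db *sql.DB) (*sql.Tx, error) {
	return db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelRepeatableRead,
	})
}
```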

Yeah, 409! 😃

I think Repeatable Read is an acceptable trade-off for refreshing tokens, since that operation doesn’t happen often for a specific row (which, if I understood correctly, will be locked, not the entire table).

I think the test case makes sense. One refresh token should be enough, though. You can probably fire n goroutines that try to refresh the token concurrently, and you should definitely see the error.
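A minimal sketch of such a test, assuming hypothetical helpers obtainRefreshToken and refresh that perform the initial flow and the refresh grant against the token endpoint (neither exists in the codebase as written here):

```go
package oauth2_test

import (
	"sync"
	"sync/atomic"
	"testing"
)

// TestConcurrentRefresh fires n goroutines that all try to redeem the same
// refresh token. Without proper isolation or locking in the storage layer,
// more than one of them can succeed, or the server answers with a 500
// instead of a clean conflict.
func TestConcurrentRefresh(t *testing.T) {
	const n = 10
	refreshToken := obtainRefreshToken(t) // hypothetical helper: runs the initial grant

	var wg sync.WaitGroup
	var successes int64

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// refresh is a hypothetical helper that POSTs to /oauth2/token with
			// grant_type=refresh_token and returns the HTTP status code.
			if refresh(t, refreshToken) == 200 {
				atomic.AddInt64(&successes, 1)
			}
		}()
	}
	wg.Wait()

	if successes != 1 {
		t.Fatalf("expected exactly one successful refresh, got %d", successes)
	}
}
```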

Regarding isolation: this is really only required for operations where a write depends on a previous read. I’m not sure whether setting RepeatableRead for every transaction is performant. But then again, the performance impact is probably minimal while the consistency is much better. Maybe MaybeBeginTx should always request RepeatableRead isolation.
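For context on how such conflicts would surface: under Repeatable Read, PostgreSQL aborts the later of two conflicting transactions with SQLSTATE 40001 (serialization failure). A sketch of detecting that, assuming lib/pq (which matches the "pq:" error prefix in the log above), so the storage layer could translate it into a 409 rather than a 500:

```go
package storage

import (
	"github.com/lib/pq"
)

// isSerializationFailure reports whether err is PostgreSQL's
// "serialization_failure" (SQLSTATE 40001), which Repeatable Read raises
// when two transactions update the same row concurrently. The caller could
// map this onto a 409 Conflict or retry the transaction.
func isSerializationFailure(err error) bool {
	if pqErr, ok := err.(*pq.Error); ok {
		return pqErr.Code == "40001"
	}
	return false
}
```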

I don’t think so. I’ve also looked into this a bit more. We can’t reliably echo the same response twice, because that would mean we have to cache the full valid access token somewhere (e.g. the database), which we have explicitly said we won’t do.

What you could obviously try is reducing the latency between Hydra and the DB - 130ms seems like a lot to me.

Yeah, that’s a docs issue. trace should work, but you can also use debug if you want. The error messages, if any occur, should be enhanced with stack traces.