vimgolf: ActionView::Template::Error (Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.)

Using heroku logs -t | grep -i error, we can see a lot of:

2021-01-29T21:33:28.922622+00:00 app[web.1]: ActionView::Template::Error (Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. (292)):
2021-01-29T21:33:29.011488+00:00 app[web.1]: I, [2021-01-29T21:33:29.011393 #9]  INFO -- : Completed 500 Internal Server Error in 819ms

Issue linked to #306

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 27 (18 by maintainers)

Most upvoted comments

Looks like M0/M2/M5 instances were upgraded to MongoDB 4.4 in January:

https://docs.atlas.mongodb.com/release-notes/atlas/#26-january-2021-release

They don’t mention tweaks to the limits, but it’s possible that this change made a difference here.

Allowing disk use for sorts hasn’t been enabled in a long time, but maybe older versions of MongoDB would always allow it, or would use whatever resources were needed to fulfill the query…
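
For reference, opting in is a per-operation flag. Here’s a minimal sketch with the Mongo Ruby driver (collection and pipeline are illustrative; the reverted commit may have wired this through Mongoid instead):

    # Minimal sketch: opting a pipeline into disk-backed ("external")
    # sorting with the Mongo Ruby driver. This is the opt-in the error
    # message refers to; Atlas shared tiers (M0/M2/M5) reject it.
    require 'mongo'

    client = Mongo::Client.new('mongodb://localhost:27017/vimgolf_development')

    pipeline = [{ '$sort' => { 'score' => 1 } }]

    # Without allow_disk_use, sorts over the ~32 MB in-memory limit
    # fail with error 292.
    client[:entries].aggregate(pipeline, allow_disk_use: true).each do |doc|
      puts doc
    end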

I managed to reproduce this locally on a test challenge with 37,251 entries (left over from some benchmarking a while ago) after reverting the allow_disk_use commit in my local repository. I see it fail on the query for best_player_score(). I looked at whether adding more indices could alleviate the issue, but since that query creates a new calculated field, min_score, and sorts on it, I’m not sure how to address that…
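
For context, here’s my guess at the shape of that query as an aggregation pipeline (the real best_player_score() may differ, and all names here are illustrative):

    # Hypothetical shape of the best_player_score() query: compute each
    # user's best (minimum) score for a challenge, then sort on that
    # computed field. Names are illustrative, not the app's actual schema.
    require 'mongo'

    client = Mongo::Client.new('mongodb://localhost:27017/vimgolf_development')

    pipeline = [
      { '$match' => { 'challenge_id' => 'abc123' } },
      { '$group' => { '_id' => '$user_id', 'min_score' => { '$min' => '$score' } } },
      # min_score exists only inside the pipeline, so no index can back
      # this sort; MongoDB must sort in memory (or spill to disk when
      # allowDiskUse is permitted).
      { '$sort' => { 'min_score' => 1 } }
    ]

    best = client[:entries].aggregate(pipeline).first

An index on entries can help the $match stage, but never the $sort on the computed field.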

Long term, I think #298 is the path forward. I’ve looked at it in the past and I’m happy to put some more time into wrapping that up. There is, of course, a migration involved, but I’d be happy to look into all of that. What do you say?

Fixed after migration to Postgres (#298).

It looks like having all “entries” at root level fixes the problem. See https://github.com/igrigorik/vimgolf/pull/315

There’s still a lot of work to do to confirm it, but it gives me hope.
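
If that pans out, I assume the win comes from letting an ordinary compound index back the sort. A rough sketch of what a root-level model could look like (hypothetical names, not the actual schema from PR #315):

    # Hypothetical sketch of entries as root-level documents (not the
    # exact schema from PR #315). With a compound index covering both
    # the filter and the sort key, sorts are index-backed and never hit
    # the 32 MB in-memory sort limit.
    class Entry
      include Mongoid::Document

      field :challenge_id, type: BSON::ObjectId
      field :user_id,      type: BSON::ObjectId
      field :score,        type: Integer

      index({ challenge_id: 1, score: 1 })
    end

    # Index-backed query, e.g.:
    #   Entry.where(challenge_id: id).order_by(score: :asc).limit(10)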

A short-term fix is to upgrade or move to a service that supports allow_disk_use. Besides Atlas, do you guys have any recommendations for hosted Mongo?

This might be a limitation of MongoDB Atlas Free Tier (M0) or shared clusters (M2, M5)… You might need a dedicated M10+ instance for that feature.

https://docs.atlas.mongodb.com/reference/free-shared-limitations/#operational-limitations

“Atlas Free Tier and shared clusters do not support the allowDiskUse option for the aggregation command or its helper method.”

This did seem to work fine. I did add the allowDiskUse at some point, but only because I was hitting the limit in some of the extended queries, which we didn’t end up using anyway since they would time out… So I’m not sure what changed that started triggering this often enough on any challenge with 500+ entries.

I was going through the MongoDB docs on how to see what the server is doing (“explain” a query) and on whether there’s a way to optimize the queries to overcome this issue, but unfortunately I haven’t had time to dig into it fully.
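
For anyone else poking at this, a sketch of getting query plans out of the Mongo Ruby driver (collection and field names are illustrative):

    # Hedged sketch: inspecting the server's query plan with the Mongo
    # Ruby driver. Names are illustrative, not the app's actual schema.
    require 'mongo'

    client  = Mongo::Client.new('mongodb://localhost:27017/vimgolf_development')
    entries = client[:entries]

    # Explain a plain find + sort: look for an in-memory SORT stage vs
    # an index-backed IXSCAN in the winning plan.
    plan = entries.find(challenge_id: 'abc123').sort(score: 1).explain
    puts plan['queryPlanner']['winningPlan']

    # Explain an aggregation by running the aggregate command with
    # explain: true.
    pipeline = [
      { '$group' => { '_id' => '$user_id', 'min_score' => { '$min' => '$score' } } },
      { '$sort'  => { 'min_score' => 1 } }
    ]
    puts client.database.command(aggregate: 'entries', pipeline: pipeline, explain: true).first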

It looks like challenges with fewer than roughly 500 entries are OK.