grpc: Ruby client hangs when connecting to cloud datastore with the Puma webserver
After updating gcloud-ruby (to v0.9.0) to use Cloud Datastore v1beta3, the Ruby client hangs when connecting to the datastore. We are using gcloud-ruby in a Rails app and can reproduce the hang on Heroku with both the Puma and Phusion Passenger web servers; WEBrick, however, seems to work fine on Heroku.
We added the environment variable `GRPC_TRACE=all` to capture this trace while running under Puma.
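For reference, a trace like this can be captured by setting gRPC's debug environment variables before starting the server. This is a sketch; the exact Puma invocation and config path are assumptions, not from the original report:

```shell
# GRPC_TRACE and GRPC_VERBOSITY are read by the grpc C core at startup.
# "all" enables every tracer; DEBUG emits the most detailed log output.
GRPC_TRACE=all GRPC_VERBOSITY=DEBUG bundle exec puma -C config/puma.rb
```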
The datastore dataset is initialized like this:

```ruby
dataset = Gcloud.datastore name_of_project
```

and we experience the hang after the first call like this:

```ruby
dataset.run query
```
It appears that gRPC connects to the server, completes a TLS handshake, and starts initializing an HTTP/2 stream. Then it suddenly stops, and ~20 seconds later Heroku kills it.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 19 (11 by maintainers)
As the problem seems to occur with web servers that fork multiple processes, I tried disabling Puma's clustered mode (which forks worker processes) by configuring `workers 0`. This did allow a localhost Rails app connected to an actual datastore to function properly.

Since I was creating the `Gcloud.datastore` dataset object in a Rails initializer, I moved it elsewhere, which then allowed a localhost Rails app with multiple Puma processes (`workers 2`) connected to an actual datastore to function properly.

However, this still did not work when deployed to Heroku and running in production mode. One difference between the development and production environments is that eager loading is enabled at boot in production. I set `config.eager_load = false` in `config/environments/production.rb` and deployed to Heroku, and that does work.

Any idea why the `Gcloud.datastore` dataset object can't be instantiated in a Rails initializer when using multiple processes or with eager loading enabled?
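A pattern consistent with the observations above is that a gRPC channel created before `fork` is not usable in the forked children, which is why the initializer (run pre-fork, and triggered at boot by eager loading) hangs while lazy creation works. A minimal sketch of a workaround, assuming Puma's clustered mode and the same `Gcloud.datastore` call from the issue (the project name and global variable are placeholders):

```ruby
# config/puma.rb -- sketch, not the reporter's actual config.
workers 2
threads 1, 5

on_worker_boot do
  # Create the gRPC-backed client here, inside each forked worker,
  # instead of in a Rails initializer that runs (and, with
  # eager_load = true, is exercised) in the parent before forking.
  $datastore = Gcloud.datastore "name_of_project"
end
```

The same idea applies to Passenger via its `starting_worker_process` hook: defer anything that opens a gRPC channel until after the worker process exists.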