rails-assets: Frequent timeouts

Our CI system has recently been hitting very frequent timeouts when installing gems sourced from rails-assets.org. Here is one example error message:

Gem::RemoteFetcher::UnknownHostError: timed out
(https://rails-assets.org/gems/rails-assets-iso-currency-0.2.7.gem)
An error occurred while installing rails-assets-iso-currency (0.2.7), and
Bundler cannot continue.
Make sure that `gem install rails-assets-iso-currency -v '0.2.7'` succeeds
before bundling.

The gem that times out varies from run to run, and most gems from rails-assets.org do install, but roughly 60% of the time a timeout fails the CI run. Is there anything we might be doing wrong, or any workaround you could suggest? Thanks a lot!
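In the meantime we are considering vendoring the gems so CI does not have to reach rails-assets.org at all. A sketch of what we have in mind, assuming our CI checks out the repository with vendor/cache committed:

$ bundle package            # cache every resolved .gem file into vendor/cache
$ git add vendor/cache      # commit the cached gems next to Gemfile.lock
# on CI:
$ bundle install --local    # install from vendor/cache without any network fetch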

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 4
  • Comments: 18 (11 by maintainers)

Most upvoted comments

@hut8

$ curl --silent --verbose --output /dev/null rails-assets.org
* Rebuilt URL to: rails-assets.org/
*   Trying 198.199.120.180...
* TCP_NODELAY set
* Connected to rails-assets.org (198.199.120.180) port 80 (#0)
> GET / HTTP/1.1
> Host: rails-assets.org
> User-Agent: curl/7.51.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Curl_http_done: called premature == 1
* Closing connection 0
# from a bundle update locally
Fetching source index from https://rails-assets.org/

Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rails-assets.org/
Retrying fetcher due to error (3/4): Bundler::HTTPError Could not fetch specs from https://rails-assets.org/
Retrying fetcher due to error (4/4): Bundler::HTTPError Could not fetch specs from https://rails-assets.org/
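Those four attempts look like Bundler's default of three retries on top of the initial fetch. As a stopgap while the server is flaky, the retry count can be raised; a minimal sketch, assuming a Bundler version that supports the retry setting:

$ bundle install --retry 10   # 1 initial attempt + up to 10 retries per fetch error
$ bundle config retry 10      # or persist the setting for every bundle install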

@hut8 thanks for staying on top of it! You guys run a great service 😃.

Might be fixed. I’m leaving it on the secondary now because the primary seems more unstable (although I have no way of diagnosing why other than “Something is weird with the Digital Ocean volumes”).

Yeah we have a big issue here. Trying to fix now.

Everything looks good. Please open a new issue if other problems crop up. We’ll have an internal post-mortem tomorrow and decide how to better scale the infrastructure, since at the time we deployed the failover node, NYC1 was the only region with block storage available. Now that there are more options, we will be moving it shortly to a different region. Thanks for your patience, everyone!