verdaccio: Unexpected EOF while installing/downloading large packages

Description:

Installing larger packages (react-native-awesome-card-io in this scenario) fails consistently from verdaccio with an unexpected EOF. The package is published to the local verdaccio server itself, not proxied from the public registry. It installs fine from the public registry at https://registry.npmjs.org.

Even a simple wget of the tarball ends abruptly:

HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  X-Powered-By: verdaccio/2.3.4
  Access-Control-Allow-Origin: *
  Content-Type: application/octet-stream
  Content-Length: 137211454
  X-Status-Cat: http://flic.kr/p/aVuVsF
  Date: Mon, 21 Aug 2017 11:43:35 GMT
  Connection: keep-alive
Length: 137211454 (131M) [application/octet-stream]
Saving to: ‘react-native-awesome-card-io-0.6.6.tgz.2’

react-native-awesome-card-io-0.6.6.tgz.2        65%[======================================================================>                                      ]  85.73M  13.6MB/s    in 6.1s    

2017-08-21 17:15:23 (14.1 MB/s) - Connection closed at byte 89897191. Retrying.

Steps to reproduce:

  • Publish the package to Verdaccio
  • Try installing it with yarn or npm (a command-line sketch follows below)
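
A sketch of those two steps from the command line (the registry URL here assumes verdaccio’s default port 4873; adjust it to your setup):

$ npm publish --registry http://localhost:4873
$ npm install react-native-awesome-card-io --registry http://localhost:4873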

The issue occurs when running the latest Docker image as well as on the host system with verdaccio installed through npm, and it has persisted on both a Linux host and a macOS host.

It appears to be timeout-related (roughly 7-10 seconds into the transfer), since the download completes successfully if the download speed is high enough. The byte offset at which the connection closes is not constant either.

App Version:

2.3.4

Config file:

The default config file, with only max_body_size increased to 300mb to allow publishing packages of this size.
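
For reference, a minimal sketch of that change (everything else is left at the stock defaults):

# config.yaml (only the changed key is shown; the rest is the default config)
max_body_size: 300mb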

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 9
  • Comments: 53 (17 by maintainers)

Most upvoted comments

The issue is related to the keep-alive configuration of the Node HTTP server. @vetruvet’s rate-limit approach will help to replicate the race condition.

A very quick workaround is to disable the keep-alive functionality altogether. Sorry, I haven’t had time to investigate the original cause. If somebody can dig further, it’ll be appreciated.

src/lib/cli.js: 

    } else { // http
      webServer = http.createServer(app);
    }
    webServer.keepAliveTimeout = 0; // <<< WORKAROUND
    webServer
      .listen(addr.port || addr.path, addr.host)
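
According to the Node docs, setting keepAliveTimeout to 0 disables the keep-alive timeout behaviour on incoming connections entirely, so the server never closes a socket on that timer.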

In fact, @philkunz gave us the best clue for this issue.

I’ll explain. The problem is that when the client connection is too slow, nginx buffers the response from verdaccio until its buffer size is reached. That can happen when the client connection is slower than the connection between nginx and verdaccio. The question is what happens next.

The keep-alive connection between nginx and verdaccio (the Node HTTP server) should not be closed while there is still data to send, but it looks like it is closed (just an assumption).

It’s closed after approximately 5 seconds, which corresponds to the default setting of keepAliveTimeout on the Node HTTP server.

server.keepAliveTimeout
Added in: v8.0.0
<number> Default: 5000 (5 seconds)
See http.Server#keepAliveTimeout.

If my assumption about the keep-alive connection between nginx and verdaccio being closed during an ongoing data transfer is correct, it would be great to investigate who closes that connection and why.

My suspicion is that this can be related to the uplink storage in verdaccio. In the case of a big file there is some post-processing which doesn’t give the event loop a tick for transmitting data to the client. So the connection reaches the keep-alive timeout threshold and the Node server closes that connection to the client, even while the data download from the registry is still ongoing.

This would make sense, because the original author of this logic in verdaccio may never have hit the problem: the keep-alive timeout was only introduced in Node 8.0.
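
To illustrate the mechanism outside verdaccio, here is a minimal sketch (not verdaccio code; the port 4874 and the payload size are arbitrary) that queues a large response at once and then drains it to a rate-limited client. On an affected Node version (8.0.x–8.8.x) the socket may be closed roughly keepAliveTimeout after the response has been queued, even though the client is still reading:

// Minimal repro sketch, not verdaccio code
const http = require('http');
const crypto = require('crypto');

// ~30 MiB of random data, standing in for a large tarball
const payload = crypto.randomBytes(30 * 1024 * 1024);

const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'application/octet-stream',
    'Content-Length': payload.length,
  });
  // Everything is handed to the socket at once; a rate-limited client needs
  // far longer than the default keepAliveTimeout (5000 ms) to drain it.
  res.end(payload);
});

server.listen(4874, () => {
  console.log('try: curl -o /dev/null --limit-rate 1M http://127.0.0.1:4874/');
});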

Having this issue with the latest docker image (sha256: 122111476bf8e6f397de4babb3cd45390b3d3ecc8bd60ec385d01bafd4ce358f) and with v2.6.6.

Any ideas?

Update @juanpicado: Most of the time it does work on the second try. npm ci outputs:

WARN tarball tarball data for material-design-icons@3.0.1 (sha1-mnHEh0chjrylHlGmbaaCA4zct78=) seems to be corrupted. Trying one more time.

It succeeds after that.

Verdaccio log with level trace:

 info <-- 10.220.xxx.xxx requested 'GET /material-design-icons/-/material-design-icons-3.0.1.tgz'
 http <-- 200, user: undefined(10.220.xxx.xxx via 10.220.xxx.xxx), req: 'GET /material-design-icons/-/material-design-icons-3.0.1.tgz', bytes: 0/32990579

Maybe related? https://github.com/npm/registry/issues/202

I am also experiencing this issue. I’ve found that I am able to reproduce it consistently, from a clean local install, by rate-limiting curl and attempting to download a relatively large package’s tgz file:

$ git clone git@github.com:verdaccio/verdaccio.git
$ cd verdaccio
$ npm i
$ npx verdaccio

Then in another terminal:

$ while true; do curl 'http://127.0.0.1:4873/picturefill/-/picturefill-3.0.2.tgz' --limit-rate 1M | md5sum; sleep 1; done

Output from curl:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 96 14.5M   96 14.0M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1022k
curl: (18) transfer closed with 546327 bytes remaining to read
27104771a7a00310e993bbace951a332  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 96 14.5M   96 14.0M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1025k
curl: (18) transfer closed with 546327 bytes remaining to read
27104771a7a00310e993bbace951a332  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 96 14.5M   96 14.0M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1022k
curl: (18) transfer closed with 546327 bytes remaining to read
27104771a7a00310e993bbace951a332  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14.5M  100 14.5M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1023k
2d634f05cb3f0f0ad5143be7ddf66924  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14.5M  100 14.5M    0     0  1024k      0  0:00:14  0:00:14 --:--:-- 1025k
2d634f05cb3f0f0ad5143be7ddf66924  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14.5M  100 14.5M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1026k
2d634f05cb3f0f0ad5143be7ddf66924  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 95 14.5M   95 13.8M    0     0  1023k      0  0:00:14  0:00:13  0:00:01 1027k
curl: (18) transfer closed with 742776 bytes remaining to read
465ef6504058ffa825c8210d60651f2b  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 92 14.5M   92 13.4M    0     0  1023k      0  0:00:14  0:00:13  0:00:01 1026k
curl: (18) transfer closed with 1201157 bytes remaining to read
90c60889c22766003927b0f54903ab68  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14.5M  100 14.5M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1026k
2d634f05cb3f0f0ad5143be7ddf66924  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 97 14.5M   97 14.2M    0     0  1023k      0  0:00:14  0:00:14 --:--:-- 1027k
curl: (18) transfer closed with 349878 bytes remaining to read
611379484ebffff06668801af5aa1c3d  -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 93 14.5M   93 13.6M    0     0  1023k      0  0:00:14  0:00:13  0:00:01 1025k
curl: (18) transfer closed with 1004708 bytes remaining to read
bb6050f5780259b0ca3e0dd352d83e00  -

That file is 15MB and should have an MD5 of 2d634f05cb3f0f0ad5143be7ddf66924. Without the --limit-rate option, it works perfectly fine.

For context, we have this behind a reverse proxy (tried both nginx and apache) and the same behavior occurs consistently as the proxy runs out of allowed buffer space while streaming over a slow connection.

EDIT: for completeness I’m running node 8.6.0 and NPM 5.5.1 on Ubuntu (kernel 4.13)

FWIW, we weren’t able to identify the actual cause, but we were able to work around it by setting up an nginx reverse proxy in front of verdaccio on the same host machine and accessing the registry through the proxy.
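
A minimal sketch of that kind of setup (the server name, port and sizes here are placeholders to adjust for your environment):

# /etc/nginx/conf.d/verdaccio.conf (sketch; names and sizes are placeholders)
server {
    listen 80;
    server_name npm.example.local;

    # Allow publishing large packages through the proxy.
    client_max_body_size 300m;

    location / {
        proxy_pass http://127.0.0.1:4873;   # verdaccio default port on the same host
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # With default buffering, nginx drains verdaccio over the fast loopback
        # connection and then serves the slow client itself.
        proxy_read_timeout 600s;
    }
}

Clients then point at the proxy rather than hitting the Node server directly.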

Thanks for the fix. For anyone not using the official docker images, upgrading to Node 8.9.0+ (or 9.0.0+) will fix the issue.

Putting it behind an nginx proxy did fix the problem for me

I have long periods with no issues, then all of a sudden this happens for about two or three days. I have tried both with and without the nginx proxy. The only solution is to modify the source and manually set the keep-alive timeout.

i have to agree with @actraiser - i am having the same problem here with 2.7.4 😦

i can reproduce it easily like this (from what i can tell 100% reproduction rate!)

  • take the deep link/url to a package tgz
  • go to chrome and in devtools enable network throttling (so the download takes a bit)
  • even with a localhost loopback (no proxies, no nothing) the transfer will fail if it takes too long 😦

interestingly, publishing seems to work reliably even for bigger packages!

Thanks @dbroadhurst for fixing this. Shipped in v2.6.5.

I’m not aware of any timeout, but I’ll try to dig a bit and see what I find out. Since I’m not a Docker expert 😥 it would be great if anyone else could dig in as well in order to fix this issue.

I’ve reproduced the same issue, in this case with geoip-lite, which is a fairly large package (~30MiB). It doesn’t work no matter what I try, even after npm cache clean --force. I have verdaccio 2.3 running in an AWS-hosted Docker container.

It works perfectly with the standard npm registry.

This is the npm log:

1777 http fetch GET 200 https://npm.foo.bar/geoip-lite/-/geoip-lite-1.2.1.tgz 12714ms
1778 silly fetchPackageMetaData error for geoip-lite@~1.2 unexpected end of file
1779 verbose stack Error: unexpected end of file
1779 verbose stack     at Gunzip.zlibOnError (zlib.js:152:15)
1780 verbose cwd /Users/me/work/my-project
1781 verbose Darwin 16.7.0
1782 verbose argv "/Users/me/.nvm/versions/node/v8.5.0/bin/node" "/Users/me/.nvm/versions/node/v8.5.0/bin/npm" "i"
1783 verbose node v8.5.0
1784 verbose npm  v5.3.0
1785 error code Z_BUF_ERROR
1786 error errno -5
1787 error unexpected end of file
1788 verbose exit [ -5, true ]

geoip-lite-1.2.1.tgz itself is consistently downloaded correctly by verdaccio, though.

Edit: I’ve also tried it with a simple curl -O https://npm.foo.bar/geoip-lite/-/geoip-lite-1.2.1.tgz and the download ends abruptly, the same as in @yogeshwar-20’s case.

Alright, thanks for your time! I’ve already tried cleaning the cache and doing fresh installs, and there’s nothing obvious that stands out in the local environment, but I’ll dig deeper to try and identify the issue.