sregistry: Connection timeout pulling big containers
Hi @vsoch,
Sorry again for bringing up these nginx errors … but I’m not very experienced with these web services and I think this could be easy for you. 😛
I’m trying to pull a 2.3 GB image and I always get the following error:
nginx_1 | 2018/01/18 16:46:59 [error] 6#6: *3 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXXXXX, server: localhost, request: "GET /containers/8/download/3c548175-fd5c-4033-b667-be836f0c7f4b HTTP/1.1", upstream: "uwsgi://172.17.0.4:3031", host: "XXXXXXX2
The connections are always closed after 60s.
This is the current nginx.conf file:
server {
    listen *:80;
    server_name localhost;

    client_max_body_size 8000M;
    client_body_buffer_size 2000M;
    client_body_timeout 900;
    send_timeout 900;

    add_header X-Clacks-Overhead "GNU Terry Pratchett";
    add_header Access-Control-Allow-Origin *;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    location /images {
        alias /var/www/images;
    }

    location / {
        include /etc/nginx/uwsgi_params.par;
        uwsgi_pass uwsgi:3031;
    }

    location /static {
        alias /var/www/static;
    }
}
server {
    listen 443;
    server_name localhost;
    root html;

    client_max_body_size 8000M;
    client_body_buffer_size 2000M;
    client_body_timeout 900;
    send_timeout 900;

    add_header X-Clacks-Overhead "GNU Terry Pratchett";
    add_header Access-Control-Allow-Origin *;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    ssl on;
    ssl_certificate XXXXXXXXXXXXX.crt;
    ssl_certificate_key XXXXXXXXXXX.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA;
    ssl_session_cache shared:SSL:50m;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_prefer_server_ciphers on;

    location /images {
        alias /var/www/images;
    }

    location /static {
        alias /var/www/static;
    }

    location / {
        include /etc/nginx/uwsgi_params.par;
        uwsgi_pass uwsgi:3031;
    }
}
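For what it’s worth, the 60 s cutoff matches nginx’s default uwsgi_read_timeout, so I guess nginx gives up while uwsgi is still preparing the response. Raising the uwsgi timeouts in the proxied location keeps the connection open longer, though it probably only papers over the real problem (the 900 s values below are just a guess):

    location / {
        include /etc/nginx/uwsgi_params.par;
        uwsgi_pass uwsgi:3031;
        # default for both is 60s; 900 here is only an illustrative guess
        uwsgi_read_timeout 900;
        uwsgi_send_timeout 900;
    }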
Thanks in advance!
About this issue
- State: closed
- Created 6 years ago
- Comments: 18 (18 by maintainers)
Commits related to this issue
- Merge pull request #90 from dctrud/download_streaming Address #82 large container pulls with download streaming — committed to singularityhub/sregistry by vsoch 6 years ago
Hi all - I don’t think there’s anything that’s useful to compare from my logs - it just works. 200 OK codes all round, so I did a bit more digging…
@victorsndvg - Looking back through the above, your logs show that uwsgi produced very few bytes of output over the relatively long time the request ran. We can probably put in massive timeouts to make it work, but that’s not optimal, and you are likely to hit problems when the download size gets close to, or exceeds, available RAM…
I took a look into the sregistry download code and noted that the container download is being sent out via a plain HttpResponse. This results in the file being read into RAM in its entirety before anything gets sent out - and from your logs it’s timing out during this stage, before any data is sent. There are two better ways to handle large files in Django:
1. Use StreamingHttpResponse with the basehttp FileWrapper. This way the uwsgi app doesn’t read the entire file into RAM; it streams it off disk straight to nginx and then out over the HTTP connection (first sketch below).
2. Use xsendfile (django-sendfile has a good implementation supporting nginx), which offloads serving the download to nginx so it streams the file out directly (second sketch below).
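A minimal sketch of the streaming approach. The view name, container.image.path, and the chunk size are illustrative assumptions, not the actual sregistry code; on recent Django versions FileWrapper lives in wsgiref.util rather than django.core.servers.basehttp:

    import os
    from wsgiref.util import FileWrapper
    from django.http import StreamingHttpResponse

    def download_container(request, container):
        # Hypothetical view: stream the image file in chunks instead of
        # loading the whole thing into memory with a plain HttpResponse.
        path = container.image.path
        wrapper = FileWrapper(open(path, 'rb'), blksize=8 * 1024 * 1024)
        response = StreamingHttpResponse(wrapper,
                                         content_type='application/octet-stream')
        response['Content-Length'] = os.path.getsize(path)
        response['Content-Disposition'] = ('attachment; filename="%s"'
                                           % os.path.basename(path))
        return response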
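And a rough sketch of the xsendfile route for nginx, which works via the X-Accel-Redirect header: the Django view only emits a header naming an internal location, and nginx streams the file itself. The /internal-images/ location name, view, and paths are assumptions for illustration; django-sendfile wraps this pattern up for you:

    # nginx: an internal-only location that maps onto the image store
    location /internal-images/ {
        internal;
        alias /var/www/images/;
    }

    # Django view (hypothetical): hand the actual transfer off to nginx
    import os
    from django.http import HttpResponse

    def download_container(request, container):
        path = container.image.path          # assumed to live under /var/www/images/
        response = HttpResponse()
        response['Content-Type'] = 'application/octet-stream'
        response['Content-Disposition'] = ('attachment; filename="%s"'
                                           % os.path.basename(path))
        # nginx replaces the response body with the file at this internal URI
        response['X-Accel-Redirect'] = '/internal-images/' + os.path.basename(path)
        return response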
I’ve done both of these in systems at work that need to serve files that could be in the 100s of GBs without issue.
@vsoch I can try to put together a PR to address this over the weekend.