distribution: Private registry push reports "blob upload unknown" in client even though data gets pushed correctly

My setup is as follows:

  • registry from upstream 2.6 image
    • single replica running on kubernetes
    • storage is backed by NFS
  • access to registry is handled by haproxy loadbalancer with https termination (talks to registry through plain http on port 5000)
  • Docker 17.03.0-ce (build 60ccb22) on Debian

This setup previously worked when I wasn’t using HTTPS: I had the loadbalancer expose the registry over plain HTTP on port 5000 and used it via the “localhost:5000” address (on each machine where the loadbalancer runs). In that scenario pushing and using the images works fine.

Now I’m trying to push an image to the registry through the HTTPS endpoint: $ docker push images.bigkuber.inside.datax.pl/kskalski-dataflows but I’m getting

65723c989499: Pushing  2.56 kB
0ab03bb42090: Layer already exists
7a44d82d0fdd: Pushing [==================================================>] 1.086 MB/1.086 MB
b447dea7bad2: Pushing [==================================================>] 3.584 kB
a8d49715960c: Pushing [==================================================>] 4.096 kB
07d1c5d3b264: Pushing [==================================================>] 6.656 kB
0ef630c8efc9: Retrying in 5 seconds
910397601113: Waiting
316d0eee43a0: Waiting
c32193acdde5: Waiting
34787f338616: Waiting
35c20f26d188: Waiting
c3fe59dd9556: Waiting
6ed1a81ba5b6: Waiting
a3483ce177ce: Waiting
ce6c8756685b: Waiting
30339f20ced0: Waiting
0eb22bfb707d: Waiting
a2ae92ffcd29: Waiting
blob upload unknown

However, even though the push fails, the image is actually uploaded correctly; I can query its status and use it in new containers:

$ docker pull images.bigkuber.inside.datax.pl/kskalski-dataflows
Using default tag: latest
latest: Pulling from kskalski-dataflows
Digest: sha256:9cfae8e7c4bda509c5d16acbf2ae66c62fe8acb5709579a6e18f18923bd88635
Status: Image is up to date for images.bigkuber.inside.datax.pl/kskalski-dataflows:latest

I enabled debug logging on the Docker client; here is an excerpt from the relevant time interval:

Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.030831365+02:00" level=debug msg="Pushing layer: sha256:b447dea7bad20f7b52b36d7337a6c83cb30cbad1a29aa8c797c37384b17e4531"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061256021+02:00" level=debug msg="Pushing layer: sha256:65723c989499242e11b29c1295aa3b547f1467d615c029e00a3cb0923a0c29e6"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061303530+02:00" level=debug msg="Pushing layer: sha256:a8d49715960cb92ff168733c04cd426fb674b542096411f7d1e1db28d9bcf609"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.061954618+02:00" level=debug msg="Pushing layer: sha256:07d1c5d3b26422751e3c464265eb13148248b70f892ea0d404fd240425e0bb55"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.063894654+02:00" level=debug msg="Pushing layer: sha256:7a44d82d0fdde5954e177b58898ddfeae9c60c7db0c871c20b191ef857bf22ba"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.076830463+02:00" level=debug msg="Assembling tar data for e6893eaa2260964bee7d8bb9da7c4188f48606e0d5fbafba2b55838085917992"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.081559957+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.081840809+02:00" level=debug msg="Pushing layer: sha256:0ef630c8efc92b60505b62e7d8c942defd43bbd9d1ac869f6b48246149e24ab1"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.108774427+02:00" level=debug msg="Assembling tar data for d673eff3ea3a4a9ae6034dd7e922d3b13ea471b7c480342d35f58d0800088a3b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.110283019+02:00" level=debug msg="Assembling tar data for 6a0340f028c1fe428c324236ff32d551c3edfccc614c18861e99554be127aa6a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.110364733+02:00" level=debug msg="Assembling tar data for 22e24cc59be1ad28fcc8e58beb3853808e23f546cbdf10c7f2d70b6a6b06f567"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111267528+02:00" level=debug msg="Assembling tar data for 8380d1e921c9fe78d71a3a6901ede37f356520dfb1d0ebd705e4d0dcbc998802"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111713175+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.111906935+02:00" level=debug msg="Checking for presence of layer sha256:910397601113062c71d20ce1a50fc3ce0b4573e3eb5e37d9d777cded43961f6b (sha256:2b7dde8c38ea6846165bf7c01af325d72d1d4af8eafe58e7b57d09c0a7b77d3c) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.112535988+02:00" level=debug msg="Assembling tar data for ddc858a799ed50a534a4fed59ac72d1525c2bb10c3befe51622c87cb124ab19e"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135253097+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135309486+02:00" level=error msg="Upload failed, retrying: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135363366+02:00" level=error msg="Upload failed: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135545460+02:00" level=debug msg="Pushing layer: sha256:316d0eee43a0f6b0b5bd2145b4b94e8e29607110d32acf255e7dc6e1ae3f6f28"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135670578+02:00" level=debug msg="Checking for presence of layer sha256:c32193acdde589a51f40676750f007a76539cd4cb5500bfd2556c0673c165a90 (sha256:28b75cbefef624f1c57d7a2e061b2e6de129eaf44da264563b209147f19734a1) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135548087+02:00" level=error msg="Attempting next endpoint for push after error: blob upload unknown"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135839434+02:00" level=debug msg="Skipping v1 endpoint https://images.bigkuber.inside.datax.pl because v2 registry was detected"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.135894206+02:00" level=debug msg="Pushing layer: sha256:34787f33861626ac4a649170b17d4f81e25ab1aeb300d9455f96b8e5402d229f"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.137523719+02:00" level=debug msg="Pushing layer: sha256:35c20f26d18852b74cc90afc4fb1995f1af45537a857eef042a227bd8d0822a3"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.150771828+02:00" level=debug msg="Checking for presence of layer sha256:c3fe59dd955634c3fa1808b8053353f03f4399d9d071be015fdfb98b3e105709 (sha256:81cf5426393a4ac116dac26d8e0f95ea3ba85afcc09bc6eafdbd2efc598aa180) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.157682152+02:00" level=debug msg="Pushing layer: sha256:6ed1a81ba5b6811a62563b80ea12a405ed442a297574de7440beeafe8512a00a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.166616697+02:00" level=debug msg="Pushing layer: sha256:a3483ce177ce1278dd26f992b7c0cfe8b8175dd45bc28fee2628ff2cf063604c"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.178014534+02:00" level=debug msg="Assembling tar data for 47fb797a1dcf74556ef32502e03e7b0ac004cee062557980cbc926e06f51bd4b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.178823622+02:00" level=debug msg="Assembling tar data for c4e08667b54798acaa89006674f1fab79651c4ec635e18d4e1ad6adea129aaf8"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.179815379+02:00" level=debug msg="Pushing layer: sha256:ce6c8756685b2bff514e0b28f78eedb671380084555af2b3833e54bb191b262a"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.180345018+02:00" level=debug msg="Checking for presence of layer sha256:30339f20ced009fc394410ac3360f387351641ed40d6b2a44b0d39098e2e2c40 (sha256:3318dd58ae6084d70d299efb50bcdf63e861f2dc3d787e03a751581e606442d9) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.188727340+02:00" level=debug msg="Assembling tar data for a928c5db828f3fd176e7eaf494da8b78e2617a32559c790f6af47e9f41e75a5b"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.188856382+02:00" level=debug msg="Assembling tar data for 26deb052b00c3f52d7f83ad2cb741fb489c51329a264ac1b05c8fe779953770d"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.189596714+02:00" level=debug msg="Checking for presence of layer sha256:0eb22bfb707db44a8e5ba46a21b2ac59c83dfa946228f04be511aba313bdc090 (sha256:8d9ed335b7dbe095ecfbbfe0857d07971283db0119f7a4aa490f9cbe06187335) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.189888533+02:00" level=debug msg="Checking for presence of layer sha256:a2ae92ffcd29f7ededa0320f4a4fd709a723beae9a4e681696874932db7aee2c (sha256:e12c678537aee9a1a1be8197da115e7c4d01f2652344f492a50ca8def9993d1e) in images.bigkuber.inside.datax.pl/kskalski-dataflows"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.191143301+02:00" level=debug msg="Assembling tar data for ad37e77046b99529724637bc995cd2ae9e125ac930063259f39c916ccd8975e1"
Mar 26 06:14:09 bigkuber1 dockerd[10182]: time="2017-03-26T06:14:09.219138614+02:00" level=debug msg="Assembling tar data for 48e2bf3066643a77760f3a53b7ac4bbf137eb1887c4c15ccd78ffd748b2b7f86"

As might be expected, the registry server itself reports no problems:

2017-03-26T04:14:09.400200681Z 10.38.0.0 - - [26/Mar/2017:04:14:09 +0000] "HEAD /v2/kskalski-dataflows/blobs/sha256:8d9ed335b7dbe095ecfbbfe0857d07971283db0119f7a4aa490f9cbe06187335 HTTP/1.1" 200 0 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:14:09.410077768Z time="2017-03-26T04:14:09Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=e726db90-591c-465b-9a5d-ecdf980ae24c http.request.method=POST http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/kskalski-dataflows/blobs/uploads/" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.duration=19.552371ms http.response.status=202 http.response.written=0 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:14:09.410104571Z 10.38.0.0 - - [26/Mar/2017:04:14:09 +0000] "POST /v2/kskalski-dataflows/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:26:22.159163623Z time="2017-03-26T04:26:22Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=51dbde7d-8785-446b-8403-fb47c684b0eb http.request.method=GET http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=2.710846ms http.response.status=200 http.response.written=2 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:26:22.159399084Z 10.38.0.0 - - [26/Mar/2017:04:26:22 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 
2017-03-26T04:26:22.188904478Z time="2017-03-26T04:26:22Z" level=info msg="response completed" go.version=go1.7.3 http.request.host=images.bigkuber.inside.datax.pl http.request.id=373d6cbd-94b9-40c2-aa45-25c8aa3ad944 http.request.method=GET http.request.remoteaddr=192.168.168.114 http.request.uri="/v2/kskalski-dataflows/manifests/latest" http.request.useragent="docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=18.19002ms http.response.status=200 http.response.written=4304 instance.id=b985b0b1-c857-474d-b112-ad6ede7c62c9 version=v2.6.0  
2017-03-26T04:26:22.188938918Z 10.38.0.0 - - [26/Mar/2017:04:26:22 +0000] "GET /v2/kskalski-dataflows/manifests/latest HTTP/1.1" 200 4304 "" "docker/17.03.0-ce go/go1.7.5 git-commit/60ccb22 kernel/3.16.0-4-amd64 os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.0-ce \\(linux\\))" 

I suspect something is configured incorrectly in the loadbalancer doing HTTPS termination, since going through localhost:5000 works fine. There must be some client/server interaction that happens only during push, and not during any other operation, that the loadbalancer’s HTTPS handling breaks.

About this issue

  • Original URL
  • State: open
  • Created 7 years ago
  • Reactions: 22
  • Comments: 29

Most upvoted comments

@Nick-Harvey I think you should modify proxy_set_header X-Forwarded-Proto $scheme to proxy_set_header X-Forwarded-Proto https
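For context, a minimal sketch of where that directive sits in an Nginx reverse proxy that terminates TLS in front of the registry (the hostname, certificate handling, and upstream address here are illustrative, not taken from the original setup):

```nginx
server {
    listen 443 ssl;
    server_name registry.example.com;   # illustrative hostname

    # Disable the request-body size limit for large layer uploads
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://registry:5000;   # illustrative upstream
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Hard-code https instead of $scheme when TLS is terminated here,
        # so the registry builds https:// upload URLs in its Location headers
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The point of hard-coding https is that the registry uses X-Forwarded-Proto to construct the Location header it returns for chunked blob uploads; if it advertises an http:// URL while the client is talking https, the follow-up upload requests go astray.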

Try adding http-request set-header X-Forwarded-Proto https if { ssl_fc } to the backend section of your HAProxy configuration.
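A minimal HAProxy sketch of that fix (frontend/backend names, certificate path, and server address are illustrative):

```haproxy
frontend registry_https
    bind *:443 ssl crt /etc/haproxy/registry.pem   # illustrative cert path
    default_backend registry_backend

backend registry_backend
    # Tell the registry the original request was https, so the blob-upload
    # Location URLs it returns to the client use the https scheme
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    server registry1 127.0.0.1:5000 check   # illustrative backend address
```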

I had the same problem with an Nginx reverse proxy behind an Amazon ELB (doing SSL termination). Forcing the protocol to https fixed the problem.

In case anyone lands on this with an Apache reverse proxy, here’s what fixed the issue on my setup:

RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}

I also faced the same issue @ritarya mentioned. When I ran the private registry with multiple replicas, docker image push kept retrying and failing. When I reduced the number of replicas to 1, the issue immediately went away…

I am also facing the issue behind the nginx ingress. Push works fine with a single-replica registry, but when I increase the number of replicas it gives the unknown blob upload message after waiting and retrying. My setup is: external docker client -> nginx-ingress -> registry (2 replicas).
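The multi-replica failure mode is consistent with blob uploads being a stateful, multi-request session (POST to open the upload, then PATCH/PUT against the returned upload URL): if those follow-up requests land on a different replica that doesn’t know the upload session, the registry answers "blob upload unknown". One possible mitigation, sketched below, is session affinity at the ingress so every request of a push hits the same pod. This is a hypothetical ingress-nginx resource; names, host, and service details are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry   # illustrative name
  annotations:
    # Pin each client to one registry replica so all requests of a
    # chunked blob-upload session reach the same pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "registry-affinity"
    # Registry layers can be large; lift the body-size limit
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
  - host: registry.example.com   # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-registry   # illustrative service
            port:
              number: 5000
```

Shared backing storage alone may not be enough here, since in-progress upload state can still be replica-local depending on the storage driver; affinity sidesteps that by keeping the whole session on one replica.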

@AndreaGiardini you just saved my life! I spent a whole morning on this issue (with the exact same configuration… ELB -> Nginx -> Registry)! ❤️

[Opened a new issue #2862 and moved my comments there because it seems like this might be common to several open issues.]

@dignajar if you contact JFrog support, we’ll be happy to work with you on this one. That said, this issue feels like a hokey response from the client to a network-layer glitch. It seems that if there’s an issue beyond network-layer config, it’s client-side, not server-side (and obviously an edge case, possibly just bad feedback). The NGINX config you provided looks like the default one we generate (although I haven’t checked it line by line), and that generally works (if it’s not the default NGINX config we generate, use the auto-generated one), but there can be additional network-layer issues that complicate things depending on your specific configuration.