official-images: "no supported platform found in manifest list" / "no matching manifest for XXX in the manifest list entries"
TLDR: Not all architectures are created equal, but perhaps even more importantly, not all build servers we have access to are equal in performance, power, or ability to process builds reliably.
Important: Please do not post here with reports of individual image issues – we’re aware of the overall problem, and this issue is a discussion of solving it generally. Off-topic comments will be deleted.
When we merge an update PR to https://github.com/docker-library/official-images, it triggers Jenkins build jobs over in https://doi-janky.infosiftr.net/job/multiarch/ (see https://github.com/docker-library/official-images/issues/2289 for more details on our multiarch approach).
Sometimes, we’ll have non-`amd64` image build jobs finish before their `amd64` counterparts, and due to the way we push the manifest list objects to the `library` namespace on the Docker Hub, that results in `amd64`-using folks (our primary target users) getting errors of the form “no supported platform found in manifest list” or “no matching manifest for XXX in the manifest list entries” (see linked issues below for several reports from users of this variety).

Thus, manifest lists under the `library` namespace are “eventually consistent” – once all arches complete successfully, the manifest lists get updated to include all the relevant sub-architectures.
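For illustration, here is a rough sketch of checking which platforms a manifest list currently contains, assuming a Docker CLI with the `manifest` subcommand (experimental on older releases) and `jq` installed:

```sh
# Show the os/architecture combinations currently present in a
# manifest list (e.g. to verify whether amd64 has been pushed yet).
docker manifest inspect hello-world:latest \
  | jq -r '.manifests[].platform | "\(.os)/\(.architecture)"'
```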
Our current method for combating the main facet of this problem (missing `amd64` images while other arches are successfully built and available) is to trigger `amd64` build jobs within an hour after the update PR is merged, and all other arches only within 24 hours. This helps to some degree in ensuring that `amd64` builds first, but not always. For example, our `arm32vN` servers are significantly faster than our AWS-based `amd64` server, so if those jobs happen to get queued at the same time as existing `amd64` jobs are, they’ll usually finish a lot more quickly. Additionally, given the slow IO speed of our AWS-based `amd64` build server, the queue for `amd64` build jobs piles up really quickly (which also doesn’t help with keeping our build window low).
As for triggering jobs more directly, the GitHub webhooks support in Jenkins makes certain assumptions about how jobs and pipelines are structured/triggered, so we can’t use GitHub’s webhooks to trigger these jobs effectively (without additional custom development sitting between the two systems) and instead rely on the built-in Jenkins polling mechanism. This has been fine (we haven’t noticed any scalability issues with how often we’re polling), and even if we were triggering builds more aggressively, that’s only half the problem (since then our build queues would just pile up faster).
One solution that has been proposed is to wait until all architectures successfully build before publishing the relevant manifest list. If a naïve version of this suggestion were implemented right now, we would have no image updates published because our `s390x` worker is currently down (as an example – we do frequently lose builder nodes given that all non-`amd64` arches are using donated resources). Additionally, as noted above, some architectures build significantly slower than others (before we got our hyper-fast ARM hardware, `arm32vN` used to take days to build images like `python`), so it isn’t exactly fair to force all architectures to wait for the one slowpoke before providing updated images to our userbase. As a final thought on this solution, some architectures outright fail, and the maintainers don’t necessarily notice or even care (for example, `mongo:3.6` on `windows-amd64` has been failing consistently with a mysterious Windows+Docker graph driver error that we haven’t had a chance to look into or escalate, and it wouldn’t be fair to block updated image availability on that).
One compromise would be to use the Jenkins Node API (https://doi-janky.infosiftr.net/computer/multiarch-s390x/api/json) to determine whether a particular builder is down, and use that to decide whether to block on builds of that architecture. Additionally, we could try to get creative with checking pending builds / queue length for a particular architecture’s builds to determine whether a given architecture is significantly backlogged and thus a good candidate for not waiting.
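A minimal sketch of that first check, assuming the standard Jenkins remote API plus `curl` and `jq`:

```sh
# Ask Jenkins whether the s390x builder node is currently offline,
# and use that to decide whether to block on s390x builds at all.
offline="$(curl -fsSL 'https://doi-janky.infosiftr.net/computer/multiarch-s390x/api/json?tree=offline' | jq -r '.offline')"
if [ "$offline" = 'true' ]; then
  echo 'multiarch-s390x is down; not blocking on s390x'
fi
```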
We could also attempt to determine when a particular tag was added/merged, and set a time limit for some number of hours before we just assume it must be backlogged, failing, or down and move along without that tag, but this is slightly more complicated (since we don’t have a modification time for a particular tag directly, and really can only determine that information at the image level without complex Git walking / image manifest file parsing). Perhaps even just a time limit at the image level would be enough, but in the case of our `mongo:3.6` example, that would mean all tag updates to `mongo` (whether they’re related to the 3.6 series or not) would wait the maximum amount of time before being updated due to one version+architecture combination failing.
Related issues: (non-comprehensive)
- https://github.com/c0b/docker-erlang-otp/issues/75
- https://github.com/docker-library/ghost/issues/106
- https://github.com/docker-library/hello-world/issues/39
- https://github.com/docker-library/official-images/issues/3467
- https://github.com/docker-library/official-images/issues/3472
- https://github.com/docker-library/official-images/issues/3653
- https://github.com/docker-library/official-images/pull/3587#issuecomment-337001965
- https://github.com/docker-library/official-images/pull/3622#issuecomment-339838650
- https://github.com/docker-library/python/issues/252
- https://github.com/docker-library/rabbitmq/issues/188
- https://github.com/docker-library/rabbitmq/issues/217
- https://github.com/docker-library/rabbitmq/issues/230
- https://github.com/docker-library/redis/issues/115
- https://github.com/docker-library/ruby/issues/159
- https://github.com/docker-library/wordpress/issues/239
- https://github.com/nodejs/docker-node/issues/571
- https://github.com/piwik/docker-piwik/issues/70
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 27
- Comments: 18 (7 by maintainers)
Links to this issue
Commits related to this issue
- Updated nginx. - mainline nginx is bumped to 1.13.9 - alpine mainline builds are now using alpine 3.7. — committed to docker-library/official-images by thresheek 6 years ago
- infra: Force a given docker architecture. See https://github.com/docker-library/official-images/issues/3835 Signed-off-by: Tim 'mithro' Ansell <me@mith.ro> — committed to f4pga/prjxray by mithro 6 years ago
- [Docker] Solve bug related to node:8 manifest For more information: https://github.com/docker-library/official-images/issues/3835 — committed to ejplatform/ej-front by altjohndev 6 years ago
- Declaratively specify that we would like to use amd64 node images While I extol the virtues of multiarch builds, our app primarily targets amd64 devices, and I'm pretty sure we can only build on amd6... — committed to hybsearch/hybsearch by rye 6 years ago
- [docker] Pin golang image to specific version (1.10.0) This is to avoid this issue when a rebuild is underway: https://github.com/docker-library/official-images/issues/3835 — committed to opsline/swizzle by opsline-chetan 6 years ago
- Remove https://github.com/docker-library/official-images/issues/3835 caveat 🎉 — committed to docker-library/faq by tianon 5 years ago
This issue is causing fairly regular and difficult-to-diagnose build failures for end users. It has been open here for over 3 months. What is the timeline to solving it?
As noted in the commit message on https://github.com/docker-library/oi-janky-groovy/commit/51e9901a4f387d93f11cc59732534835b466dc3b, this can finally be closed thanks to https://github.com/docker-library/official-images/pull/5897!! 🎉 🤘 ❤️
That’s fully implemented and working on our infrastructure now, which can be seen right this second with the recent `alpine:3.9` update that added `alpine:3.9.4` (https://github.com/docker-library/official-images/pull/5898), and `alpine:3.9`’s `amd64` entry is updated to point to the new `3.9.4` image but all the other architecture entries still point to `3.9.3` (that is, until they finally build and catch up, which should be triggering within the next hour 💪), and `alpine:3.9.4` is available and `alpine:3.9.3` is now “archived” and will remain untouched. 👍

Alpine Digest Comparisons:
Or, more clearly: 🎉
I may once more repeat the question from @nategraf:
What is the timeline to solving it?
@MattF-NSIDC fair point – this was intended as a tracking issue for the problem and discussion around how to solve the crux of it properly; I think a short blurb here about how to work around it in the meantime is definitely appropriate. Here’s my current recommendation:
- If you rely on a specific image, use https://github.com/docker-library/repo-info (linked from every image description) to find the exact `sha256` digest (also available from the `docker pull` output, but the `repo-info` repository has retroactive digests in the Git history), or even simply use a more specific tag.
- If you’re looking for a specific architecture, use the architecture-specific namespace to find it (as linked from both https://github.com/docker-library/official-images#architectures-other-than-amd64 and every image description under “Supported architectures”); see the example below.
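For example (the digest below is a placeholder, not a real value):

```sh
# Pin to an exact image by digest (placeholder shown; look up the real
# digest via repo-info or in the "docker pull" output):
docker pull alpine@sha256:<digest>

# Or pull explicitly from the architecture-specific namespace:
docker pull amd64/alpine:3.9
```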
As for ETA, even if we find a reasonable solution to “wait for things to be available”, we’ll still have a limit on how long we wait before pushing whatever we’ve got, which will likely be on the order of hours but still less than 24 (because IMO, 24h ought to be the absolute maximum we wait before we storm ahead, and should roughly match our current builds-scheduling timing).
I’ve been running into this issue in my work recently (with openjdk to be specific) and finally found this thread.
All of these solutions seem to me a bandaid on a larger issue. Why is it that each architecture’s image/tag can’t be published independently? i.e. Why is it that you must publish the full manifest at once, and if amd64 or arm32 or ppc64 aren’t ready yet, all of your users calling `docker pull` now receive an error? It seems like the expected behavior would be to get the last-good build for your requested image/tag/os/arch regardless of whether the pipeline of fresh builds is currently broken or slow.

All of the solutions in this thread seem to leave trust in pulling from the DockerHub registry fatally compromised. i.e. If I am running an infrastructure that trusts DockerHub to serve up the images to any new machine I spin up that needs them, I need a very high level of confidence that this will always work. If a build pipeline breaks or is extremely slow (as will surely happen sometimes), I will be served an “image not found” error instead of the latest-good image. None of the solutions here seem to provide that kind of guarantee, and it seems like there is a bigger problem behind the scenes. Requesting the SHA directly is a work-around, but seems far from ideal.
I’m probably missing some key facts, so I’d love to have someone correct me if I’m off-base. 🙂
What advice do you have for developers affected by this bug? I’m using an image which depends on docker-library/tomcat and I’ve been unable to build for about a half hour. I read your post pretty carefully, I think, but didn’t see any mention of a workaround. Based on what I’ve read, this is not a problem that can be solved on my side, I would just have to wait.
If that is the case, is there any way for me to do maybe an API query to estimate a wait time until dockerhub reaches a consistent state?
For https://github.com/docker-library/official-images/issues/4789 (which I’m guessing is the reason you’ve landed on this thread), we expect to solve the outage today.
For the more general problem, we need to develop a solution and implement it, and so far neither of those has turned out to be easy to do for the reasons outlined in detail above.
As a workaround, pulling images with the `amd64/` prefix should still be functional.

Same issue here, our Gitlab Kubernetes Build runner is refusing to pull the image.
I think there should be advice somewhere to pin to a certain SHA. Or is there already something out there?
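One way to find a digest to pin to, assuming the affected image (using `node:8` as an example here) has already been pulled successfully at least once:

```sh
# Print the repo digest of a locally available image; the resulting
# name@sha256:... reference can then be used in FROM lines or
# "docker pull" to pin to that exact content.
docker image inspect --format '{{join .RepoDigests ", "}}' node:8
```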