compose: Referencing local image name fails on "docker compose build"
Description
I’m getting an error from `docker compose build` with a setup that is consistently successful with `docker-compose build`.

I have a `docker-compose.yml` file with two services: `base` and `extended`. `base.image` gives a name to the image built by `base`, and I’d like to use that name as the `FROM` image in the Dockerfile for the `extended` service.

This works well with `docker-compose build`. It does not work with `docker compose build`.
Steps to reproduce the issue:

1. Re-create my `docker/` folder with these three files, plus an empty `README.md`. `cd` into `docker/`.
```yaml
# docker/docker-compose.yml
services:
  base:
    image: neilyio/base
    build:
      context: .
      dockerfile: base.Dockerfile
  extended:
    build:
      context: .
      dockerfile: extended.Dockerfile
```

```dockerfile
# docker/base.Dockerfile
FROM scratch
COPY README.md /root/README.md
```

```dockerfile
# docker/extended.Dockerfile
FROM neilyio/base
CMD cat /root/README.md
```
2. Run `docker-compose build`; expect a successful run.
3. Clear your cache and delete these new images so we have a clean comparison for the next step. I used these commands:

   ```
   docker image rm neilyio/base docker_extended
   docker system prune -f
   ```

4. Run `docker compose build`; expect a failure.
Describe the results you received:

`docker compose build` produces:
```
[+] Building 0.6s (8/8) FINISHED
 => [docker_extended internal] load build definition from extended.Dockerfile   0.0s
 => => transferring dockerfile: 88B                                             0.0s
 => [neilyio/base internal] load build definition from base.Dockerfile          0.0s
 => => transferring dockerfile: 86B                                             0.0s
 => [docker_extended internal] load .dockerignore                               0.0s
 => => transferring context: 2B                                                 0.0s
 => [neilyio/base internal] load .dockerignore                                  0.0s
 => => transferring context: 2B                                                 0.0s
 => ERROR [docker_extended internal] load metadata for docker.io/neilyio/base:latest  0.4s
 => [neilyio/base internal] load build context                                  0.0s
 => => transferring context: 3.65kB                                             0.0s
 => [neilyio/base 1/1] COPY README.md /root/README.md                           0.0s
 => [neilyio/base] exporting to image                                           0.0s
 => => exporting layers                                                         0.0s
 => => writing image sha256:e29ad2347eed9046148ed435ca66984c9421c1d39ae9f40004a62658e60640c3  0.0s
 => => naming to docker.io/neilyio/base                                         0.0s
------
 > [docker_extended internal] load metadata for docker.io/neilyio/base:latest:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
```
Describe the results you expected:

I expected `docker compose build` to find the locally-built `neilyio/base` image; instead it seems to try to pull from `docker.io/neilyio/base:latest`. I expected `docker compose build` to have the same behaviour as `docker-compose build`, which successfully found the local image.
Additional information you deem important (e.g. issue happens only occasionally):

This can be a little tricky to reproduce because of Docker’s caching. If `neilyio/base` is already built, `docker compose build` will successfully find the local image, which can give the impression that it’s working. My step 3 above, clearing the cache, is important for reproducing this accurately. I found I needed to do both a `system prune` and an `image rm` for this.

`docker-compose build` works every time, whether or not `neilyio/base` has been built before.
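To confirm the cache is actually clear before re-running the build, it can help to check that the tag is really gone (a sketch; `docker_extended` is the image name Compose generated in this report):

```shell
# remove the locally-built images and the build cache
docker image rm neilyio/base docker_extended
docker system prune -f

# should print an empty table if the image was removed
docker image ls neilyio/base
```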
Output of `docker version`:

```
Client:
 Cloud integration: 1.0.14
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.16.3
 Git commit:        370c289
 Built:             Fri Apr  9 22:46:57 2021
 OS/Arch:           darwin/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:44:13 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
Output of `docker context show`:

```
default
```
Output of `docker info`:

```
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  compose: Docker Compose (Docker Inc., 2.0.0-beta.1)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 2
 Server Version: 20.10.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.25-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 1.942GiB
 Name: docker-desktop
 ID: OP3D:IHZS:FQCX:56ZP:HNOA:X4KO:2EF2:AOY2:URIC:5GF6:LUHX:Z7QD
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Additional environment details (AWS ECS, Azure ACI, local, etc.):

Local run on an M1 MacBook Air.
About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 13
- Comments: 28 (1 by maintainers)
Not sure if this is valid or not, but I have the same issue and I solved it using these 2 commands:

After I ran those, I re-ran `docker-compose up`/`build` and it was solved.
@ndeloof It seems like the problem arises when the dependent service is using the `image` property rather than a `build` section.

- Dockerfile
- docker-compose.yml
- Error
- Full Gist

The service `sub-with-build` waits for `base-image` to be built and then runs as expected, but `sub-with-image` fails. Is this the expected outcome?

To better cover this scenario, it seems we should define a new `depends_on` condition dedicated to build requirements, i.e.
I’ll experiment with this approach and prepare a proposal on https://github.com/compose-spec/compose-spec
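A hypothetical sketch of what such a build-time condition might look like, reusing the services from this report (the `service_built` condition name is purely illustrative and is not part of the Compose specification):

```yaml
services:
  base:
    image: neilyio/base
    build:
      context: .
      dockerfile: base.Dockerfile
  extended:
    build:
      context: .
      dockerfile: extended.Dockerfile
    depends_on:
      base:
        # hypothetical condition: build "extended" only after the
        # "base" service's image has been built and tagged locally
        condition: service_built
```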
@thaJeztah thank you for the detailed response. From what I’ve been reading online, it was my conclusion too.

My situation has a more problematic dependency chain: I’m building a `base` image and using that local image in 2 other local images (e.g. `foo` and `bar`), which use `FROM base`. I suppose adopting your approach would mean quite some changes to the file-system structure (all images are currently built from `root/base`, `root/foo`, `root/bar`, etc.). Also, when I looked at this earlier, it led me to `cache-from` and `target` in `docker-compose.yml`, and I was really hoping to avoid all that.

Any idea on ETA / priority for this issue?

Thanks again.
Workarounds depend a bit on your exact use-case.

You can run the builds for each service manually to make sure the base image is built first (`docker compose build base`), but this depends on what “builder” you use; it won’t work if you use a remote or “container” builder (such builders store build-cache, but not images).

The other workaround (this would usually be the recommended approach) is to use a multi-stage build. However, this assumes the situation as outlined in this ticket’s description, where both images share the same build-context.

Rewrite the example to have both services use the same Dockerfile, but a different `target` (stage). The second (`extended`) stage depends on the first (`base`) stage, which means that building `extended` will also build the `base` stage.

It’s worth noting that:
- When building only `extended`, the layer(s) for `base` will be built, but no image (`neilyio/base`) is tagged for the `base` image.
- Generally, building `extended`, then building `base`, should produce the same image (common layers shared between both images), but there may be some improvements to be made in compose here; if both builds run in parallel, they’re not guaranteed to produce the same layer-digests (if changes are made in between within the build-context).
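A sketch of the multi-stage rewrite described above, assuming the same files as in the original report (the merged Dockerfile name and stage names are illustrative):

```dockerfile
# docker/Dockerfile — both former Dockerfiles merged into one, as two stages
FROM scratch AS base
COPY README.md /root/README.md

# "extended" builds on the "base" stage by its stage name, so no
# locally-tagged neilyio/base image needs to exist beforehand
FROM base AS extended
CMD cat /root/README.md
```

```yaml
# docker/docker-compose.yml
services:
  base:
    image: neilyio/base
    build:
      context: .
      dockerfile: Dockerfile
      target: base
  extended:
    build:
      context: .
      dockerfile: Dockerfile
      target: extended
```

With this layout, `docker compose build extended` resolves `base` inside the build itself, so the error from this issue does not occur.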