quarkus: dockerBuild for native image doesn't work with remote Docker daemons

The build fails when trying to create a native binary with dockerBuild true while accessing a remote Docker daemon, which does not allow bind mounts via docker run -v as is done in https://github.com/quarkusio/quarkus/blob/fea6ba9709cbfa431706788123b49ef21999fec8/core/creator/src/main/java/io/quarkus/creator/phase/nativeimage/NativeImagePhase.java#L286
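
To illustrate the failure mode, here is a simplified sketch of what the linked code effectively runs (the image name and paths are illustrative, not the exact invocation):

eval $(minikube docker-env)   # DOCKER_HOST now points at the daemon inside the minikube VM
docker run -v /home/user/myapp/target:/project \
    quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 <native-image-args>
# the -v source path is resolved on the daemon's host (the VM), not on the machine
# running the build, so /project inside the container ends up missing or empty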

This use case is important when e.g. building against minikube/minishift’s internal Docker daemon (eval $(minikube docker-env)), so that an image does not need to be pushed to a registry but can be used directly within minikube.

For this setup, in Syndesis we avoided bind mounts and instead used a combination of running the actual build during a Docker build and then copying the generated binary out of the created image by running a container with cat:

The actual build looks like this:

cd $operator_dir
# build an image that compiles the operator binary during the Docker build
docker build -t syndesis-operator-builder . -f Dockerfile-builder
# run a throwaway container just to cat the binary out of the image
docker run syndesis-operator-builder cat /syndesis-operator > syndesis-operator
chmod a+x syndesis-operator

with this Dockerfile:

FROM golang:1.11.0
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/github.com/syndesisio/syndesis/install/operator
# copy the dependency manifests first so the dep cache survives source changes
COPY Gopkg.toml .
COPY Gopkg.lock .
RUN dep ensure -vendor-only -v
COPY . .
# build a statically linked binary (CGO disabled) that can be copied out and run anywhere
RUN CGO_ENABLED=0 go build -o /syndesis-operator ./cmd/syndesis-operator

This might not be useful in this context, as it depends on the size of the sources that need to be copied over into the image during the build.

As an alternative, we used ssh to copy over the sources into the Minishift VM and then used a bind mount within the VM, but the current solution is (a) more generally applicable and (b) also more robust.
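
For reference, that ssh-based variant looked roughly like this (an illustrative sketch, not the exact Syndesis setup; the paths and ssh access to the VM are assumptions):

# copy the sources into the VM so they exist on the daemon's host
tar cf - . | ssh docker@$(minishift ip) 'mkdir -p /tmp/build-src && tar xf - -C /tmp/build-src'
# now a bind mount works, because /tmp/build-src exists where the daemon runs
ssh docker@$(minishift ip) 'docker run -v /tmp/build-src:/project <builder-image> <build-command>'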

Most upvoted comments

Building native images using a remote docker daemon has been implemented and merged for Quarkus 1.13 (PR #14635). Just use the flag -Dquarkus.native.remote-container-build=true instead of -Dquarkus.native.container-build=true.
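
For example, with a Maven project generated by Quarkus (assuming the standard native profile; adjust for Gradle accordingly):

./mvnw package -Pnative -Dquarkus.native.remote-container-build=true

This avoids the bind mount, so it also works when the daemon runs on another host.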

Understood - but I would say for now you are better off using this manual install, as it actually works now and will continue to do so 😉

Better (semi-)automatic alignment of Quarkus and GraalVM native-image is something we’ll work on, but how this would work is still a bit out in the future.

Well, not really; it’s about getting access to the created binary. I think the most elegant way would be to use multi-stage Docker builds (i.e. running the native build and creating the final image with one ‘docker build’) instead of storing the (Linux) binary on your local FS as an intermediate step.
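
A rough sketch of what such a multi-stage build could look like (the builder and runtime images, paths, and artifact names are illustrative assumptions):

# stage 1: run the Maven/native-image build inside a builder image
FROM quay.io/quarkus/centos-quarkus-maven:20.1.0-java11 AS build
COPY --chown=quarkus:quarkus . /usr/src/app
WORKDIR /usr/src/app
USER quarkus
RUN mvn package -Pnative -DskipTests

# stage 2: copy only the produced binary into a minimal runtime image
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=build /usr/src/app/target/*-runner /work/application
CMD ["/work/application"]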

From the docs I thought that the tarball context could be used to “bundle” the created binary as well.
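
Indeed, docker build accepts a tar stream on stdin as the build context, so a sketch of that idea could look like this (file and image names are illustrative):

# bundle a Dockerfile plus the already-built binary into a tar context and
# stream it to the (possibly remote) daemon; no bind mount needed
tar cz Dockerfile target/my-app-runner | docker build -t my-app -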

Unfortunately Minishift’s Docker daemon is too old to support multi-stage builds, but if we stick to minikube or a more modern Docker daemon, that’s by far the best solution (not only for remote Docker daemons but in general).

I’ll hopefully have a look over the weekend. Thanks!

I just came across this issue as well when starting a containerized native image build from within a Remote Containers development environment in Visual Studio Code. I can get the containerized build to work by using docker cp together with a named container (created via docker create and started via docker start) instead of running an anonymous container with docker run, as the native image build step currently does:

# create the native image build container but don't start it yet (all the same arguments as for docker run except for removing the volume mount and additionally specifying a container name)
docker create --name native-image-container --env LANG=C --user 0:0 quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 <ALL_THE_FANCY_ARGS> quarkus-app-1.0-SNAPSHOT-runner

# copy the native image build sources to /project in the native image build container instead of mounting the volume (this creates an anonymous volume containing the native image build sources and mounts it to /project)
docker cp build/quarkus-app-1.0-SNAPSHOT-native-image-source-jar/. native-image-container:/project

# start the native image build by starting the prepared container, attach to it in order to get the output and wait until the build is finished
docker start --attach native-image-container

# copy the native image back from the build container into the native image build sources folder
docker cp native-image-container:/project/quarkus-app-1.0-SNAPSHOT-runner build

# remove the native image container and its associated anonymous volume
docker container rm --volumes native-image-container

As far as I could see, this is sufficient in all cases - or did I miss something? It would be fairly straightforward to integrate this alternative way of running a containerized native image build into the native image build step; it could be controlled by an application property, as already suggested above by @PieterjanDeconinck.

WDYT?

Sorry for being silent for a couple of days. I had too many meetings, and Monday was also my birthday, so there was more family stuff that day.

I will get back to this and try to build a standalone sample project with instructions that can be used to reproduce the issue, then go over it and see where we can improve things for end users.

Was hit by this today as well. Especially for macOS users, you don’t want to do a native build that ends up with a native macOS binary which you can’t run in k8s.

I think this is really important for developer joy with Quarkus and native builds for k8s. Please prioritize and work on this.


@geoand: Thanks for the super fast response (well, I think it’s a prerequisite for being a Quarkus guy 😉)

LOL, although I’m a Spring Boot guy 😛

We have added a Quarkus generator and a Quarkus health check enricher in fabric8io/fabric8-maven-plugin#1577, so I was just testing this feature, but I faced issues during the native build, as it requires the local Docker daemon to be exposed.

If I build with minikube’s Docker daemon exposed, it fails with this error: https://pastebin.com/QYc3PViX

Ah yes, there is also a relevant Stack Overflow question here.

So the idea is to be able to seamlessly use the Docker daemon of Minishift / Minikube to do the build… I would think it’s definitely doable, but I would like more details from @rhuss 😃