quarkus: dockerBuild for native image doesn't work with remote Docker daemons
The build fails when trying to create a native binary with dockerBuild set to true against a remote Docker daemon: such a daemon cannot use the bind mounts set up via `docker run -v` in https://github.com/quarkusio/quarkus/blob/fea6ba9709cbfa431706788123b49ef21999fec8/core/creator/src/main/java/io/quarkus/creator/phase/nativeimage/NativeImagePhase.java#L286, because the mounted path has to exist on the daemon's host.
This use case is important when e.g. building against minikube's/minishift's internal Docker daemon (`eval $(minikube docker-env)`), so that an image does not need to be pushed to a registry but can be used directly within minikube.
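To make the failure mode concrete, here is a minimal sketch (the project path is illustrative) of why the bind-mount approach breaks once the docker CLI points at minikube's daemon:

```bash
# Point the local docker CLI at the Docker daemon running inside the minikube VM
eval $(minikube docker-env)

# With a remote daemon, -v refers to paths on the *daemon's* host (the VM), not on
# the developer's machine, so the mounted directory shows up empty inside the
# container - which is exactly what breaks the containerized native image build
docker run --rm -v "$HOME/myproject/target:/project" alpine ls -la /project
```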
For this setup, in Syndesis we avoided bind mounts by running the actual build as part of a `docker build` and then copying the generated binary out of the resulting image by running a container with `cat`.
The build itself looks like this:

```bash
cd $operator_dir
# The build runs entirely inside the Docker daemon, no bind mounts required
docker build -t syndesis-operator-builder . -f Dockerfile-builder
# Extract the binary by cat-ing it out of a throwaway container
docker run syndesis-operator-builder cat /syndesis-operator > syndesis-operator
chmod a+x syndesis-operator
```
This uses the following `Dockerfile-builder`:

```dockerfile
FROM golang:1.11.0
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/github.com/syndesisio/syndesis/install/operator
COPY Gopkg.toml .
COPY Gopkg.lock .
RUN dep ensure -vendor-only -v
COPY . .
RUN CGO_ENABLED=0 go build -o /syndesis-operator ./cmd/syndesis-operator
```
This might not be useful in this context, as it depends on the size of the sources that have to be copied into the image for the build.
As an alternative, we used ssh to copy the sources into the Minishift VM and then used a bind mount within the VM, but the current solution is (a) more generally applicable and (b) more robust.
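For completeness, that ssh-based variant looked roughly like this (the VM user, paths, and the build command are illustrative assumptions about the setup, not exact commands):

```bash
# Copy the sources onto the Minishift VM first (key handling and the 'docker'
# user depend on the VM image; 'minishift ip' gives the VM's address)
scp -r . docker@$(minishift ip):/tmp/operator-src

# Inside the VM, bind mounts work again because the path exists on the host where
# the Docker daemon actually runs (the real build also needs the dep/vendor steps
# from the Dockerfile above)
minishift ssh "docker run --rm -v /tmp/operator-src:/src -w /src golang:1.11.0 \
  go build -o /src/syndesis-operator ./cmd/syndesis-operator"
```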
About this issue
- State: closed
- Created 5 years ago
- Reactions: 4
- Comments: 43 (30 by maintainers)
Building native images using a remote Docker daemon has been implemented and merged for Quarkus 1.13 (PR #14635). Just use the flag `-Dquarkus.native.remote-container-build=true` instead of `-Dquarkus.native.container-build=true`.
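For reference, a typical invocation with that flag might look like this (assuming a Maven-based Quarkus project; how the `native` profile is activated may differ between versions):

```bash
# Runs the native build in a container on the configured (possibly remote) Docker
# daemon without bind mounts, so it also works against minikube's/minishift's daemon
./mvnw package -Pnative -Dquarkus.native.remote-container-build=true
```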
Understood - but I would say that for now you are better off using this manual install, as it actually works now and will continue to do so 😉
Better (semi-)automatic alignment of Quarkus and GraalVM's native-image is something we'll work on, but how that would look is still a bit out in the future.
From the docs I thought that the tarball context could be used to “bundle” the created binary as well.
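For context, the "tarball context" refers to Docker's ability to build from a tar stream instead of a local directory; a minimal sketch:

```bash
# The whole build context is streamed to the daemon, so nothing needs to exist on
# the daemon's host; the binary still has to be extracted from the built image
# afterwards (e.g. with the cat trick above or docker cp)
tar -cz . | docker build -t native-builder -
```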
I’ll hopefully have a look over the weekend. Thanks!
I just came across this issue as well when starting a containerized native image build from within a Remote Containers development environment in Visual Studio Code. I can get the containerized build to work using `docker cp` in combination with a named container (via `docker create` and `docker start`) instead of running an anonymous container with `docker run`, as the native image build step currently does.

As far as I could see, this is sufficient in all cases - or did I miss something? It would be fairly straightforward to integrate this alternative way of running a containerized native image build into the native image build step, controlled by an application property as already suggested above by @PieterjanDeconinck.
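A rough sketch of that sequence (image name, paths, and native-image arguments are illustrative, assuming the builder image's working directory is /project as in the bind-mount variant):

```bash
# Create (but do not start) a named container for the native-image run
CONTAINER=$(docker create quay.io/quarkus/ubi-quarkus-native-image:latest \
  -jar my-app-runner.jar my-app-runner)

# Copy the build inputs into the container instead of bind-mounting them
docker cp target/. "$CONTAINER":/project

# Run the build and stream its output
docker start -a "$CONTAINER"

# Copy the produced binary back out, then clean up
docker cp "$CONTAINER":/project/my-app-runner target/
docker rm "$CONTAINER"
```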
WDYT?
Sorry for being silent for a couple of days. I had too many meetings, and Monday was also my birthday, so there was more family stuff that day.
I will get back to this and try to build a standalone sample project with instructions that can be used to reproduce the issue, then go over it and see where we can improve things for end users.
I was hit by this today as well. Especially for macOS users, you don't want a native build that ends up with a native macOS binary which you can't run in k8s.

I think this is really important for developer joy with Quarkus and native builds for k8s. Please prioritize and work on this.
Well, not really - it's about getting access to the created binary. I think the most elegant way would be to use multistage Docker builds (i.e. running the native build and creating the final image in one `docker build`) instead of storing the (Linux) binary on your local FS as an intermediate step.
Unfortunately Minishift's Docker daemon is too old to support multistage builds, but if we stick to minikube or a more modern Docker daemon, that's by far the best solution (not only for remote Docker daemons but in general).
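To make that concrete with the Syndesis operator example from above, a multistage variant could look roughly like this (the `scratch` base image and the entrypoint are illustrative):

```dockerfile
# Stage 1: identical to the Dockerfile-builder above
FROM golang:1.11.0 AS builder
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/github.com/syndesisio/syndesis/install/operator
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure -vendor-only -v
COPY . .
RUN CGO_ENABLED=0 go build -o /syndesis-operator ./cmd/syndesis-operator

# Stage 2: only the binary ends up in the runtime image - no cat trick and no
# intermediate copy to the local filesystem
FROM scratch
COPY --from=builder /syndesis-operator /syndesis-operator
ENTRYPOINT ["/syndesis-operator"]
```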
LOL, although I’m a Spring Boot guy 😛
Ah yes, there is also a relevant Stack Overflow question here.
So the idea is to be able to seamlessly use the Docker daemon of Minishift / Minikube to do the build… I would think it's definitely doable, but I would like more details from @rhuss 😃