drone-docker: Drone doesn't seem to cache layers
I’m using this setup but I still get no cache performance increase. I do see the image being loaded and saved, but the build doesn’t seem to use the cached layers. Any ideas? A normal build does use the layer cache.
Is there a specific reason why we don’t just mount the Docker socket in the container and use the host’s existing Docker daemon for building?
```yaml
build:
  image: alpine:3.2
publish:
  docker:
    registry: XX
    username: XX
    password: $$REGISTRY_PASSWORD
    email: kevin.bacon@mail.com
    repo: XX
    load: docker/image.tar
    save:
      destination: docker/image.tar
      tag: latest
notify:
  slack:
    webhook_url: XX
    channel: update
    username: drone
cache:
  mount:
    - docker/image.tar
```
About this issue
- State: closed
- Created 8 years ago
- Comments: 27 (5 by maintainers)
@bradrydzewski @leshik
I might be missing something, but the reason loading a cached tar as an image isn’t working is that `--pull=true` is always part of the build command. If I knew any Go I would create a pull request that accepted `no_pull` or something like that in the drone-docker YAML args; unfortunately I don’t.

/usr/bin/docker build --pull=true --rm=true -f Dockerfile.build -t org/project:dev .

Help? 😄
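The difference is easier to see side by side. The second command below is a sketch of what a hypothetical `no_pull` option might run instead; the flag values and image names are assumptions, not actual plugin output:

```shell
# What drone-docker currently runs (always re-pulls the base image):
/usr/bin/docker build --pull=true --rm=true -f Dockerfile.build -t org/project:dev .

# Sketch of what a no_pull option could run instead, so layers loaded
# from docker/image.tar have a chance of matching the build cache:
/usr/bin/docker build --pull=false --rm=true -f Dockerfile.build -t org/project:dev .
```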
OK, so this is what I needed to get this puppy running:
Not very hard as you can see 😃
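(The snippet this comment referred to didn’t survive extraction. Based on the host-socket workaround discussed in this thread, a minimal sketch in the 0.4-era syntax might look like the following; the image tag, repo name, and build command are illustrative, not the commenter’s actual config:)

```yaml
build:
  # Any image that ships the docker CLI; the tag is an assumption.
  image: docker:1.9
  volumes:
    # Reuse the host daemon, and with it the host's layer cache.
    - /var/run/docker.sock:/var/run/docker.sock
  commands:
    - docker build --rm=true -t org/project:dev .
```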
In the newest syntax, something like the following worked for me:
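(The example itself is missing here. A sketch of the same socket-mount workaround in the newer `pipeline` syntax could look like this; the image tag and repo name are assumptions:)

```yaml
pipeline:
  build:
    image: docker:17.05
    volumes:
      # Mount the host daemon's socket; requires the repo to be "trusted".
      - /var/run/docker.sock:/var/run/docker.sock
    commands:
      - docker build --rm=true -t org/project:dev .
```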
@emilebosch, @chuyskywalker use the official Docker-in-Docker image. It is based on Alpine Linux.
@chuyskywalker there is an open PR to improve this at https://github.com/drone-plugins/drone-docker/pull/36
The current solution works great for certain languages, such as Go, Rust, and others that compile binaries and don’t need layer caching. Drone is optimized for these languages because I spend most of my time writing Go code, and that bias is reflected in the initial design. I fully acknowledge that for other languages (Python, PHP, Ruby) we still have a lot of work to do.
Note that I described this as the initial design. Plugins were just introduced in this latest version of Drone, and this is by no means the final implementation. If you have suggestions for improvement, we are definitely open to them. Also remember that you can write your own plugins, incubate them, and suggest they be included in the official plugin list.
Lastly, there are situations, even for private installations, where you don’t want the host machine’s Docker daemon exposed. Some users of Drone operate in highly regulated environments and want their build servers locked down. Some teams run 10+ builds per server and have run into race conditions (tagging and pushing images) when running multiple builds concurrently for the same repository.
We just need to take the above into consideration as we move forward. There are a lot of different use cases we need to consider. And, the good news is, we can have multiple docker plugins for different use cases if we need to optimize for specific languages or workflows.
I also hit this same issue where the Docker daemon (in the build container) simply refuses to use the imported cached layers. I also went the route of host socket mounting. I totally understand that, for a publicly hosted instance (drone.io, etc.), this is a non-starter for a lot of reasons, but for privately run Drones this feels a lot saner to me: better cache usage, and a less “weird” setup. It’d be much appreciated if the Docker plugin could support this, but the workaround above leads down a working path.
Don’t forget to mark the repo as “trusted” in Drone so volume mounts are allowed. Beyond that, I ended up doing this:
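(The config this comment pointed at is gone. Given the 0.4-era `$$SECRET` substitution shown earlier in the thread, a sketch of keeping credentials out of the YAML might look like this; the registry, repo, and secret names are assumptions:)

```yaml
publish:
  docker:
    registry: registry.example.com   # hypothetical registry
    username: $$REGISTRY_USERNAME    # injected from Drone secrets,
    password: $$REGISTRY_PASSWORD    # so the values never appear in
    email: $$REGISTRY_EMAIL          # the YAML or the build log
    repo: org/project
```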
So that user/pass/email don’t leak through the build log.
@lewistaylor I came across the exact same issue, and I’ve made a PR that lets you set it as an option, if you want to try it out!
@budjizu What you described is not “Docker in Docker”: it’s using the docker cli command to “call out” to a daemon back on the host. This is, essentially, exactly what I and others have done.
“Docker in Docker”, which Drone does by default, is actually running a Docker daemon inside your container. That new daemon is where we’re having trouble getting layers cached. This is an issue for us because, without layer caching, many Docker build flows are not tenable. (As to why layer caching doesn’t work with the `load`/`save` trick documented elsewhere, I don’t know…)