buildah: Podman build is very slow compared to docker
Description
I’ve previously experienced slowness using podman build with images that have labels (https://github.com/containers/buildah/issues/1764). But now it seems building any image is very slow.
In case it’s relevant, I’m also experiencing an “image not known” error that could be affecting buildah: https://github.com/containers/libpod/issues/3982
Normally I’d wipe ~/.local/share/containers/storage and compare the results with a clean setup but I didn’t want to do that this time in case there’s something in there that’s needed for the other issue.
Steps to reproduce the issue:
Here’s my setup:
echo 'FROM registry.access.redhat.com/rhoar-nodejs/nodejs-8
WORKDIR /usr/src/app
USER root
RUN chgrp -R 0 . && \
chmod -R g=u .
USER 1001
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]' > Dockerfile
echo "{}" > package-lock.json
echo '{"lockfileVersion": 1}' > package-lock.json
$ time docker build --no-cache -q .
sha256:87580070c28511b922f556de0924ec92b8867fe6183149ebf19b4a1bdcdaa34c
real 0m14.917s
user 0m0.031s
sys 0m0.021s
$ time docker build -q .
sha256:87580070c28511b922f556de0924ec92b8867fe6183149ebf19b4a1bdcdaa34c
real 0m0.147s
user 0m0.027s
sys 0m0.020s
$ time podman build --no-cache -q .
02b95eb8d13e124c46cacea3fe59a66da70cb81a3ec91e80be2ddc9cbf5eac0d
1366916962926f8d7f92ab183f8ba567af7855703fcc4b068affd0112f3fcc6c
9c1929bc809d14e6db188ccbde91c2d9420c4792813b944b37f29dc0a209c235
9b393f6380618ab2fbcc5c4ce1315b46b55a656dd23ece360c8cf1360a902917
53fa68f07a2bbf1b287a594a5199ee4c04d65915cb36e68018321bf339fd4dd9
npm info it worked if it ends with ok
npm info using npm@6.4.1
npm info using node@v8.16.1
npm info prepare initializing installer
npm info prepare Done in 0.023s
npm info extractTree Done in 0.003s
npm info updateJson Done in 0.001s
npm info lifecycle undefined@undefined~preinstall: undefined@undefined
npm info lifecycle undefined@undefined~install: undefined@undefined
npm info lifecycle undefined@undefined~postinstall: undefined@undefined
npm info buildTree Done in 0.002s
npm info garbageCollect Done in 0s
npm info lifecycle undefined@undefined~prepublish: undefined@undefined
npm info runScript Done in 0.001s
npm info lifecycle undefined@undefined~prepare: undefined@undefined
npm info runScript Done in 0s
npm info teardown Done in 0s
npm info run-scripts total script time: 0.003s
npm info run-time total run time: 0.032s
added 0 packages in 0.032s
npm timing npm Completed in 649ms
npm info ok
4c72ca9afede91c9b0e90b0ee33bd888455903281e7c01aed2ec4dd53f581122
cae9f65c8d7ed74c4ae4b79a3f5e85f2ad5cecf6f772780981fc78353aa974df
616ee7f4c6f7422c48ea0673866223a4e8b41714468c3e6bc05b698f4e1f17a3
4e13c4def0df9885e364b140acad223623b91e920ddd18cd2a694779b7d4f37e
real 4m51.123s
user 0m11.320s
sys 0m12.268s
$ time podman build -q .
--> Using cache 9b574591fcab120dfdcf52ecbb6354661605ea19beaadbc37579205073145673
44d5e0c652e668c577cd84e54ee10857709b9911e1374e342bf99b498ed046f0
180f83a3a2c07a53116e6a22a48262381c8178c88f565a855f85067142c3945a
fd3fb6a0fe00f1f536a33c4f4b2d4a42b4c6011bef5c2f6753d3562ac061c4cc
37d7e86ef305e94ab639ef6a9f50bacf828983cbce46bb171053f48fe459fee9
npm info it worked if it ends with ok
npm info using npm@6.4.1
npm info using node@v8.16.1
npm info prepare initializing installer
npm info prepare Done in 0.033s
npm info extractTree Done in 0.003s
npm info updateJson Done in 0.001s
npm info lifecycle undefined@undefined~preinstall: undefined@undefined
npm info lifecycle undefined@undefined~install: undefined@undefined
npm info lifecycle undefined@undefined~postinstall: undefined@undefined
npm info buildTree Done in 0.003s
npm info garbageCollect Done in 0s
npm info lifecycle undefined@undefined~prepublish: undefined@undefined
npm info runScript Done in 0.001s
npm info lifecycle undefined@undefined~prepare: undefined@undefined
npm info runScript Done in 0s
npm info teardown Done in 0s
npm info run-scripts total script time: 0.003s
npm info run-time total run time: 0.042s
added 0 packages in 0.042s
npm timing npm Completed in 741ms
npm info ok
1c8685182192308e517d085fbdbb5f2e6e83bb6cef91b453c4dac323741f5f8d
9740a1d490dd06edc25cd52a3fd992c7098810d97b5bc2a0a3799e417ea9ea72
f699ec3e38ff680986876008275d81a3ec7adb225a6af62e255e28485cda4c7a
e0052b5acc355383435520e6f5bed8f77097da9ff3c3b61c587f9fb56c245569
real 4m35.198s
user 0m10.698s
sys 0m12.506s
$ time podman build -q .
--> Using cache 9b574591fcab120dfdcf52ecbb6354661605ea19beaadbc37579205073145673
--> Using cache 44d5e0c652e668c577cd84e54ee10857709b9911e1374e342bf99b498ed046f0
--> Using cache 180f83a3a2c07a53116e6a22a48262381c8178c88f565a855f85067142c3945a
--> Using cache fd3fb6a0fe00f1f536a33c4f4b2d4a42b4c6011bef5c2f6753d3562ac061c4cc
--> Using cache 37d7e86ef305e94ab639ef6a9f50bacf828983cbce46bb171053f48fe459fee9
--> Using cache 1c8685182192308e517d085fbdbb5f2e6e83bb6cef91b453c4dac323741f5f8d
--> Using cache 9740a1d490dd06edc25cd52a3fd992c7098810d97b5bc2a0a3799e417ea9ea72
--> Using cache f699ec3e38ff680986876008275d81a3ec7adb225a6af62e255e28485cda4c7a
--> Using cache e0052b5acc355383435520e6f5bed8f77097da9ff3c3b61c587f9fb56c245569
real 2m58.452s
user 0m4.189s
sys 0m8.889s
Describe the results you received:
podman build builds images much more slowly than docker, with or without a cache. I also saw high CPU and I/O usage during the builds.
Describe the results you expected:
I would’ve expected results comparable to docker.
I also found it interesting that the first time I ran podman build without --no-cache, it only seemed to use the cache for one layer. I had to run it a second time for it to use the cache for all layers. Docker doesn’t seem to exhibit this behaviour.
Output of rpm -q buildah or apt list buildah:
$ apt list buildah
Listing... Done
buildah/bionic,now 1.10.1-1~ubuntu18.04~ppa1 amd64 [installed]
Output of buildah version:
Version: 1.10.1
Go Version: go1.10.4
Image Spec: 1.0.1
Runtime Spec: 1.0.1-dev
CNI Spec: 0.4.0
libcni Version:
Git Commit:
Built: Thu Aug 8 16:29:48 2019
OS/Arch: linux/amd64
Output of podman version if reporting a podman build issue:
Version: 1.5.1
RemoteAPI Version: 1
Go Version: go1.10.4
OS/Arch: linux/amd64
Output of cat /etc/*release:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Output of uname -a:
Linux host 5.0.0-27-generic #28~18.04.1-Ubuntu SMP Thu Aug 22 03:00:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Output of cat /etc/containers/storage.conf:
# storage.conf is the configuration file for all tools
# that share the containers/storage libraries
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver
driver = "overlay"
# Temporary storage location
runroot = "/var/run/containers/storage"
# Primary read-write location of container storage
graphroot = "/var/lib/containers/storage"
[storage.options]
# AdditionalImageStores is used to pass paths to additional read-only image stores
# Must be comma separated list.
additionalimagestores = [
]
# Size is used to set a maximum size of the container image. Only supported by
# certain container storage drivers (currently overlay, zfs, vfs, btrfs)
size = ""
# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"
Thanks!
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 27 (12 by maintainers)
Commits related to this issue
- Reenable docker role due to multiple issues with podman https://github.com/containers/libpod/issues/3304 https://github.com/containers/libpod/issues/3982 https://github.com/containers/buildah/issues/... — committed to bmaupin/configs by bmaupin 5 years ago
- remove .dockerignore file This hasn't brought a lot of benefit; and by switching to bundles, we need to include the deploy/directory anyway. Also, from https://github.com/containers/buildah/issues/1... — committed to JAORMX/compliance-operator by JAORMX 4 years ago
- remove .dockerignore file This hasn't brought a lot of benefit; and by switching to bundles, we need to include the deploy/directory anyway. Also, from https://github.com/containers/buildah/issues/1... — committed to jhrozek/compliance-operator-1 by JAORMX 4 years ago
You are using vfs.
GraphDriverName: vfs
To get to fuse-overlayfs, you need to remove all of your storage and config files and start again:
$ sudo dnf install -y fuse-overlayfs
$ rm -rf ~/.config/containers ~/.local/share/containers
$ podman info
Now you should be using fuse-overlayfs.
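To double-check what rootless podman ended up with after the reset, and to point it at fuse-overlayfs explicitly if it still falls back to vfs, something along these lines should do it. Treat it as a sketch: the section that mount_program lives under has moved between containers/storage versions, so adjust to match your file.
$ podman info | grep -i graphdrivername
$ grep -i mount_program ~/.config/containers/storage.conf
# if nothing is set, add this to ~/.config/containers/storage.conf:
[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"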
We know we are slow with huge COPY commands when a .dockerignore file is present. @nalind is working on a full rewrite of this code.
podman 1.4.4 is an ancient version of podman. Podman 1.6 should be available on RHEL7 at this point. podman 1.9.* should be the next release on RHEL8 coming out this summer.
It didn’t handle .dockerignore correctly, mainly, and the fix for that was bolted on in a way that slowed things down considerably. That reworking landed in 1.16, though. Extracting layer contents when using overlay should have improved when https://github.com/containers/storage/pull/631 was merged.
Well, we know we have issues with COPY and ADD, which we are working on. Other than that, we have lots of strategies to go way faster than Docker for certain workloads.
podman and buildah are both generally slower at builds at every stage of the build process, AFAICT. I don’t have the time to wait for those builds. 😦 If comparable-or-greater speed is a priority, show us your benchmarks vs. docker on a variety of popular containers using standard hardware like an AWS m5.xlarge. I’m giving the benefit of the doubt that an optimization pass is planned but just hasn’t happened yet, since it’s still really early.
I think that this is related to the fact that the COPY instruction, when a .dockerignore file is present, copies the files one by one by using the following snippet. In my case, with a node_modules directory, it takes ages because it copies the files one by one.
I’m looking for a solution to this problem; ideally we should tar the files by using an exclusion policy (like tar --exclude) instead of excluding the files, creating a file list, and tar’ing the files one by one.
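For what it’s worth, a rough shell sketch of that idea (one tar pass over the build context with excludes, instead of per-file copies) would look something like the lines below. This is only an illustration: the destination directory is made up, and .dockerignore pattern syntax isn’t identical to tar’s exclude syntax, so a real fix would have to translate the patterns.
$ mkdir -p /tmp/context-copy    # hypothetical destination for the filtered context
$ tar -C . --exclude-from=.dockerignore -cf - . | tar -xf - -C /tmp/context-copy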