buildx: docker buildx runs yum install very slowly, while docker build is fine

Building with the commands below, docker build takes about 3 minutes to finish, while docker buildx build takes about 3 hours.

sudo docker build -f centos7.Dockerfile .
sudo docker buildx build -f centos7.Dockerfile --target=artifact --output type=local,dest=$(pwd)/rpms/ .

I thought docker buildx would use the docker build caches, but it doesn’t. This is unexpected.

centos7.Dockerfile

FROM centos:centos7 as build
LABEL maintainer="opsdev@qunar.com"

RUN yum install -y yum-utils rpm-build redhat-rpm-config make gcc git vi tar unzip rpmlint wget curl \
    && yum clean all

# Install golang
RUN PKG_VERSION="1.15.1" PKG_NAME="go$PKG_VERSION.linux-amd64.tar.gz" \
    && wget https://dl.google.com/go/$PKG_NAME \
    && tar -zxvf $PKG_NAME -C /usr/local \
    && rm -rf $PKG_NAME

ENV GOROOT=/usr/local/go
ENV GOPATH=/home/q/go
ENV PATH=$PATH:/usr/local/go/bin:/home/q/go/bin
ENV GOPROXY=https://goproxy.io

# RUN useradd q -u 5002 -g users -p q
# USER q
ENV HOME /home/q
WORKDIR /home/q
RUN mkdir -p /home/q/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
RUN echo '%_topdir %{getenv:HOME}/rpmbuild' > /home/q/.rpmmacros

COPY spec/q-agentd.spec q-agentd.spec
COPY scripts/q-agentd.service /home/q/rpmbuild/SOURCES/q-agentd.service

RUN yum-builddep -y q-agentd.spec \
    && rpmbuild -bb q-agentd.spec

FROM scratch as artifact
COPY --from=build /home/q/rpmbuild/RPMS/x86_64/*.rpm /

FROM build as release

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 19
  • Comments: 17

Most upvoted comments

Thanks @slimm609!

I’ve worked around it by adding ulimit -n 1024000 just before yum install ... in the Dockerfile. At least in my case, it’s more convenient than changing the docker daemon service configuration.

Example:

RUN ulimit -n 1024000 && yum -y install flex bison make gcc findutils openssl-devel bc diffutils elfutils-devel perl vim openssl dwarves

Hope this helps someone.

I recently ran into this issue as well.

Check the systemd service for docker.

It was set to infinity, which sets the ulimit to 1073741816, but for some reason this seems to be causing the problem.

LimitNOFILE=infinity

When I changed the nofile limit to 1024000, it resolved the problem and builds work like a normal docker build.

LimitNOFILE=1024000
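
If the daemon runs under systemd, you can also inspect the value the unit is currently configured with (a quick check, assuming the service is named docker):

systemctl show docker --property=LimitNOFILE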

You can check the ulimit with a build:

FROM alpine:latest
RUN echo "ulimit is $(ulimit -n)"

It is also possible to just set ulimit via the command line, for example:

docker build --ulimit nofile=1024000:1024000 .

This way, the Dockerfile doesn’t need to be changed.
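
Recent buildx versions appear to accept the same option, so the equivalent for the buildx case would be something like the following (worth confirming against docker buildx build --help for your version):

docker buildx build --ulimit nofile=1024000:1024000 -f centos7.Dockerfile --target=artifact --output type=local,dest=$(pwd)/rpms/ .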

Would it be possible for you to provide a link to the issue (i.e. the Go 1.19 regression) you mentioned, so that the status can easily be tracked?

Here you go (provides links to the relevant tracking issues), but it has been resolved since the Go 1.19.9 and Go 1.20.4 releases. These are available with the Docker Engine 23.0.6 and 24.0.0 releases; for containerd it’s releases 1.6.21 and 1.7.1.

Neither the Docker Engine (moby) nor the containerd project has accepted PRs for setting LimitNOFILE=1024:524288 instead of LimitNOFILE=infinity yet. You’d still need to modify that manually (a drop-in override, sketched below, would avoid losing the change between updates).
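
For reference, a minimal sketch of such a drop-in, assuming systemd and a service named docker (the file name is arbitrary; the value here is the one proposed in those PRs):

sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nLimitNOFILE=1024:524288\n' | sudo tee /etc/systemd/system/docker.service.d/limit-nofile.conf
sudo systemctl daemon-reload
sudo systemctl restart docker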

I’ve not confirmed whether that resolves the docker buildx bake issue; I assume it does, since moby bundles buildx now?

Same issue, any update?