podman: podman build fails if dnf inside Dockerfile
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When running podman build, the process fails because the container OS can't resolve a repository URL. The host machine can resolve the same URL with no problem.
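A quick way to confirm the discrepancy described here, assuming getent is available in the image; the mirror hostname and base image are placeholders, not taken from the report:
# on the host: name resolution works
getent hosts mirror.stream.centos.org
# from a container based on the same image: resolution fails / times out
podman run --rm quay.io/centos/centos:stream9 getent hosts mirror.stream.centos.org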
Steps to reproduce the issue:
- Run podman build with a Dockerfile that contains a dnf command (a minimal reproducer is sketched after the error output below).
- The build fails with:
Errors during downloading metadata for repository '[...]':
- Curl error (28): Timeout was reached for [...]repomd.xml [Resolving timed out after 30000 milliseconds]
Error: Failed to download metadata for repo '[...]': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Error: error building at STEP "RUN dnf upgrade -y": error while running runtime: exit status 1
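For reference, a minimal reproducer along these lines triggers the failure; the base image and tag are placeholders, since the original Containerfile is not shown in the report:
# Containerfile (hypothetical minimal example)
FROM quay.io/centos/centos:stream9
RUN dnf upgrade -y
# build it rootless, as in the report
podman build -t dnf-test .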
Describe the results you expected: the build should finish without errors; dnf inside the container should be able to resolve and reach the repository mirrors.
Output of podman version:
Client: Podman Engine
Version: 4.0.2
API Version: 4.0.2
Go Version: go1.17.5
Built: Mon Mar 14 16:13:09 2022
OS/Arch: linux/amd64
Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-1.el9.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: 8ef5de138efb6f0aad657082cdea22cf037792cb'
  cpus: 64
  distribution:
    distribution: '"centos"'
    version: "9"
  eventLogger: file
  hostname: r640-u09.dev2.kni.lab.eng.bos.redhat.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.14.0-71.el9.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 296232177664
  memTotal: 403758133248
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.4.3-1.el9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.3
      commit: 61c9600d1335127eba65632731e2d72bc3f0b9e8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /tmp/podman-run-1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-4.el9.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 145h 57m 10.65s (Approximately 6.04 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
  - docker.io
store:
  configFile: /home/metalhead/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/metalhead/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /tmp/podman-run-1001/containers
  volumePath: /home/metalhead/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1647270789
  BuiltTime: Mon Mar 14 16:13:09 2022
  GitCommit: ""
  GoVersion: go1.17.5
  OsArch: linux/amd64
  Version: 4.0.2
Package info (e.g. output of rpm -q podman or apt list podman):
podman-4.0.2-2.el9.x86_64
Have you tested with the latest version of Podman? No
Have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md) Yes
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 54 (18 by maintainers)
Commits related to this issue
- Use host network when building images In case the host where dev-scripts is running has only localhost as resolver, the dns resolution during build fails as it's removed from /etc/resolv.conf since i... — committed to elfosardo/dev-scripts by elfosardo 2 years ago
- use resolvconf package from c/common/libnetwork Podman and Buildah should use the same code the generate the resolv.conf file. This mostly moved the podman code into c/common and created a better API... — committed to Luap99/libpod by Luap99 2 years ago
- use resolvconf package from c/common/libnetwork Podman and Buildah should use the same code the generate the resolv.conf file. This mostly moved the podman code into c/common and created a better API... — committed to vrothberg/libpod by Luap99 2 years ago
- Use host network when building images (#1368) In case the host where dev-scripts is running has only localhost as resolver, the dns resolution during build fails as it's removed from /etc/resolv.conf... — committed to openshift-metal3/dev-scripts by elfosardo 2 years ago
- use resolvconf package from c/common/libnetwork Podman and Buildah should use the same code the generate the resolv.conf file. This mostly moved the podman code into c/common and created a better API... — committed to karthikelango137/podman by Luap99 2 years ago
- use resolvconf package from c/common/libnetwork Podman and Buildah should use the same code the generate the resolv.conf file. This mostly moved the podman code into c/common and created a better API... — committed to gbraad-redhat/podman by Luap99 2 years ago
OK, in this case our behaviour is correct and you have a configuration problem. Because it is not possible to reach 127.0.0.1 from the container, we remove it from /etc/resolv.conf. That leaves you with no DNS servers, so we add the default Google servers.
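For illustration, this is the situation being described: the host's /etc/resolv.conf (contents hypothetical) lists only a loopback resolver, which a rootless build container cannot reach, so Podman strips it and falls back to the Google defaults:
# on the host
cat /etc/resolv.conf
nameserver 127.0.0.1
# what the build container ends up with instead, e.g.:
nameserver 8.8.8.8
nameserver 8.8.4.4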
Possible solutions:
What DNS server do you have running on localhost? Is there some way for Podman to read the actual upstream resolvers (we already have such a workaround for systemd-resolved)?
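Two workarounds that fit this situation (the first is what the dev-scripts commits linked above adopted); the image tag and resolver address below are placeholders:
# share the host's network namespace during the build, so 127.0.0.1 stays reachable
podman build --network=host -t myimage .
# or hand the build a resolver the container can actually reach
podman build --dns=192.0.2.53 -t myimage .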
Hi @elfosardo, I can't see your entire container file or setup, so it is hard to tell. But could you try a different base image to check whether networking works at all, for example ubuntu:latest with an apt-get update inside the container? If networking works fine for other base images, the issue is most likely with your container's dnf config.
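A quick sketch of that test, with hypothetical file and tag names:
# Containerfile.nettest
FROM ubuntu:latest
RUN apt-get update
# build it the same way as the failing image
podman build -f Containerfile.nettest -t net-test .
If this build succeeds, container networking itself is fine and the problem is specific to the dnf/repository configuration in the original image.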