podman: fresh install runc not working
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
**Description**
Fresh install of RHEL 8.6, with SELinux disabled until I can get podman working. I can't run any containers: the containers do get created, but then do not run.
**Steps to reproduce the issue:**

```bash
$ podman run -it --log-level=debug --net=host k8s.gcr.io/busybox sh
DEBU[0000] ExitCode msg: "runc: time=\"2022-08-23t17:33:12-04:00\" level=fatal msg=\"nsexec[15496]: could not ensure we are a cloned binary: operation not permitted\"\ntime=\"2022-08-23t17:33:12-04:00\" level=error msg=\"runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer\": oci permission denied"
Error: runc: time="2022-08-23T17:33:12-04:00" level=fatal msg="nsexec[15496]: could not ensure we are a cloned binary: Operation not permitted"
time="2022-08-23T17:33:12-04:00" level=error msg="runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer": OCI permission denied
```
Simultaneously, I'm monitoring with `sudo journalctl --follow`:
```bash
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Initializing event backend file"
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument"
Aug 23 17:33:12 COMPNAME /usr/bin/podman[15498]: time="2022-08-23T17:33:12-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
```
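(Aside: the debug lines above show podman probing every runtime listed in containers.conf — kata, runsc, krun, crun — before falling back to `/usr/bin/runc`; those probe failures are harmless noise for runtimes that aren't installed. If you want to pin a specific installed runtime by default instead of passing `--runtime` each time, the standard containers.conf key is `runtime` under `[engine]`; the value `"crun"` below is just an illustration:)

```toml
# ~/.config/containers/containers.conf (per-user, rootless)
[engine]
runtime = "crun"
```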
- Literally no container will run:

```bash
$ podman run --name test --replace -d registry.access.redhat.com/rhel7-init:latest && sleep 10 && podman exec test systemctl status
Error: OCI runtime error: runc: you have no read access to runc binary file
runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer
```
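The "you have no read access to runc binary file" error means the rootless user failed a plain read/execute permission check on the runtime binary, so that is worth verifying directly. A minimal diagnostic sketch (the `check_runtime` helper is hypothetical, not part of podman):

```shell
# Hypothetical helper: report whether the current user can read and
# execute a given OCI runtime binary.
check_runtime() {
    bin="$1"
    if [ ! -e "$bin" ]; then echo "$bin: not installed"
    elif [ ! -r "$bin" ]; then echo "$bin: not readable by $(id -un)"
    elif [ ! -x "$bin" ]; then echo "$bin: not executable by $(id -un)"
    else echo "$bin: ok"
    fi
}

check_runtime /usr/bin/runc
check_runtime /usr/bin/crun

# On an SELinux host, also inspect the label (set by container-selinux):
#   ls -Z /usr/bin/runc
```

If both binaries report `ok` here but podman still fails, the denial is coming from a layer above plain DAC permissions (e.g. SELinux), not from file modes.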
- Trying a different runtime:
```bash
$ sudo dnf install crun
$ podman run --runtime crun -it --log-level=debug --net=host k8s.gcr.io/busybox sh
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --runtime crun -it --log-level=debug --net=host k8s.gcr.io/busybox sh)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/MYUSERNAME/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding run root "/run/user/1003" with "/run/user/1003/containers" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/MYUSERNAME/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1003/containers
DEBU[0000] Using static dir /home/MYUSERNAME/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1003/libpod/tmp
DEBU[0000] Using volume path /home/MYUSERNAME/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 337
DEBU[0000] Pulling image k8s.gcr.io/busybox (policy: missing)
DEBU[0000] Looking up image "k8s.gcr.io/busybox" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "k8s.gcr.io/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b"
DEBU[0000] Found image "k8s.gcr.io/busybox" as "k8s.gcr.io/busybox:latest" in local containers storage
DEBU[0000] Found image "k8s.gcr.io/busybox" as "k8s.gcr.io/busybox:latest" in local containers storage ([overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b)
DEBU[0000] Looking up image "k8s.gcr.io/busybox:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "k8s.gcr.io/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b"
DEBU[0000] Found image "k8s.gcr.io/busybox:latest" as "k8s.gcr.io/busybox:latest" in local containers storage
DEBU[0000] Found image "k8s.gcr.io/busybox:latest" as "k8s.gcr.io/busybox:latest" in local containers storage ([overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b)
DEBU[0000] Looking up image "k8s.gcr.io/busybox" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "k8s.gcr.io/busybox:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b"
DEBU[0000] Found image "k8s.gcr.io/busybox" as "k8s.gcr.io/busybox:latest" in local containers storage
DEBU[0000] Found image "k8s.gcr.io/busybox" as "k8s.gcr.io/busybox:latest" in local containers storage ([overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b)
DEBU[0000] Inspecting image e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b
DEBU[0000] Inspecting image e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b
DEBU[0000] Inspecting image e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b
DEBU[0000] Inspecting image e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b
DEBU[0000] using systemd mode: false
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
INFO[0000] Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host
DEBU[0000] Allocated lock 15 for container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e
DEBU[0000] parsed reference into "[overlay@/home/MYUSERNAME/.local/share/containers/storage+/run/user/1003/containers]@e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b"
DEBU[0000] Cached value indicated that overlay is not supported
DEBU[0000] Check for idmapped mounts support
DEBU[0000] Created container "5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e"
DEBU[0000] Container "5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e" has work directory "/home/MYUSERNAME/.local/share/containers/storage/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata"
DEBU[0000] Container "5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e" has run directory "/run/user/1003/containers/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata"
DEBU[0000] Handling terminal attach
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] overlay: mount_data=lowerdir=/home/MYUSERNAME/.local/share/containers/storage/overlay/l/7YRQJBUXV3RX76OJUKSQJNALM6:/home/MYUSERNAME/.local/share/containers/storage/overlay/l/VJKREXGSBBYSX2IECHHPMABLFN:/home/MYUSERNAME/.local/share/containers/storage/overlay/l/43UHPP2QCB3YRSY24O52URR6SP:/home/MYUSERNAME/.local/share/containers/storage/overlay/l/FQ5YYOFSRR6O7WJB5TTKXZ4LUL,upperdir=/home/MYUSERNAME/.local/share/containers/storage/overlay/1f41826ddf34fbf7c9aee628bfde00374a911699044e036465e03b3ee35ad851/diff,workdir=/home/MYUSERNAME/.local/share/containers/storage/overlay/1f41826ddf34fbf7c9aee628bfde00374a911699044e036465e03b3ee35ad851/work,,userxattr,context="system_u:object_r:container_file_t:s0:c161,c286"
DEBU[0000] Mounted container "5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e" at "/home/MYUSERNAME/.local/share/containers/storage/overlay/1f41826ddf34fbf7c9aee628bfde00374a911699044e036465e03b3ee35ad851/merged"
DEBU[0000] Created root filesystem for container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e at /home/MYUSERNAME/.local/share/containers/storage/overlay/1f41826ddf34fbf7c9aee628bfde00374a911699044e036465e03b3ee35ad851/merged
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting Cgroups for container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e to user.slice:libpod:5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-nvidia-hook.json
DEBU[0000] hook oci-nvidia-hook.json matched; adding to stages [prestart]
DEBU[0000] Workdir "/" resolved to host path "/home/MYUSERNAME/.local/share/containers/storage/overlay/1f41826ddf34fbf7c9aee628bfde00374a911699044e036465e03b3ee35ad851/merged"
DEBU[0000] Created OCI spec for container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e at /home/MYUSERNAME/.local/share/containers/storage/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e -u 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e -r /usr/bin/crun -b /home/MYUSERNAME/.local/share/containers/storage/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata -p /run/user/1003/containers/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata/pidfile -n awesome_herschel --exit-dir /run/user/1003/libpod/tmp/exits --full-attach -s -l k8s-file:/home/MYUSERNAME/.local/share/containers/storage/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/user/1003/containers/overlay-containers/5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/MYUSERNAME/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1003/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1003/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/MYUSERNAME/.local/share/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e.scope
DEBU[0000] Received: -1
Failed to re-execute libcrun via memory file descriptor
ERRO[0000] Removing container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e from runtime after creation failed
DEBU[0000] Cleaning up container 5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Unmounted container "5d732f4efaf61e384ef85ef134e878608b1a06a41b2f5612a83ab22f147dea6e"
DEBU[0000] ExitCode msg: "crun: failed to re-execute libcrun via memory file descriptor: oci runtime error"
Error: OCI runtime error: crun: Failed to re-execute libcrun via memory file descriptor
```
**Describe the results you received:**
`nsexec[15496]: could not ensure we are a cloned binary: operation not permitted`
or with `crun` runtime
```bash
DEBU[0000] ExitCode msg: "crun: failed to re-execute libcrun via memory file descriptor: oci runtime error"
Error: OCI runtime error: crun: Failed to re-execute libcrun via memory file descriptor
```
**Describe the results you expected:**

Expected containers to just run; I'm running the simplest of containers and have been doing this for years.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `podman version`:**

```bash
$ podman version
Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.17.7
Built:        Mon Jul 11 10:56:53 2022
OS/Arch:      linux/amd64
```
**Output of `podman info`:**

```yaml
host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.2-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: 8c4f33ac0dcf558874b453d5027028b18d1502db'
  cpuUtilization:
    idlePercent: 99.86
    systemPercent: 0.06
    userPercent: 0.08
  cpus: 112
  distribution:
    distribution: '"rhel"'
    version: "8.6"
  eventLogger: file
  hostname: COMPNAME
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1004
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
  kernel: 4.18.0-372.19.1.el8_6.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 802754818048
  memTotal: 810077761536
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.3-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    path: /run/user/1003/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+15917+093ca6f8.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 34359734272
  swapTotal: 34359734272
  uptime: 1h 35m 8.95s (Approximately 0.04 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/MYUSERNAME/.config/containers/storage.conf
  containerStore:
    number: 12
    paused: 0
    running: 0
    stopped: 12
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/MYUSERNAME/.local/share/containers/storage
  graphRootAllocated: 536608768000
  graphRootUsed: 5120098304
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 7
  runRoot: /run/user/1003/containers
  volumePath: /home/MYUSERNAME/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1657551413
  BuiltTime: Mon Jul 11 10:56:53 2022
  GitCommit: ""
  GoVersion: go1.17.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1
```
**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

```bash
$ rpm -q podman
podman-4.1.1-2.module+el8.6.0+15917+093ca6f8.x86_64
```
**Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)**
Yes - I have pored over that thing for the past couple of days. I've fixed cgroups, kernel boot command lines, and cgroup CPU namespaces.
**Additional environment details (AWS, VirtualBox, physical, etc.):** physical
A similar recent error was reported here: https://github.com/containers/podman/issues/15432
```bash
$ stat -c %T -f /sys/fs/cgroup
cgroup2fs
$ cat /sys/fs/cgroup/cgroup.subtree_control
cpuset io memory hugetlb pids rdma
```

On my OTHER RHEL 8.6 machine, which works fine:

```bash
$ cat /sys/fs/cgroup/cgroup.subtree_control
memory pids
```
**About this issue**

- Original URL
- State: closed
- Created 2 years ago
- Comments: 17 (9 by maintainers)
SELinux is enabled. Try

```bash
$ sudo dnf -y reinstall container-selinux
$ restorecon -R -f $HOME
```

and it should start working.
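Note that the report itself hints at this: despite SELinux supposedly being disabled, `podman info` shows `selinuxEnabled: true`. Before relabeling, it is worth confirming which case you are actually in; a small sketch, assuming `getenforce` from policycoreutils (the `selinux_mode` helper is illustrative, not a real tool):

```shell
# Sketch: decide whether the container-selinux/restorecon fix is relevant.
selinux_mode() {
    if command -v getenforce >/dev/null 2>&1; then
        getenforce          # prints Enforcing, Permissive, or Disabled
    else
        echo "Disabled"     # no SELinux tooling present at all
    fi
}

case "$(selinux_mode)" in
    Enforcing|Permissive)
        echo "SELinux is active: reinstall container-selinux and run restorecon" ;;
    *)
        echo "SELinux is inactive: mislabeled files are not the problem" ;;
esac
```

If the mode is Enforcing, mislabeled files under `$HOME` (e.g. after copying a home directory or changing UIDs) are a common cause of rootless podman failing this way, which is what the `restorecon -R` above repairs.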