podman: [rootless] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
Kind: bug
Description
A rootless container run sees the following error in the debug output:
WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF
Full output below:
[vagrant@vanilla-rawhide-atomic srv]$ alias cass='podman --log-level debug run --rm -ti -v ${PWD}:/srv/ ${COREOS_ASSEMBLER_CONFIG_GIT:+-v $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} --workdir /srv --device /dev/kvm ca'
[vagrant@vanilla-rawhide-atomic srv]$
[vagrant@vanilla-rawhide-atomic srv]$ cass init
INFO[0000] running as rootless
DEBU[0000] Not configuring container store
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Adding mount /run
DEBU[0000] Adding mount /run/lock
DEBU[0000] Adding mount /sys/fs/cgroup/systemd
DEBU[0000] Adding mount /tmp
DEBU[0000] Adding mount /var/log/journal
INFO[0000] running as rootless
DEBU[0000] [graphdriver] trying provided driver "vfs"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]localhost/ca:latest"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] Creating dest directory: /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c
DEBU[0000] Calling TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc, /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)
DEBU[0000] TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)
DEBU[0030] created container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has work directory "/var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has run directory "/run/user/1000/run/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"
DEBU[0030] New container created "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has CgroupParent "/libpod_parent/libpod-c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] Handling terminal attach
DEBU[0030] mounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" at "/var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c"
DEBU[0030] Created root filesystem for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c
WARN[0030] error mounting secrets, skipping: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets: permission denied
DEBU[0030] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] Created OCI spec for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata/config.json
DEBU[0030] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0030] running conmon: /usr/libexec/crio/conmon args=[-c c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -u c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -r /usr/bin/runc -b /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata -p /run/user/1000/ru]
WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF
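The warning itself is easy to reproduce outside podman: on a cgroup v1 host the name=systemd hierarchy is owned by root, so an unprivileged user cannot mkdir beneath it. A minimal probe (a sketch; the path is the one from the warning above):

```shell
# Probe whether the current user could create the cgroup that conmon
# attempts (/sys/fs/cgroup/systemd/libpod_parent). Rootless users on a
# cgroup v1 host normally cannot write here, hence 'permission denied'.
dir=/sys/fs/cgroup/systemd
if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "writable: cgroup creation under $dir should succeed"
else
    echo "not writable: expect 'permission denied' when rootless"
fi
```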
Steps to reproduce the issue:
- boot a rawhide VM
- rootless podman build -t ca with the Dockerfile/context from https://github.com/dustymabe/coreos-assembler/tree/7cd95023aa0d7f6ccee2e57f6006e8e9978313f8 (this takes a lot of space; see the buildah issue)
- try to run a container using the built image
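The steps above, consolidated into a script. This is a sketch: the clone URL and commit come from the report's link, and `run` only prints each command so the sequence can be reviewed; remove the echo to actually execute it inside the rawhide VM.

```shell
# Repro sketch for the report above. 'run' prints commands instead of
# executing them; drop the echo to run for real on the rawhide VM.
run() { echo "+ $*"; }

# 1. (after booting the rawhide VM) fetch the build context at the
#    commit linked in the report
run git clone https://github.com/dustymabe/coreos-assembler
run git -C coreos-assembler checkout 7cd95023aa0d7f6ccee2e57f6006e8e9978313f8
# 2. rootless build (needs a lot of disk space; see the buildah issue)
run podman build -t ca coreos-assembler
# 3. running a container from the built image triggers the warning
run podman --log-level debug run --rm -ti --device /dev/kvm ca
```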
Describe the results you received:
error
Describe the results you expected:
no error
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
[vagrant@vanilla-rawhide-atomic srv]$ rpm -q podman
podman-0.9.4-1.dev.gitaf791f3.fc30.x86_64
[vagrant@vanilla-rawhide-atomic srv]$ podman version
Version: 0.9.4-dev
Go Version: go1.11
OS/Arch: linux/amd64
Output of podman info:
[vagrant@vanilla-rawhide-atomic srv]$ podman info
host:
  Conmon:
    package: conmon-1.12.0-12.dev.gitc4f232a.fc29.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 1.12.0-dev, commit: ed74efc8af284f786e041e8a98a910db4b2c0ec7'
  MemFree: 143093760
  MemTotal: 4133531648
  OCIRuntime:
    package: runc-1.0.0-54.dev.git00dc700.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc5+dev
      commit: b96b63adc3dd5b354bb2a39bb8cc4659f979c0a4
      spec: 1.0.0
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: vanilla-rawhide-atomic
  kernel: 4.19.0-0.rc5.git0.1.fc30.x86_64
  os: linux
  uptime: 4h 4m 57.82s (Approximately 0.17 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 2
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /var/home/vagrant/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 12
  RunRoot: /run/user/1000/run
Additional environment details (AWS, VirtualBox, physical, etc.):
vagrant libvirt rawhide atomic host VM
About this issue
- State: closed
- Created 6 years ago
- Comments: 48 (33 by maintainers)
I have podman embedded in a CI setup and use different log levels tuned by a global setting, but I opted to make warning the default, since most warnings point at lingering issues. This one is tainting the setup, and I had to go back to the error level.
Warnings should be reserved for potential errors or misconfigurations IMHO, not for future development.
Hi Giuseppe, this (cosmetic) warning does cause confusion for customers, who consider it to be the cause of later failures (which have different causes). I find myself needing to lower the log level on some podman operations to ease adoption. Could we use a different priority/wording, or smarter detection so that no warning is issued when cgroups v2 is absent?
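Smarter detection along those lines is plausible: the unified (v2) hierarchy exposes a cgroup.controllers file at its root, while v1/hybrid hosts mount the legacy named hierarchies there instead. A sketch of the check (the marker file is standard; treating its absence as v1/hybrid is an assumption about the mount layout):

```shell
# Distinguish cgroup v2 (unified) from v1/hybrid: cgroup.controllers only
# exists at the root of the unified hierarchy, so its presence means v2.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    mode="v2 (unified)"
else
    mode="v1 (legacy or hybrid)"
fi
echo "cgroup $mode"
```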
@mheon @ansemjo patch here: https://github.com/containers/libpod/pull/1761
From a quick test, systemd boots in the rootless container:
(If you actually want rootless systemd containers, they will probably not work, but at least we can give a better error message…)
So, can I close this issue, @dustymabe?