podman: [rootless] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied

/kind bug

Description

A rootless container run shows the following error in the debug output:

WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF

Full output below:

[vagrant@vanilla-rawhide-atomic srv]$ alias cass='podman --log-level debug run --rm -ti -v ${PWD}:/srv/ ${COREOS_ASSEMBLER_CONFIG_GIT:+-v  $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} --workdir /srv --device /dev/kvm ca'                    
[vagrant@vanilla-rawhide-atomic srv]$
[vagrant@vanilla-rawhide-atomic srv]$ cass init
INFO[0000] running as rootless
DEBU[0000] Not configuring container store
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Adding mount /run
DEBU[0000] Adding mount /run/lock
DEBU[0000] Adding mount /sys/fs/cgroup/systemd
DEBU[0000] Adding mount /tmp
DEBU[0000] Adding mount /var/log/journal
INFO[0000] running as rootless
DEBU[0000] [graphdriver] trying provided driver "vfs"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]localhost/ca:latest"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] Creating dest directory: /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c
DEBU[0000] Calling TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc, /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)                                                       
DEBU[0000] TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)                                                                
DEBU[0030] created container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has work directory "/var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"                                                                                   
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has run directory "/run/user/1000/run/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"                                                                                                                   
DEBU[0030] New container created "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has CgroupParent "/libpod_parent/libpod-c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"                                                                                                                                         
DEBU[0030] Handling terminal attach
DEBU[0030] mounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" at "/var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c"                                                                                                           
DEBU[0030] Created root filesystem for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c                                                                                           
WARN[0030] error mounting secrets, skipping: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets: permission denied
DEBU[0030] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] Created OCI spec for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata/config.json                                                                      
DEBU[0030] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0030] running conmon: /usr/libexec/crio/conmon      args=[-c c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -u c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -r /usr/bin/runc -b /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata -p /run/user/1000/ru]
WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF
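The underlying failure is easy to confirm by hand: on a cgroup v1 host, an unprivileged user has no write access at the root of the systemd cgroup hierarchy, so the mkdir that the conmon cgroup setup needs is denied. An illustrative transcript (not from the original report):

[vagrant@vanilla-rawhide-atomic srv]$ mkdir /sys/fs/cgroup/systemd/libpod_parent
mkdir: cannot create directory '/sys/fs/cgroup/systemd/libpod_parent': Permission denied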

Steps to reproduce the issue:

  1. boot rawhide VM

  2. rootless podman build -t ca with the Dockerfile/context from: https://github.com/dustymabe/coreos-assembler/tree/7cd95023aa0d7f6ccee2e57f6006e8e9978313f8 (this takes a lot of disk space; see the related buildah issue)

  3. try to run a container using the built image (a smaller reproducer is sketched below)
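The large coreos-assembler image should not be essential here. Assuming the warning depends only on the host's cgroup setup and not on the image (my assumption, not verified in the original report), a minimal reproducer along these lines should surface the same WARN line:

$ podman --log-level debug run --rm registry.fedoraproject.org/fedora true 2>&1 | grep -i cgroup

Any stock image should do; the command simply filters the debug output for the cgroup-related messages.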

Describe the results you received:

The run emits the cgroupfs permission warning and then fails with "error reading container (probably exited) json message: EOF".

Describe the results you expected:

The container starts without the cgroup warning or the subsequent EOF error.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

[vagrant@vanilla-rawhide-atomic srv]$ rpm -q podman
podman-0.9.4-1.dev.gitaf791f3.fc30.x86_64
[vagrant@vanilla-rawhide-atomic srv]$ podman version
Version:       0.9.4-dev
Go Version:    go1.11
OS/Arch:       linux/amd64

Output of podman info:

[vagrant@vanilla-rawhide-atomic srv]$ podman info
host:
  Conmon:
    package: conmon-1.12.0-12.dev.gitc4f232a.fc29.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 1.12.0-dev, commit: ed74efc8af284f786e041e8a98a910db4b2c0ec7'
  MemFree: 143093760
  MemTotal: 4133531648
  OCIRuntime:
    package: runc-1.0.0-54.dev.git00dc700.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc5+dev
      commit: b96b63adc3dd5b354bb2a39bb8cc4659f979c0a4
      spec: 1.0.0
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: vanilla-rawhide-atomic
  kernel: 4.19.0-0.rc5.git0.1.fc30.x86_64
  os: linux
  uptime: 4h 4m 57.82s (Approximately 0.17 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 2
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /var/home/vagrant/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 12
  RunRoot: /run/user/1000/run

Additional environment details (AWS, VirtualBox, physical, etc.):

vagrant libvirt rawhide atomic host VM

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 48 (33 by maintainers)

Most upvoted comments

I have podman embedded in a CI setup and tune the log level through a global setting. I chose warning as the default, since most warnings point at lingering issues, but this one taints the output and forced me back to the error level.

IMHO, warnings should be raised for potential errors or misconfigurations, not for functionality that is still under development.

Hi Giuseppe, this (cosmetic) warning does cause confusion for customers, who take it to be the cause of later failures that actually have different roots. I find myself needing to lower the log level on some podman operations to ease adoption. Could we use a different priority or wording, or smarter detection that skips the warning when there is no cgroups v2?
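For reference, which hierarchy a host is using can be checked by looking at the filesystem type mounted at /sys/fs/cgroup (standard coreutils stat; the type names come from the kernel):

$ stat -fc %T /sys/fs/cgroup
tmpfs

Here tmpfs indicates a v1/hybrid hierarchy; cgroup2fs would indicate the unified v2 hierarchy.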

@mheon @ansemjo patch here: https://github.com/containers/libpod/pull/1761

From a quick test, systemd boots in the rootless container:

$ bin/podman run --rm -it docker.io/fedora /sbin/init
systemd 238 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization container-other.
Detected architecture x86-64.

Welcome to Fedora 28 (Twenty Eight)!

Set hostname to <helium>.
Initializing machine ID from random generator.
Couldn't move remaining userspace processes, ignoring: Input/output error
File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Listening on Process Core Dump Socket.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Reached target Slices.
[  OK  ] Reached target Swap.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Listening on Journal Socket.
         Starting Journal Service...
         Starting Create System Users...
[  OK  ] Reached target Paths.
[  OK  ] Reached target Local File Systems.
         Starting Rebuild Dynamic Linker Cache...
         Starting Rebuild Journal Catalog...
[  OK  ] Started Journal Service.
[  OK  ] Started Create System Users.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Rebuild Journal Catalog.
[  OK  ] Started Flush Journal to Persistent Storage.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Rebuild Dynamic Linker Cache.
         Starting Update is Completed...
[  OK  ] Started Update is Completed.
[  OK  ] Started Create Volatile Files and Directories.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Reached target System Initialization.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Started dnf makecache timer.
[  OK  ] Reached target Basic System.
         Starting Permit User Sessions...
[  OK  ] Started D-Bus System Message Bus.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
[  OK  ] Started Permit User Sessions.
[  OK  ] Reached target Multi-User System.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.
         Unmounting /var/log/journal...
[  OK  ] Stopped target Multi-User System.
         Stopping D-Bus System Message Bus...
[  OK  ] Stopped target Timers.
         Stopping Permit User Sessions...
[  OK  ] Stopped Daily Cleanup of Temporary Directories.
[  OK  ] Stopped D-Bus System Message Bus.

(If you actually want rootless systemd containers, they will probably not work, but at least we can give a better error message…)

So can I close this issue? @dustymabe ?