podman: 'podman start' fails with 'setrlimit `RLIMIT_NPROC`: Operation not permitted: OCI permission denied'

### Issue Description

Preexisting Toolbx containers can no longer be started after a dnf update on Fedora 38 Workstation.

Highlights from the dnf update:

  • crun-1.8.5-1.fc38.x86_64 to crun-1.8.6-1.fc38.x86_64
  • podman-5:4.5.1-1.fc38.x86_64 to podman-5:4.6.0-1.fc38.x86_64

Toolbx containers are interactive command-line environments that are meant to be long-lasting pet containers. Therefore, it’s important that containers created by older versions of the tools can be used with newer versions.

If necessary, I am happy to change the configuration with which new Toolbx containers are created, but we would need a sufficient migration window for users with pre-existing older containers.

Here’s an attempt to podman start a container created with toolbox create and the older version of the Podman stack:

$ podman --log-level debug start --attach fedora-toolbox-38
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman --log-level debug start --attach fedora-toolbox-38) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/rishi/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/rishi/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/rishi/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/rishi/.local/share/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 49             
INFO[0000] Received shutdown.Stop(), terminating!        PID=69527
DEBU[0000] Enabling signal proxying                     
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] overlay: mount_data=lowerdir=/home/rishi/.local/share/containers/storage/overlay/l/B6GL45D333VZL42EE6M67UZ4I4:/home/rishi/.local/share/containers/storage/overlay/l/B6GL45D333VZL42EE6M67UZ4I4/../diff1:/home/rishi/.local/share/containers/storage/overlay/l/I3MT3GFV2QVA2ZJ2PCN7UZRZTZ:/home/rishi/.local/share/containers/storage/overlay/l/2KCBRARJTGBTMSG6OQ6A6YW643,upperdir=/home/rishi/.local/share/containers/storage/overlay/9e84edd327b8c418cf3ef92f62edcaa54df1499d3529eb323fb75151a2590ca9/diff,workdir=/home/rishi/.local/share/containers/storage/overlay/9e84edd327b8c418cf3ef92f62edcaa54df1499d3529eb323fb75151a2590ca9/work,,userxattr,context="system_u:object_r:container_file_t:s0:c1022,c1023" 
DEBU[0000] Mounted container "ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99" at "/home/rishi/.local/share/containers/storage/overlay/9e84edd327b8c418cf3ef92f62edcaa54df1499d3529eb323fb75151a2590ca9/merged" 
DEBU[0000] Created root filesystem for container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 at /home/rishi/.local/share/containers/storage/overlay/9e84edd327b8c418cf3ef92f62edcaa54df1499d3529eb323fb75151a2590ca9/merged 
DEBU[0000] Not modifying container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 /etc/passwd 
DEBU[0000] Not modifying container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 /etc/group 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting Cgroups for container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 to user.slice:libpod:ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 
DEBU[0000] Set root propagation to "rslave"             
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/rishi/.local/share/containers/storage/overlay/9e84edd327b8c418cf3ef92f62edcaa54df1499d3529eb323fb75151a2590ca9/merged" 
DEBU[0000] Created OCI spec for container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 at /home/rishi/.local/share/containers/storage/overlay-containers/ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 -u ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 -r /usr/bin/crun -b /home/rishi/.local/share/containers/storage/overlay-containers/ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99/userdata -p /run/user/1000/containers/overlay-containers/ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99/userdata/pidfile -n fedora-toolbox-38 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/rishi/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/rishi/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99.scope 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] Unmounted container "ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99" 
Error: unable to start container ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99: crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

setrlimit `RLIMIT_NPROC`: Operation not permitted: OCI permission denied
DEBU[0000] Shutting down engines                        

As far as I can make out, Toolbx containers created with the new version of the Podman stack can be started with it.
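
For reference, the rlimits that end up in the generated OCI spec can be checked directly. This is only a diagnostic sketch, not something from the failed run above: the config.json path is the one printed in the debug log (it may not survive the clean-up after the failed start), jq is assumed to be installed, and the HostConfig.Ulimits template path is assumed to match Podman's Docker-compatible inspect output.

$ podman inspect --format '{{ .HostConfig.Ulimits }}' fedora-toolbox-38
$ jq '.process.rlimits' /home/rishi/.local/share/containers/storage/overlay-containers/ef7f514e176ea59f18603055c5cbd776be2a8fdb2dcd1fceb481da5c4bf51b99/userdata/config.json

Comparing those values against a container freshly created with the newer Podman stack should show which limit changed.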

### Steps to reproduce the issue

  1. Create a Toolbx container with toolbox create using crun-1.8.5-1.fc38.x86_64, podman-5:4.5.1-1.fc38.x86_64, etc. on Fedora 38 Workstation

  2. dnf update to crun-1.8.6-1.fc38.x86_64 and podman-5:4.6.0-1.fc38.x86_64

  3. Reboot

  4. Try podman start ... (roughly the sequence sketched below)
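
In shell terms, the reproduction is roughly the following. The exact dnf invocation is approximate, and fedora-toolbox-38 is the name that toolbox create gives the default container on Fedora 38:

$ toolbox create
$ sudo dnf update crun podman
$ systemctl reboot
$ podman start --attach fedora-toolbox-38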

### Describe the results you received

podman start ... fails with:

...
setrlimit `RLIMIT_NPROC`: Operation not permitted: OCI permission denied

### Describe the results you expected

podman start should succeed.

### podman info output

host:
  arch: amd64
  buildahVersion: 1.31.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 99.17
    systemPercent: 0.26
    userPercent: 0.56
  cpus: 16
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "38"
  eventLogger: journald
  freeLocks: 2041
  hostname: topinka
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.4.10-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 20824010752
  memTotal: 33536196608
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.6-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.6
      commit: 73f759f4a39769f60990e7d225f561b4f4f06bcf
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230625.g32660ce-1.fc38.x86_64
    version: |
      pasta 0^20230625.g32660ce-1.fc38.x86_64
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 6h 45m 6.00s (Approximately 0.25 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/rishi/.config/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 0
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/rishi/.local/share/containers/storage
  graphRootAllocated: 1695606808576
  graphRootUsed: 285421572096
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 10
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/rishi/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.0
  Built: 1689942206
  BuiltTime: Fri Jul 21 14:23:26 2023
  GitCommit: ""
  GoVersion: go1.20.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.0


### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

No

### Additional environment details

Fedora 38 Workstation

### Additional information

Toolbx containers are interactive command-line environments that are meant to be long-lasting pet containers.  Therefore, it's important that containers created by older versions of the tools can be used with newer versions.


### Most upvoted comments

> it looks like the only solution is to recreate the container.

Sadly, as I mentioned in the report, that’s a deal breaker for Toolbx containers. 😕

There’s a Fedora 39 Change to treat the Toolbx stack as a release blocker. I think one of the test criteria is about preexisting containers continuing to work.

Well, who brokered the deal with whom? Podman containers are certainly not long-term data storage. Toolbx uses (and advertises) them for a specific purpose, and changes quite a few of the standard podman options when it creates containers. Is there no way Toolbx could reset the ulimit for an existing container? After all, it would be “fair” to ask users to operate on Toolbx containers using Toolbx (rather than podman, if it fails), and Toolbx is able to recognize Toolbx containers as such (i.e. distinguish them from non-Toolbx containers).

Maybe, as a middle ground, podman could offer an option to migrate old containers (by resetting the ulimit setting or clearing it), and leave it for Toolbx (or the user) to decide when and if they use it?
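
To illustrate that last point: a hypothetical migration helper would only need to enumerate Toolbx containers and then recreate, or adjust, each one. A rough sketch, assuming the com.github.containers.toolbox label that Toolbx sets on its containers (the label name is an assumption here, and no such migration option exists in podman today):

$ podman ps --all --filter label=com.github.containers.toolbox=true --format '{{ .Names }}'

Whether each listed container should then have its stored ulimit reset or simply be recreated is exactly the open question above.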