podman: Rootless podman error: could not get runtime: open /proc/31678/ns/user: no such file or directory

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Running any podman command (for example podman info or podman version) as a non-root user gives me only the following output:

% podman info
Error: could not get runtime: open /proc/31678/ns/user: no such file or directory

It works when running as root.

Steps to reproduce the issue:

  1. Run podman info

Describe the results you received:

Error: could not get runtime: open /proc/31678/ns/user: no such file or directory

Describe the results you expected: The actual command output…

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

% sudo podman version
Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.7
OS/Arch:            linux/amd64

Output of podman info --debug:

Unobtainable (podman info fails with the error shown above).

Package info (e.g. output of rpm -q podman or apt list podman):

% xbps-query podman
architecture: x86_64
filename-sha256: 514f725a15bc57ec717606103eaedd77f603d62c3b29007166ef9d235b308ac2
filename-size: 18MB
homepage: https://podman.io/
install-date: 2020-02-24 09:11 CET
installed_size: 54MB
license: Apache-2.0
maintainer: Cameron Nemo <camerontnorman@gmail.com>
metafile-sha256: 347e1471b2966a2350df5661216db7a1f60103544ebbe406d8e4a48dde169782
pkgver: podman-1.8.0_1
repository: https://alpha.de.repo.voidlinux.org/current
shlib-requires:
	libpthread.so.0
	libgpgme.so.11
	libassuan.so.0
	libgpg-error.so.0
	libseccomp.so.2
	librt.so.1
	libdevmapper.so.1.02
	libc.so.6
short_desc: Simple management tool for containers and images
source-revisions: podman:4351b38206
state: installed

Additional environment details (AWS, VirtualBox, physical, etc.): Physical box running Void Linux.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 67 (30 by maintainers)

Most upvoted comments

Just FYI, I hit the same on RHEL 8.2 using podman-1.6.4-11.module+el8.2.0+6368+cf16aa14.x86_64 after the system was unexpectedly rebooted. Cleaning up /tmp/run-1001/libpod/pause.pid (where /tmp is not on a tmpfs) fixed it.
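
A sketch of that cleanup for a generic user; the libpod/pause.pid path is taken from the report above, with $(id -u) standing in for the 1001 in the reported path:

$ rm /tmp/run-$(id -u)/libpod/pause.pid   # remove the stale pre-reboot pause PID file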

I resolved this issue with podman 1.6.4 on CentOS 8.1.1911 by adding the following to the end of /usr/lib/tmpfiles.d/tmp.conf:

R! /tmp/run-992

This way the directory is removed on reboot.

992 is the user ID of the account running rootless podman.
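
The same rule can also go into a drop-in under /etc/tmpfiles.d instead of editing the vendor file in /usr/lib/tmpfiles.d (a sketch; the file name podman-tmp.conf is my own choice, and 992 should be replaced with your UID):

$ echo 'R! /tmp/run-992' | sudo tee /etc/tmpfiles.d/podman-tmp.conf   # "!" lines are applied only at boot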

OK, forget the workaround from my last comment (https://github.com/containers/libpod/issues/5306#issuecomment-616411969).

I figured out that this error occurs if I have a running rootless container and reboot the machine. It fails with the standard storage.conf under .config/containers/ as well as with a modified configuration. It works for me if I delete /var/tmp/run-<uid> (see the sketch after this comment); afterwards the next podman command re-creates this directory and all podman commands work normally again. I tested this by running podman run -dt --rm -P nginx with both a default storage.conf and a modified one. If I only have the image and no container is running when I reboot, everything works; the problem occurs only with running containers. If I run the nginx container as root (with a modified storage.conf under /etc/containers/) and reboot, I don't have any problem.

Another point I don't understand: I changed the runroot directory in storage.conf (the default is /var/tmp/run-<uid>), but podman still uses that directory to look for the user namespace, which ends with the error from this issue.
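
A sketch of the deletion workaround described above (the /var/tmp/run-<uid> path comes from the comment; substitute your own UID):

$ rm -rf /var/tmp/run-$(id -u)   # podman re-creates this directory on the next command
$ podman info                    # should now work without the user-namespace error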

Confirmed when tmp_dir is not on tmpfs (the tmp.mount systemd unit is masked):

$ podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.13.4
OS/Arch:            linux/amd64

$ cat /etc/redhat-release 
CentOS Linux release 8.2.2004 (Core) 

$ systemctl status tmp.mount
● tmp.mount
   Loaded: masked (Reason: Unit tmp.mount is masked.)
   Active: inactive (dead)

rm -rf /tmp/run-$(id -u)/ resolves the issue without a reboot.
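
To check whether /tmp is actually on tmpfs before reaching for the workaround (findmnt is part of util-linux):

$ findmnt -n -o FSTYPE /tmp   # prints "tmpfs" when /tmp is a tmpfs; anything else persists across reboots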

@faulesocke Are you on Podman 1.8.1? That should have the fix.

If not, can you try removing /run/user/$UID/libpod/pause.pid and see if that fixes it (replace $UID with the UID of the user running Podman)?
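
A sketch of that suggestion, using id -u to fill in the UID:

$ rm /run/user/$(id -u)/libpod/pause.pid   # remove the stale pause PID file
$ podman info                              # re-test after removal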

Please search for conmon.pid under your runroot's overlay-containers/… folder and delete the file. Delete all content in the /tmp/runXXXX folder. Stop and start podman. No reboot is needed. Please give feedback if this solution works for you.
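
A sketch of that cleanup; the /tmp/run-<uid> runroot below is an assumption based on the defaults discussed in this thread (check runroot in your storage.conf):

$ find /tmp/run-$(id -u) -name conmon.pid -delete   # delete stale conmon PID files under the runroot
$ rm -rf /tmp/run-$(id -u)/*                        # then clear the rest of the stale runtime state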

Well, if the content is always placed under /tmp, then I believe we can set up a tmpfiles.d rule to look for the podman directory anywhere under /tmp and remove its content. We just need a path for the tmpfiles.d file to look for.
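
A minimal sketch of such a rule, assuming the /tmp/run-<uid>/libpod layout seen earlier in this thread (tmpfiles.d removal lines accept shell-style globs; the drop-in file name is hypothetical):

# /etc/tmpfiles.d/podman-tmp.conf
R! /tmp/run-*/libpod   # remove stale rootless libpod state at boot, even when /tmp is not tmpfs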

@rhatdan: To summarize my impression from this conversation: podman stores runtime data somewhere and "relies on the … directory being cleaned on a reboot". But if the user runs containers under an unprivileged user (via sudo/su), the crucial $XDG_RUNTIME_DIR pointing to /run is not defined. Podman then most likely falls back to /tmp, which in e.g. CentOS 8 cloud images sits by default on a persistent rootfs (not tmpfs) and is not cleaned on reboot. Moreover, even podman 2.0+ (referring to the current version 2.0.5 in CentOS 8 Stream) still does not handle this situation, and the user is left in a broken state after a reboot. The fact that they need to manually clean something in /tmp, ensure /tmp is mounted on tmpfs, or explicitly export $XDG_RUNTIME_DIR…

…results in a terrible experience of running rootless containers, and this approach is (sadly) quickly abandoned in favour of the relatively less problematic Docker UX. I don't think you CAN "rely" on a thing which is obviously not satisfied in so many cases.
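
An illustration of the $XDG_RUNTIME_DIR workaround mentioned above (a sketch; it assumes the systemd-managed /run/user/<uid> directory exists for your session, e.g. after a normal login or loginctl enable-linger):

$ export XDG_RUNTIME_DIR=/run/user/$(id -u)   # point podman at the per-user tmpfs runtime dir
$ podman info                                 # runtime state now lives on a path cleaned at reboot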

Can you open a fresh issue? This one is getting rather long, and this definitely should be fixed.

That is definitely not a situation that we consider safe: Podman's rundir absolutely needs to be on a tmpfs for safety across reboots.

I have this issue too after each reboot of the host system. Only a podman system migrate resolves it temporarily (until the next boot). I can't downgrade or upgrade the package, as this is a new Void Linux installation and only the podman 1.8.0 package is available.
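
For reference, the temporary workaround named above (podman system migrate stops the rootless pause process, so podman sets up a fresh user namespace on the next command):

$ podman system migrate   # stops the pause process and resets rootless state
$ podman info             # works again until the next reboot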