podman: error creating libpod runtime: there might not be enough IDs available in the namespace

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I have RHEL servers in the 7.x range (I think they are 7.4 or 7.5) that we currently run containers on with docker-compose. I went to a Red Hat conference and learned about Podman, so I want to use Podman in production to help us get away from the big fat daemons and to stop running containers as root.

To that end I have created a CentOS 7.5 VM on my laptop and installed Podman, but I cannot seem to get the uidmap functionality to work.

I'm hoping that once we solve this uidmap bug I'm encountering, we can take the same setup and run it on a RHEL 7.4 server.

On the RHEL 7.4 servers we can only operate as a regular user, so we need to figure out rootless Podman.

I understand that some changes to the OS are needed and that we need administrative control to make them, such as the subuid and subgid entries and the kernel parameters to enable user namespaces. We can do that. But on a day-to-day basis, including running the production containers, we have to be able to run rootless Podman and back up and recover the files as the same regular user (not root).

In addition, I'm not sure how to map an existing user in the container image, for example mongod (the MongoDB user), to the regular server user, but that's maybe getting ahead of ourselves.

Steps to reproduce the issue:

  1. clean CentOS 7.5 VM
  2. logged in as a regular user called "meta" (not root)
  3. sudo grubby --args="namespace.unpriv_enable=1 user_namespace.enable=1" --update-kernel="/boot/vmlinuz-3.10.0-957.5.1.el7.x86_64"
  4. sudo yum -y update && sudo yum install -y podman
  5. sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
  6. sudo echo 'meta:100000:65536' >> /etc/subuid
  7. sudo echo 'meta:100000:65536' >> /etc/subgid
  8. sudo reboot
  9. podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000
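A side note on steps 5-7: with `sudo echo 'x' >> file`, the shell performs the `>>` redirection as the invoking user before sudo ever runs, so the append to a root-owned file can silently fail. A hedged sketch of a root-safe append, demonstrated against a temp file rather than the real /etc paths:

```shell
# The shell opens the `>>` target *before* sudo runs, so
# `sudo echo 'x' >> /etc/subuid` appends as the unprivileged user.
# Piping into tee makes the privileged process do the write instead:
#   echo 'meta:100000:65536' | sudo tee -a /etc/subuid
# Demonstrated here against a temp file (no sudo needed for the demo):
subuid=$(mktemp)                      # stand-in for /etc/subuid
echo 'meta:100000:65536' | tee -a "$subuid" >/dev/null
grep '^meta:' "$subuid"               # prints: meta:100000:65536
rm -f "$subuid"
```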

Describe the results you received:

Error: error creating libpod runtime: there might not be enough IDs available in the namespace (requested 100000:100000 for /home/meta/.local/share/containers/storage/vfs): chown /home/meta/.local/share/containers/storage/vfs: invalid argument

Describe the results you expected:

I expected a running pod/container that I could exec into and create files in as user root.

Upon exiting the container, I expect those files to be owned by user "meta".

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.3.2
RemoteAPI Version:  1
Go Version:         go1.10.3
OS/Arch:            linux/amd64

Output of podman info --debug:

WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids 
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 1.3.2
host:
  BuildahVersion: 1.8.2
  Conmon:
    package: podman-1.3.2-1.git14fdcd0.el7.centos.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: e0b5a754190a3c24175944ff64fa7add6c8b0431-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 410226688
  MemTotal: 3973316608
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: min0-kube0
  kernel: 3.10.0-957.21.3.el7.x86_64
  os: linux
  rootless: true
  uptime: 2h 25m 41.8s (Approximately 0.08 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /home/meta/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/meta/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /tmp/1000
  VolumePath: /home/meta/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

CentOS 7.5 VM, set up with the commands listed in the reproduction steps above.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 66 (43 by maintainers)

Most upvoted comments

For the record and future googlers:

I had the same issue (there might not be enough IDs available in the namespace (requested 0:42 for /etc/shadow): lchown /etc/shadow: invalid argument). In my case I had /etc/subuid configured for my user (echo ${LOGNAME}:100000:65536 > /etc/subuid), but had failed to do the same for /etc/subgid. A warning pointing to /etc/subgid was shown on podman build. The problem persisted after that though, and doing podman unshare cat /proc/self/uid_map showed:

$ podman unshare cat /proc/self/uid_map
         0       1000          1

Unfortunately I couldn't find what it should show, so in a moment of desperation I also executed podman system migrate. It printed nothing, but afterwards things started to work!

$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
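For anyone puzzling over those numbers: each uid_map row is container-start, host-start, length. A small sketch (using a here-doc copy of the working two-row mapping above, rather than reading /proc/self/uid_map live) that translates a container UID to its host UID:

```shell
# uid_map columns: container-start  host-start  length.
# Translate container UID 1000 against the working mapping shown above.
awk -v uid=1000 '
  uid >= $1 && uid < $1 + $3 { print $2 + (uid - $1); exit }
' <<'EOF'
0 1000 1
1 100000 65536
EOF
# container UID 0 maps to host UID 1000; container UID 1000 falls in the
# second range: 100000 + (1000 - 1) = 100999
```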

So, if you can:

  1. Please add a pointer to this somewhere in the documentation, including the expected outputs
  2. Please make podman system migrate say something

I had a similar problem on openSUSE Tumbleweed. I had run the following commands:

zypper in podman
echo "jon:100000:65536" >> /etc/subuid
echo "jon:100000:65536" >> /etc/subgid

But I was still getting errors like this:

Error: error creating libpod runtime: there might not be enough IDs available in the namespace (requested 100000:100000 for /home/jon/.local/share/containers/storage/overlay/l): chown /home/jon/.local/share/containers/storage/overlay/l: invalid argument

And this:

Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 0:42 for /etc/shadow): lchown /etc/shadow: invalid argument

I could see that newuidmap and newgidmap were installed. The fix for me was this command:

podman system migrate

system info

Distributor ID: openSUSE
Description: openSUSE Tumbleweed
Release: 20190824

5.2.9-1-default #1 SMP Fri Aug 16 20:25:11 UTC 2019 (80c0ffe) x86_64 x86_64 x86_64 GNU/Linux

I had the same experience as @ankon on a fresh install on Arch Linux. I’d configured /etc/subuid and /etc/subgid appropriately, but it simply did not work until I ran podman system migrate. I had the same output for podman unshare cat /proc/self/uid_map, and after running the migrate command it magically started working.

I have podman working on my normal host, but today when I went to try it on a different host I saw the “not enough IDs available” error mentioned here. I must be forgetting a step that I ran on the other host, so if we could put together a pre-flight checklist that would be helpful. Off the top of my head here are the things I checked:

  • newuidmap/newgidmap exist on PATH (version 4.7)
  • runc exists on PATH (version 1.0.0-rc8)
  • slirp4netns exists on PATH (version 0.3.0)
  • conmon exists on PATH (version 1.14.4)
  • /proc/sys/user/max_user_namespaces is large enough (16k)
  • /etc/subuid and /etc/subgid have enough sub ids (64k, offset by a large number)
  • $XDG_RUNTIME_DIR exists
  • I ran podman system migrate to refresh the pause process

What am I forgetting? Is there something I can run to pinpoint the issue?
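For what it's worth, the checklist above can be scripted. A rough sketch (the binaries, paths, and thresholds are assumptions taken from the list; adjust per distro):

```shell
#!/bin/sh
# Rough rootless-Podman pre-flight check; prints a line per failed item.
for bin in newuidmap newgidmap runc slirp4netns conmon; do
  command -v "$bin" >/dev/null 2>&1 || echo "missing on PATH: $bin"
done
ns=$(cat /proc/sys/user/max_user_namespaces 2>/dev/null || echo 0)
[ "$ns" -ge 1000 ] || echo "max_user_namespaces too low: $ns"
grep -q "^$(id -un):" /etc/subuid 2>/dev/null || echo "no /etc/subuid entry"
grep -q "^$(id -un):" /etc/subgid 2>/dev/null || echo "no /etc/subgid entry"
[ -n "$XDG_RUNTIME_DIR" ] && [ -d "$XDG_RUNTIME_DIR" ] \
  || echo "XDG_RUNTIME_DIR unset or missing"
echo "check complete"
```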

Is it OK to do that at every package upgrade?

@mheon and @lsm5 will know.

You may want to consider adding a hint about the podman system migrate command for libpod runtime failures like these.

We definitely need to improve the error messages, especially for cases where we know how it can be resolved.

@juansuerogit you can use podman generate kube and podman play kube

$ getcap /usr/bin/newuidmap
/usr/bin/newuidmap = cap_setuid+ep

If this is not set then this will not work.

I got similar errors, even with correctly configured /etc/subuid and /etc/subgid. Turns out, there’s a known issue/bug when your home directory is on NFS. Try something like:

mkdir /tmp/foo && podman --root=/tmp/foo --runroot=/tmp/foo run alpine uname -a

@llchan Then it is probably setuid.

@juansuerogit I had a nice demo today of Podman running on a Mac using podman-remote. Should soon have packages available on Brew.

Okay, I’ve confirmed that the owner of newuidmap must be root for it to work. I’ll pass that along to our package deployment people to see if we can find a workaround.
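To see which variant an install has, two standard diagnostics can be combined; this is a sketch, and the exact output will vary by distro (the getcap line shown earlier in this thread is what a capability-based install looks like):

```shell
# newuidmap can gain CAP_SETUID either via a file capability or via the
# classic setuid bit on a root-owned binary; check both:
getcap /usr/bin/newuidmap           # capability install: ... cap_setuid+ep
stat -c '%U %A' /usr/bin/newuidmap  # setuid install: root -rwsr-xr-x
```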

@rhatdan somehow getcap returns nothing even on the host that’s working 🤔

Btw @juansuerogit sorry for sort of hijacking your thread, I thought it was related but maybe not 😃

Did a bit more snooping; it looks like the podman log level is not set early enough, so the newuidmap debug output is getting swallowed. I built a binary with the log level bumped up, and this is the error that causes the issue:

WARN[0000] error from newuidmap: newuidmap: open of uid_map failed: Permission denied