podman: error creating libpod runtime: there might not be enough IDs available in the namespace
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I have RHEL servers in the 7.x range (I think they are 7.4 or 7.5) that we currently run containers on with docker-compose. I went to a Red Hat conference and learned about Podman, so I want to use Podman in production to help us get away from the big fat daemons and to stop running containers as root.
To that end I have created a CentOS 7.5 VM on my laptop and installed Podman, but I cannot seem to get the uidmap functionality to work.
I'm hoping that once we solve this uidmap bug I'm encountering, we can take the same setup and run it on the RHEL 7.4 servers.
On RHEL 7.4 we can only operate as a regular user, so we need to figure out rootless Podman.
I understand that some changes to the OS are needed and that we need administrative control to make them, such as configuring subuid and subgid and the kernel parameters that enable user namespaces; we can do that. But on a day-to-day basis, including running the production containers, we have to be able to run rootless Podman and back up and recover the files as the same regular user (not root).
In addition, I'm not sure how to map an existing user in the container image, for example mongod (the MongoDB user), to the regular server user, but that's maybe getting ahead of ourselves.
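For what it's worth, a hedged sketch of one way that mapping is often expressed with rootless Podman's --uidmap/--gidmap. The mongod UID/GID of 999 and the data path are assumptions (check the image's /etc/passwd); in rootless mode intermediate ID 0 is the invoking user, so mapping the container's 999 onto 0 points it at the regular server user:

# assumption: mongod is UID/GID 999 inside the image; 0:1:999 and 1000:1000:64536
# re-map the remaining container IDs onto the /etc/subuid range so nothing overlaps
podman run -d \
  --uidmap 0:1:999 --uidmap 999:0:1 --uidmap 1000:1000:64536 \
  --gidmap 0:1:999 --gidmap 999:0:1 --gidmap 1000:1000:64536 \
  -v /home/meta/mongo-data:/data/db \
  mongo

Files mongod writes under /data/db should then show up on the host owned by the regular user rather than a high subordinate UID.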
Steps to reproduce the issue:
- clean Centos 7.5 VM
- logged into a regular user called "meta" (not root)
- sudo grubby --args="namespace.unpriv_enable=1 user_namespace.enable=1" --update-kernel="/boot/vmlinuz-3.10.0-957.5.1.el7.x86_64"
- sudo yum -y update && sudo yum install -y podman
- sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
- sudo echo 'meta:100000:65536' >> /etc/subuid
- sudo echo 'meta:100000:65536' >> /etc/subgid
- sudo reboot
- podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000
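A hedged sketch of commands to confirm the steps above actually took effect before running the container (note that sudo echo ... >> file performs the redirection as the unprivileged shell, so the appends can fail silently; sudo sh -c '...' or sudo tee -a writes the target file as root):

cat /proc/cmdline                        # should contain namespace.unpriv_enable=1 user_namespace.enable=1
cat /proc/sys/user/max_user_namespaces   # should be 15076 after the sysctl.conf change and reboot
grep meta /etc/subuid /etc/subgid        # should show meta:100000:65536 in both files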
Describe the results you received:
Error: error creating libpod runtime: there might not be enough IDs available in the namespace (requested 100000:100000 for /home/meta/.local/share/containers/storage/vfs): chown /home/meta/.local/share/containers/storage/vfs: invalid argument
Describe the results you expected:
I expected a running container that I could exec into and create files in as the container's root user.
Upon exiting the container, I expect those files to be owned by user "meta" on the host.
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Version: 1.3.2
RemoteAPI Version: 1
Go Version: go1.10.3
OS/Arch: linux/amd64
Output of podman info --debug:
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding subids
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.3
  podman version: 1.3.2
host:
  BuildahVersion: 1.8.2
  Conmon:
    package: podman-1.3.2-1.git14fdcd0.el7.centos.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: e0b5a754190a3c24175944ff64fa7add6c8b0431-dirty'
  Distribution:
    distribution: '"centos"'
    version: "7"
  MemFree: 410226688
  MemTotal: 3973316608
  OCIRuntime:
    package: runc-1.0.0-59.dev.git2abd837.el7.centos.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: min0-kube0
  kernel: 3.10.0-957.21.3.el7.x86_64
  os: linux
  rootless: true
  uptime: 2h 25m 41.8s (Approximately 0.08 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /home/meta/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/meta/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /tmp/1000
  VolumePath: /home/meta/.local/share/containers/storage/volumes
Additional environment details (AWS, VirtualBox, physical, etc.):
Centos 7.5 VM
sudo yum -y update && sudo yum install -y podman
sudo echo 'user.max_user_namespaces=15076' >> /etc/sysctl.conf
sudo echo 'meta:100000:65536' >> /etc/subuid
sudo echo 'meta:100000:65536' >> /etc/subgid
sudo reboot
podman run -dt --uidmap 0:100000:500 ubuntu sleep 1000
About this issue
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 66 (43 by maintainers)
Commits related to this issue
- troubleshooting.md: added #18 not enough ids https://github.com/containers/libpod/issues/3421#issuecomment-544455837 — committed to nitrocode/libpod by nitrocode 5 years ago
For the record and future googlers:
I had the same issue (there might not be enough IDs available in the namespace (requested 0:42 for /etc/shadow): lchown /etc/shadow: invalid argument). In my case I had /etc/subuid configured for my user (echo ${LOGNAME}:100000:65536 > /etc/subuid), but had failed to do the same for /etc/subgid. A warning pointing to /etc/subgid was shown on podman build. The problem persisted after that though, and doing podman unshare cat /proc/self/uid_map showed a mapping I couldn't interpret. Unfortunately I couldn't find what it should show, so in a moment of desperation I also executed podman system migrate. That didn't say anything, but afterwards things started to work!
So, if you can: make podman system migrate say something.
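For reference, a hedged sketch of what podman unshare cat /proc/self/uid_map typically prints once /etc/subuid and /etc/subgid are set up; the host UID of 1000 is an assumption for the first regular user, and a single-line mapping is what appears when the subordinate ranges are missing or not yet picked up:

# expected shape of the mapping with user:100000:65536 in /etc/subuid
# (columns: ID inside the namespace, ID on the host, length of the range)
podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536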
I had a similar problem on openSUSE Tumbleweed. I had run the following commands:
But I was still getting errors like this:
And this:
I could see that newuidmap and newgidmap are installed. The fix for me was this command:
podman system migrate

I had the same experience as @ankon on a fresh install on Arch Linux. I'd configured /etc/subuid and /etc/subgid appropriately, but it simply did not work until I ran podman system migrate. I had the same output for podman unshare cat /proc/self/uid_map, and after running the migrate command it magically started working.

I have podman working on my normal host, but today when I went to try it on a different host I saw the "not enough IDs available" error mentioned here. I must be forgetting a step that I ran on the other host, so if we could put together a pre-flight checklist that would be helpful. Off the top of my head, here are the things I checked:
- podman system migrate to refresh the pause process
What am I forgetting? Is there something I can run to pinpoint the issue?
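A hedged sketch of the runtime-side half of such a checklist, assembled only from checks mentioned in this thread (the subuid/subgid and sysctl checks from the issue description above are the other half; paths assume /usr/bin):

ls -l /usr/bin/newuidmap /usr/bin/newgidmap    # helpers installed, root-owned and setuid, or...
getcap /usr/bin/newuidmap /usr/bin/newgidmap   # ...carrying cap_setuid+ep / cap_setgid+ep
podman unshare cat /proc/self/uid_map          # should show more than the single 0-to-user line
podman system migrate                          # restart the pause process after changing subuid/subgid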
@mheon and @lsm5 will know.
We definitely need to improve the error messages, especially for cases where we know how it can be resolved.
@juansuerogit you can use podman generate kube and podman play kube

getcap /usr/bin/newuidmap
/usr/bin/newuidmap = cap_setuid+ep
If this is not set then this will not work.
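A hedged sketch of how that capability (or, as noted further down in this thread, the setuid bit) can be restored on the shadow-utils helpers if it is missing; paths assume /usr/bin and the package name varies by distribution:

# either grant file capabilities...
sudo setcap cap_setuid+ep /usr/bin/newuidmap
sudo setcap cap_setgid+ep /usr/bin/newgidmap
# ...or fall back to root-owned setuid binaries, the packaging default on many distributions
sudo chown root:root /usr/bin/newuidmap /usr/bin/newgidmap
sudo chmod u+s /usr/bin/newuidmap /usr/bin/newgidmap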
I got similar errors, even with correctly configured /etc/subuid and /etc/subgid. Turns out, there's a known issue/bug when your home directory is on NFS. Try something like:
mkdir /tmp/foo && podman --root=/tmp/foo --runroot=/tmp/foo run alpine uname -a

@llchan Then it is probably setuid.
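Following up on the NFS workaround above, a hedged sketch of how those alternate locations could be made permanent instead of passing --root/--runroot on every invocation; the file path and keys follow containers-storage.conf, and the /tmp paths are only examples (they do not survive a reboot):

# ~/.config/containers/storage.conf
[storage]
driver = "vfs"
runroot = "/tmp/foo"
graphroot = "/tmp/foo"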
@juansuerogit I had a nice demo today of podman running on a Mac using podman-remote. We should soon have packages available on Brew.
Okay, I’ve confirmed that the owner of newuidmap must be root for it to work. I’ll pass that along to our package deployment people to see if we can find a workaround.
@rhatdan somehow getcap returns nothing even on the host that’s working 🤔
Btw @juansuerogit sorry for sort of hijacking your thread, I thought it was related but maybe not 😃
Did a bit more snooping, looks like the podman log level is not set early enough, so the newuidmap debug output is getting swallowed. I built a binary with that log level bumped up and this is the error that causes the issue: