podman: "potentially insufficient UIDs or GIDs available in user namespace" when pulling
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Pulling any image fails with `potentially insufficient UIDs or GIDs available in user namespace`. I have verified that subuid/subgid have been set up correctly.
Steps to reproduce the issue:
podman pull docker.io/debian:testing-slim
Describe the results you received:
Trying to pull docker.io/library/debian:testing-slim...
Getting image source signatures
Copying blob 3331450fb84f done
Error: writing blob: adding layer with blob "sha256:3331450fb84fde695e565405a554d5cf213a33826da197b29aabde08be012f8b": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument
Additional information you deem important (e.g. issue happens only occasionally):
$ podman unshare cat /proc/self/uid_map
0 60252 1
1 200000 65536
$ podman unshare cat /proc/self/gid_map
0 60252 1
1 200000 65536
$ cat /etc/subuid
leorize:200000:65536
$ cat /etc/subgid
leorize:200000:65536
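A quick way to reproduce the failing lchown outside of a pull is the `podman unshare chown` test suggested further down in the thread; with the mappings above (gid 42 falls inside the 1→200000 range), it should succeed:

```
# mimic the lchown that fails during the pull: 0:42 is root:shadow
# inside the namespace, and both IDs are covered by the maps above
touch /tmp/gshadow-test
podman unshare chown 0:42 /tmp/gshadow-test
```

If this also fails with `invalid argument`, the subordinate ranges are not actually being applied.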
Output of podman version:
Version: 3.4.4
API Version: 3.4.4
Go Version: go1.17.5
Built: Fri Dec 24 03:50:42 2021
OS/Arch: linux/amd64
Output of podman info --debug:
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: app-containers/conmon-2.0.31
path: /usr/libexec/podman/conmon
version: 'conmon version 2.0.31, commit: v2.0.31'
cpus: 12
distribution:
distribution: gentoo
version: "2.8"
eventLogger: journald
hostname: leorize-lnx-workstation
idMappings:
gidmap:
- container_id: 0
host_id: 60252
size: 1
- container_id: 1
host_id: 200000
size: 65536
uidmap:
- container_id: 0
host_id: 60252
size: 1
- container_id: 1
host_id: 200000
size: 65536
kernel: 5.15.11-gentoo
linkmode: dynamic
logDriver: journald
memFree: 12275621888
memTotal: 16773726208
ociRuntime:
name: crun
package: app-containers/crun-1.3
path: /usr/bin/crun
version: |-
crun version 1.3
commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
remoteSocket:
path: /run/user/60252/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: app-containers/slirp4netns-1.1.12
version: |-
slirp4netns version 1.1.12
commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
libslirp: 4.6.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 15032381440
swapTotal: 15032381440
uptime: 4m 8.45s
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries: {}
store:
configFile: /home/leorize/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: sys-fs/fuse-overlayfs-1.8
Version: |-
fusermount3 version: 3.10.5
fuse-overlayfs: version 1.8
FUSE library version 3.10.5
using FUSE kernel interface version 7.31
graphRoot: /home/leorize/.local/share/containers/storage
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 0
runRoot: /run/user/60252/containers
volumePath: /home/leorize/.local/share/containers/storage/volumes
version:
APIVersion: 3.4.4
Built: 1640339442
BuiltTime: Fri Dec 24 03:50:42 2021
GitCommit: ""
GoVersion: go1.17.5
OsArch: linux/amd64
Version: 3.4.4
Package info (e.g. output of rpm -q podman or apt list podman):
app-containers/podman-3.4.4
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical machine
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 3
- Comments: 34 (7 by maintainers)
The subuid and subgid setup fixed it for me.
https://docs.podman.io/en/latest/markdown/podman.1.html?highlight=usermod#rootless-mode
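For reference, a minimal setup along the lines of those docs (the range is only an example; pick one that does not overlap other entries or your own UID):

```
# allocate 65536 subordinate UIDs and GIDs starting at 100000
sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $USER
# make podman's per-user storage pick up the new mapping
podman system migrate
```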
Just in case someone made the same stupid mistake I did:
After setting up rootless Podman, don't run `podman system migrate` as root; run it as your normal user. For some reason I assumed that command had to be run in privileged mode, but it's the other way around.
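In other words:

```
# wrong: operates on root's containers and storage, not yours
sudo podman system migrate
# right: run it as the user that will run the containers
podman system migrate
```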
Hi @alaviss, thanks for creating the issue.
This is new; I have never seen this error while pulling an image. Does it happen if you try to pull other images as well, like `podman pull alpine`? If you configured your range after the first error, then you must also run `podman system migrate`. Could you try running `podman system migrate` and re-pulling?
If anyone needs it:
sudo usermod --add-subgids 10000-75535 USERNAME
sudo usermod --add-subuids 10000-75535 USERNAME
podman system migrate
podman pull …
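After the migrate, the new mapping can be verified the same way as in the original report (the host IDs will differ per system):

```
podman unshare cat /proc/self/uid_map
podman unshare cat /proc/self/gid_map
```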
I solved it by:
- adding the entries to `/etc/subuid` and `/etc/subgid` (I was logging in via SSSD and not as a local user, so the entries for my user were not automatically present; see the sketch below)
- running `podman system migrate`
I was actually using ansible-builder, which I presume is using podman underneath.
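Since `usermod` generally refuses to modify accounts that are not in `/etc/passwd`, for SSSD or other non-local users the entries can be appended by hand; a sketch, assuming the login name `jane.doe` and an unused range:

```
# subordinate ID entries are <name>:<start>:<count>
echo "jane.doe:200000:65536" | sudo tee -a /etc/subuid
echo "jane.doe:200000:65536" | sudo tee -a /etc/subgid
podman system migrate
```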
For me it helped to remove the user's container storage and let `podman` rebuild it. After this, `podman pull` worked as expected.
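A sketch of that approach; note it deletes all of the user's images, containers, and volumes (the path is the default rootless location shown in the report above):

```
# wipe the per-user storage so podman recreates it with the current mappings
rm -rf ~/.local/share/containers/storage
# roughly equivalent: podman system reset
```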
Meant to edit my comment shortly after posting, but apparently didn't hit submit. Read this article and it makes a bit more sense: https://www.redhat.com/sysadmin/rootless-podman
As such I tried clearing out the lines for my user account in `/etc/subuid` and `/etc/subgid` and rerunning the command with a higher range:
sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 $USER && podman system migrate
For some reason the command continued to put the original top end of the range; I am not sure why, but once I manually edited those files and set them the right way it worked:
myusername:200000:265536
Thanks for the assistance.
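Worth noting when editing these files by hand: the fields are `<name>:<start>:<count>`, not a start/end pair, so a 65536-ID block beginning at 200000 would normally be written as:

```
# /etc/subuid and /etc/subgid format: <name>:<start>:<count>
myusername:200000:65536
```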
I was also getting this error when trying to pull any image. In my case I could see there was some problem with the subuids:
After countless hours looking for a solution (including purging all the podman-related packages, restarting the server, etc.) I ran across `podman system migrate`, which finally fixed the problem for me. Now I am able to pull images.
I am not sure how a command designed to "Migrate existing containers to a new podman version" helps in a situation where there are no existing containers (since there are no images), but I am happy it finally works.
For the record, this was on Ubuntu 20.04 and podman 3.4.2.
Those instructions (and more specifically `sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $USER && podman system migrate`) fixed the issue for me for some images (such as `alpine`) but not for some internally built images we have. I noticed that earlier in the process `hello-world` worked but alpine would not until I ran those commands. Is there something that would cause this issue to only apply to some images?
Edit: I noticed the error message mentions the home directory of a different user (_jenkinsadmin). This is a service account and I’m running the pull as myself.
I attempted to run the `usermod` commands for _jenkinsadmin but it did not help.
Also, I get these errors when I run the `usermod` commands, though the commands did seem to do what they were intended to: they modified the `/etc/subuid` and `/etc/subgid` files and did solve the issue for some images like alpine.
Interestingly, when I sudo to that user and try to run the podman migrate command I get this error:
`Error: invalid configuration: the specified mapping 100000:65536 in "/etc/subuid" includes the user UID`
If an image includes more than 65k UIDs it will not work. Basically, if you have an image that wants to place a UID of 70k on disk, the kernel will stop the user namespace from creating it, because the user namespace only has 65k UIDs available.
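If an image really does need more IDs, the fix is to grant a larger or additional subordinate range and migrate again; a sketch with an example range:

```
# add another block of ~100k IDs (example range; make sure it does not
# overlap anything already present in /etc/subuid or /etc/subgid)
sudo usermod --add-subuids 300000-400000 --add-subgids 300000-400000 $USER
podman system migrate
```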
There is a good manual in the Arch wiki.
@Sejoslaw Thanks for the commands, they helped me solve the problem.
I have a '.' in my username which was not included in the `/etc/subuid` or `/etc/subgid` files, so it appeared as `joepublic:165536:65536` and not `joe.public:165536:65536`. Once corrected in both files, `podman pull docker.io/debian:testing-slim` worked like a charm.
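A quick way to catch this kind of mismatch is to look the entries up with your exact login name; no output means the entry is missing or misspelled:

```
# print the subordinate ID entries for the exact current login name
grep "^$(id -un):" /etc/subuid /etc/subgid
```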
/etc/subuidand /etc/subgid` files?do you get any error if you try running as rootless these commands
$ touch foo; podman unshare chown 0:42 foo?Do you have
newuidmap/newgidmapbinaries installed ? Could you share output of following commands ?getcap /usr/bin/newuidmap /usr/bin/newgidmapidpodman unshare cat /proc/self/uid_mapafterpodman system migratepodman unshare cat /proc/self/gid_mapafterpodman system migrateshadow-utilsand runpodman system migrateafter reinstall.Yep it works with
Yep, it works with `sudo`, but I would like to use `toolbox`, so `sudo` is a no-go for this… Do you have any tips for diagnosing `sub*id` setups? The error messages are very unhelpful for this.
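For anyone diagnosing this later, the suggestions in this thread boil down to a short checklist (paths assume the usual shadow-utils locations):

```
id -un                                         # exact login name
grep "^$(id -un):" /etc/subuid /etc/subgid     # entries present for that name?
getcap /usr/bin/newuidmap /usr/bin/newgidmap   # capabilities (or check for the setuid bit with ls -l)
podman unshare cat /proc/self/uid_map          # are the ranges actually applied?
podman unshare cat /proc/self/gid_map
touch /tmp/foo && podman unshare chown 0:42 /tmp/foo   # the failing operation, in isolation
```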