podman: "potentially insufficient UIDs or GIDs available in user namespace" when pulling

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Pulling any image fails with "potentially insufficient UIDs or GIDs available in user namespace". I have verified that /etc/subuid and /etc/subgid are set up correctly.

Steps to reproduce the issue:

  1. podman pull docker.io/debian:testing-slim

Describe the results you received:

Trying to pull docker.io/library/debian:testing-slim...
Getting image source signatures
Copying blob 3331450fb84f done  
Error: writing blob: adding layer with blob "sha256:3331450fb84fde695e565405a554d5cf213a33826da197b29aabde08be012f8b": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument

Additional information you deem important (e.g. issue happens only occasionally):

$ podman unshare cat /proc/self/uid_map
         0      60252          1
         1     200000      65536

$ podman unshare cat /proc/self/gid_map
         0      60252          1
         1     200000      65536

$ cat /etc/subuid
leorize:200000:65536

$ cat /etc/subgid
leorize:200000:65536

Output of podman version:

Version:      3.4.4
API Version:  3.4.4
Go Version:   go1.17.5
Built:        Fri Dec 24 03:50:42 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: app-containers/conmon-2.0.31
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.31, commit: v2.0.31'
  cpus: 12
  distribution:
    distribution: gentoo
    version: "2.8"
  eventLogger: journald
  hostname: leorize-lnx-workstation
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 60252
      size: 1
    - container_id: 1
      host_id: 200000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 60252
      size: 1
    - container_id: 1
      host_id: 200000
      size: 65536
  kernel: 5.15.11-gentoo
  linkmode: dynamic
  logDriver: journald
  memFree: 12275621888
  memTotal: 16773726208
  ociRuntime:
    name: crun
    package: app-containers/crun-1.3
    path: /usr/bin/crun
    version: |-
      crun version 1.3
      commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/user/60252/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: app-containers/slirp4netns-1.1.12
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 15032381440
  swapTotal: 15032381440
  uptime: 4m 8.45s
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/leorize/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: sys-fs/fuse-overlayfs-1.8
      Version: |-
        fusermount3 version: 3.10.5
        fuse-overlayfs: version 1.8
        FUSE library version 3.10.5
        using FUSE kernel interface version 7.31
  graphRoot: /home/leorize/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/60252/containers
  volumePath: /home/leorize/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.4
  Built: 1640339442
  BuiltTime: Fri Dec 24 03:50:42 2021
  GitCommit: ""
  GoVersion: go1.17.5
  OsArch: linux/amd64
  Version: 3.4.4

Package info (e.g. output of rpm -q podman or apt list podman):

app-containers/podman-3.4.4

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical machine

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 3
  • Comments: 34 (7 by maintainers)

Most upvoted comments

Adding the subuid and subgid ranges fixed it for me.

$ sudo usermod --add-subuids 10000-75535 USERNAME
$ sudo usermod --add-subgids 10000-75535 USERNAME

https://docs.podman.io/en/latest/markdown/podman.1.html?highlight=usermod#rootless-mode
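
For reference, with the 10000-75535 range those commands append 65536-ID entries to both files; a quick check (USERNAME is a placeholder):

$ grep USERNAME /etc/subuid /etc/subgid
/etc/subuid:USERNAME:10000:65536
/etc/subgid:USERNAME:10000:65536

If podman had already been run before the ranges were added, follow up with podman system migrate (see the comments below).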

Just in case someone made the same stupid mistake I did:

After setting up rootless Podman, don’t run podman system migrate as root, but with your normal user.

For some reason I assumed that command had to be run in privileged mode but it’s the other way around.
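
A minimal sketch of the intended order, with youruser as a stand-in for your own account:

$ whoami
youruser
$ podman system migrate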

Hi @alaviss, thanks for creating the issue.

This is new; I have never seen this error while pulling an image. Does it happen when you try to pull other images as well, e.g. podman pull alpine?

If you configured your range after the first error, then you must also run podman system migrate.

Could you try running podman system migrate and re-pulling?

If anyone needs it:

sudo usermod --add-subgids 10000-75535 USERNAME
sudo usermod --add-subuids 10000-75535 USERNAME
podman system migrate
podman pull …

I solved it by:

  • Adding entries to /etc/subuid and /etc/subgid, as sketched below (I was logging in via SSSD rather than as a local user, so the entries for my user were not automatically present)
  • Running podman system migrate

I was actually using ansible-builder, which I presume uses podman underneath.
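
If usermod balks at a non-local account, the entries can also be appended by hand. A sketch, with the username and range purely illustrative (the format is name:start:count):

$ echo "your.user:200000:65536" | sudo tee -a /etc/subuid
$ echo "your.user:200000:65536" | sudo tee -a /etc/subgid
$ podman system migrate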

For me it helped to remove the user's container storage and let podman rebuild it:

rm -rf $HOME/.local/share/containers/storage

After this podman pull worked as expected.
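
Be aware that this deletes all locally stored images, containers, and volumes, so everything will be re-pulled. podman system reset does roughly the same cleanup through the CLI:

$ podman system reset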

I meant to edit my comment shortly after posting but apparently didn't hit submit. I read this article and it makes a bit more sense: https://www.redhat.com/sysadmin/rootless-podman

As such I tried clearing out the lines for my user account in /etc/subuid and /etc/subgid and rerunning the command with a higher range: sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 $USER && podman system migrate

For some reason the command kept writing the original top end of the range; I am not sure why.

$ cat /etc/subuid
myusername:200000:65536

… but once I manually edited those files and set the range the way I wanted, it worked: myusername:200000:265536

$ docker pull <private-registry-image>
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Trying to pull <private-registry-image>:latest...
Getting image source signatures
Copying blob 159a7b5d1b30 skipped: already exists
Copying blob 96bd051f1942 skipped: already exists
Copying blob ba9ed3079fc2 done
Copying blob b17b2eb4d9bc done
Copying blob 8725baa6d367 done
Copying config 8f41cd279e done
Writing manifest to image destination
Storing signatures
8f41cd279ead2ebb350e83433eede62d204f63cc00a46743edc9db0456fd1d69

Thanks for the assistance.

I was also getting this error when trying to pull any image. In my case I could see there was a problem with the subuids:

$ podman unshare cat /proc/self/uid_map
         0       1002          1

After countless hours of looking for a solution (including purging all the podman-related packages, restarting the server, etc.) I ran across this suggestion:

Could you try running podman system migrate and re-pulling?

which finally fixed the problem for me. Now I am able to pull images, and:

$ podman unshare cat /proc/self/uid_map
         0       1002          1
         1     231072      65536

I am not sure how a command described as "migrate existing containers to a new podman version" helps when there are no existing containers (since there are no images), but I am happy it finally works.

For the record, this was on Ubuntu 20.04 and podman 3.4.2.

there is a good manual in arch wiki

Those instructions (and more specifically sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 $USER && podman system migrate) fixed the issue for me for some images (such as alpine) but not for some internally built images we have. I noticed that earlier in the process hello-world worked, but alpine would not until I ran those commands.

Is there something that would cause this issue to only apply to some images?

Edit: I noticed the error message mentions the home directory of a different user (_jenkinsadmin). This is a service account and I’m running the pull as myself.

Error: writing blob: adding layer with blob "sha256:7abe8aa47f070b5ddcca3be48f96d9eb0b04924dd61e7a1607ab46a831a9f4b4": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 164188:164188 for /home/_jenkinsadmin): Check /etc/subuid and /etc/subgid if configured locally: lchown /home/_jenkinsadmin: invalid argument

I attempted to run the usermod commands for _jenkinsadmin but it did not help.

Also, I get these errors when I run the usermod commands, though the commands did seem to do what they were intended to: they modified the /etc/subuid and /etc/subgid files and did solve the issue for some images like alpine.

[sss_cache] [confdb_get_domain_internal] (0x0010): Unknown domain [default]
[sss_cache] [confdb_get_domains] (0x0010): Error (2 [No such file or directory]) retrieving domain [default], skipping!

Interestingly, when I sudo to that user and try to run podman system migrate I get this error: Error: invalid configuration: the specified mapping 100000:65536 in "/etc/subuid" includes the user UID

If an image includes more than 65k UIDs it will not work. Basically, if you have an image that wants to place a UID of 70k on disk, the kernel will stop the user namespace from creating it, because the user namespace only has 65k UIDs available.
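
If that is the cause, the remedy is a wider subordinate range. A sketch with purely illustrative bounds (131072 IDs here), followed by the usual migrate:

$ sudo usermod --add-subuids 100000-231071 --add-subgids 100000-231071 $USER
$ podman system migrate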


@Sejoslaw Thanks for the commands above; they helped me solve the problem.

I have a ‘.’ in my username that was not included in the /etc/subuid and /etc/subgid entries, so my user appeared as

joepublic:165536:65536

and not

joe.public:165536:65536

Once corrected in both files, podman pull docker.io/debian:testing-slim worked like a charm.
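
A quick way to catch this class of mistake is to grep both files for your exact username; no output means the entries are missing or the name does not match:

$ grep "^$(whoami):" /etc/subuid /etc/subgid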

What is in your /etc/subuid and /etc/subgid files?

Do you get any error if you try running these commands as rootless?

$ touch foo; podman unshare chown 0:42 foo
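
(If the mappings are healthy, that chown succeeds silently; with a missing or stale GID mapping it fails with an lchown: invalid argument error like the one in the original report.)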

Do you have the newuidmap/newgidmap binaries installed? Could you share the output of the following commands?

  • getcap /usr/bin/newuidmap /usr/bin/newgidmap (typical output sketched after this list)
  • id
  • podman unshare cat /proc/self/uid_map after podman system migrate
  • podman unshare cat /proc/self/gid_map after podman system migrate
  • also try reinstalling shadow-utils and running podman system migrate after the reinstall.
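
For reference, on distributions that grant the helpers file capabilities, the getcap check typically prints something like this (the exact format varies with the libcap version; other distributions install the binaries setuid-root instead, in which case getcap prints nothing and ls -l shows an s bit):

$ getcap /usr/bin/newuidmap /usr/bin/newgidmap
/usr/bin/newuidmap cap_setuid=ep
/usr/bin/newgidmap cap_setgid=ep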

Yep it works with sudo, but I would like to use toolbox, so sudo is a no-go for this…

Do you have any tips for diagnosing sub*id setups? The error messages are very unhelpful for this.
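
Pulling the diagnostics from this thread together into one rootless-ID checklist (every command already appears above):

$ grep "^$(whoami):" /etc/subuid /etc/subgid    # entries exist and match your exact username
$ getcap /usr/bin/newuidmap /usr/bin/newgidmap  # the privileged helpers are in place
$ podman system migrate                         # pick up ranges changed after first use
$ podman unshare cat /proc/self/uid_map         # expect a second line covering 65536 IDs
$ podman unshare cat /proc/self/gid_map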