podman: lchown error on podman pull

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description: After logging in to our locally hosted repository and attempting to podman pull our latest image, I received a couple of errors. One, related to transport, was fixed by adding the docker:// prefix to the call; the error below is still present (contact me for the URL to the image):

ERRO[0011] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: lchown /var/www/drupal/web/config/active: invalid argument 
Failed
(0x183b040,0xc00052b600)
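An "invalid argument" from lchown during a rootless pull usually means the layer contains files owned by a UID/GID that has no mapping in the user namespace. As a suggestion (not part of the original report; podman unshare may not exist on the 1.2.0-dev build shown below), the mappings can be inspected like this:

# Show the subordinate UID/GID ranges assigned to the current user.
grep "$(whoami)" /etc/subuid /etc/subgid

# Show the UID map inside podman's user namespace. A single line mapping
# only your own UID means the sub*id ranges were not picked up, and
# lchown to any other owner will fail.
podman unshare cat /proc/self/uid_map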

Steps to reproduce the issue:

  1. podman login -p {SECRET KEY} -u unused {IMAGE REPO}

  2. podman pull docker://{IMAGE REPO}

  3. Error

Describe the results you received: Error instead of an image

Describe the results you expected: The image to be pulled and usable

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.2.0-dev

Output of podman info --debug:

  MemFree: 511528960
  MemTotal: 5195935744
  OCIRuntime:
    package: Unknown
    path: /usr/local/sbin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: f79e211b1d5763d25fb8debda70a764ca86a0f23
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: penguin
  kernel: 4.19.4-02480-gd44d301822f0
  os: linux
  rootless: true
  uptime: 136h 10m 42.4s (Approximately 5.67 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - {IMAGE REPO}
store:
  ConfigFile: /home/ldary/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/ldary/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/ldary/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.): This is a Debian sandbox on a Pixelbook. We found that adding the docker:// transport removed one of the errors that was displayed when running without it. @vbatts also had me run findmnt -T /home/ldary/.local/share/containers/storage, with this output:

/      /dev/vdb[/lxd/storage-pools/default/containers/penguin/rootfs] btrfs  rw,relatime,discard,space_cache,user_subvol_rm_allowed,subvolid=266,subvol=/lxd/storage-pools/default/containers/penguin/rootfs

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 39 (22 by maintainers)

Most upvoted comments

I had this same issue (on ArchLinux). I think the cause was that I had run podman before creating /etc/sub{u,g}id. After killing all running podman-related processes and a (probably over-zealous) sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}, the issue disappeared.

Full procedure:

# Create the subordinate ID files if they do not exist yet.
sudo touch /etc/sub{u,g}id
# Assign a subordinate UID/GID range to the current user.
sudo usermod --add-subuids 10000-75535 $(whoami)
sudo usermod --add-subgids 10000-75535 $(whoami)
# Remove the stale pause PID so podman sets up a fresh user namespace
# that picks up the new mappings.
rm /run/user/$(id -u)/libpod/pause.pid

Running podman system migrate instead of deleting the PID file would probably be more elegant.
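A minimal sketch of that alternative, assuming a podman version that ships the system migrate subcommand:

# Stops the rootless pause process, so the next podman command creates
# a fresh user namespace with the current /etc/sub{u,g}id mappings.
podman system migrate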

I’m on openSUSE Leap 15.1 and can confirm @jcaesar’s steps are effective. To be more specific, I found that killing the existing podman processes (the pause process?) and running rm /run/user/$UID/libpod/pause.pid is enough for me. I guess it forces podman to reload /etc/sub{u,g}id.

The ArchLinux steps quoted above also work for me on Ubuntu 18.04.

@vsoch yes please!

That is an unrelated error. It should already be fixed upstream. We are cutting a 3.3.2 release either today or Monday that includes the fix.

Boum! That did the trick 😃

$ sudo apt-get install -y uidmap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  snapd-login-service
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  uidmap
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 64.8 kB of archives.
After this operation, 336 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 uidmap amd64 1:4.2-3.1ubuntu5.3 [64.8 kB]
Fetched 64.8 kB in 0s (204 kB/s)
Selecting previously unselected package uidmap.
(Reading database ... 455142 files and directories currently installed.)
Preparing to unpack .../uidmap_1%3a4.2-3.1ubuntu5.3_amd64.deb ...
Unpacking uidmap (1:4.2-3.1ubuntu5.3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up uidmap (1:4.2-3.1ubuntu5.3) ...
$ podman pull docker://fedora:latest
Trying to pull docker://fedora:latest...Getting image source signatures
Copying blob 01eb078129a0 done
Copying config d09302f77c done
Writing manifest to image destination
Storing signatures
d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b
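To double-check that the rootless prerequisites are now in place, something like the following should work (a suggestion, not part of the original comment; the /usr/bin paths assume a Debian/Ubuntu layout):

# The uidmap package provides the setuid helpers rootless podman needs.
command -v newuidmap newgidmap
# On Debian/Ubuntu these are typically setuid root so they can write
# the ID mappings for the user namespace.
ls -l /usr/bin/newuidmap /usr/bin/newgidmap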

Should we add this here? (This is in install.md.)

[screenshot of the relevant install.md section]