podman: Pull from public.ecr.aws is extremely slow and uses excessive disk space

Issue Description

When pulling the image public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:5.0 from the public AWS container registry, a huge amount of disk space is used to download and unpack the layers (>100GB).

Pulling the same image with Docker (tested: 20.10.25) is at least an order of magnitude faster and uses disk space on the same order of magnitude as the reported size of the container image (~7 GB used, image is 2.6 GB).

Steps to reproduce the issue

podman pull public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:5.0

Describe the results you received

After displaying a number of “Copying blob xxxxxx done” messages, the local disk fills up and the pull eventually aborts due to lack of disk space.

Trying to clean up the incomplete download fails both via podman rmi (no images shown) and via podman system prune (no effect). Only manually deleting all layers from ~/.local/share/containers/storage/vfs reclaims the space, but this leads to problems on a later pull (presumably due to cached state).
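For what it's worth, a supported way to reclaim the space without deleting layer directories by hand is `podman system reset`. A sketch, assuming it is acceptable to wipe ALL images, containers, networks, and volumes, not just the failed pull:

```shell
# Destructive: removes all of podman's storage, not just the incomplete pull.
podman system reset --force
# Confirm the space came back (default rootless storage path):
du -sh ~/.local/share/containers/storage 2>/dev/null || echo "storage directory removed"
```

Unlike deleting the vfs directory manually, this also clears podman's internal state, so a later pull starts clean.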

Describe the results you expected

The container image is downloaded and unpacked quickly and does not use excessive disk space.

podman info output

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 85.69
    systemPercent: 5.71
    userPercent: 8.6
  cpus: 2
  distribution:
    codename: trixie
    distribution: debian
    version: unknown
  eventLogger: journald
  hostname: debian
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.4.0-3-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 1246035968
  memTotal: 2062450688
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.7-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.7
      commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1022357504
  swapTotal: 1022357504
  uptime: 0h 2m 11.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/user/.local/share/containers/storage
  graphRootAllocated: 19947929600
  graphRootUsed: 1540501504
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.0
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.19.5
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.0
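Note the `graphDriverName: vfs` entry in the store section above: the vfs driver keeps a full copy of the root filesystem for every layer, which would explain both the slowness and the disk usage. A minimal storage.conf sketch that selects the overlay driver instead (assumption: the kernel supports rootless overlay, as recent kernels do; existing storage must be wiped or reset before the driver change takes effect):

```toml
# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
```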

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

Tests were run on a local machine and on an EC2 instance.

Additional information

I can’t test with an upstream or privileged version at the moment, but will try to do so later.

About this issue

  • State: closed
  • Created 10 months ago
  • Comments: 20 (11 by maintainers)

Most upvoted comments

It’s working very well for me so far.

I’ve found this guide to help with image export and import, but it’s not quite enough for a full storage migration. Is there any official documentation, blog post, etc. about migrating from vfs to overlay?

This might cover the most important things:

# save the filesystem of a container
podman export -o important-container.tar important_container
# save a volume
podman volume export -o important-volume.tar important_volume
# save all container images (podman save needs the image names listed;
# --multi-image-archive packs more than one image into a single archive)
podman save --multi-image-archive -o images.tar $(podman images --format '{{.Repository}}:{{.Tag}}')
# delete the old vfs storage
rm -rf ~/.local/share/containers
# check that podman is now using the overlay driver
podman info | grep graphDriverName
# re-import all container images
podman load -i images.tar
# re-import a saved container filesystem as a container image
podman import important-container.tar
# re-import a volume
podman volume import important_volume important-volume.tar
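After the re-import steps above, a quick sanity check might look like this (a sketch; assumes podman is on PATH and the storage has been recreated):

```shell
# The driver should now report "overlay" rather than "vfs"
podman info --format '{{.Store.GraphDriverName}}'
# Confirm the images and volumes made it back
podman images
podman volume ls
```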

Interesting. Maybe that was a 4.5 change, then.

Thank you! I think the best approach is to not configure a driver by default at all. Podman is smart enough to use overlayfs if possible and will otherwise fall back to less performant drivers.
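A sketch of what that looks like in practice (paths taken from the podman info output above; leaving `driver` unset lets podman auto-select overlay when the kernel supports it):

```toml
# ~/.config/containers/storage.conf
[storage]
# no "driver" line: podman auto-selects overlay where possible
runroot = "/run/user/1000/containers"
graphroot = "/home/user/.local/share/containers/storage"
```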