podman: podman images very slow

/kind bug

Description

The bash_completion script for podman run is very slow. Running with set -x, the longest operation appears to be listing all images:

[...]
+++ podman images
+++ awk 'NR>1 && $1 != "<none>" { print $1; print $1":"$2 }'
+++ grep --color=auto -v '<none>$'
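
To confirm the time is spent in podman itself rather than in the awk/grep filtering copied from the completion script, the pipeline can be split into two timed stages (the temporary file path below is arbitrary):

time podman images > /tmp/podman-images.txt
time (awk 'NR>1 && $1 != "<none>" { print $1; print $1":"$2 }' /tmp/podman-images.txt \
      | grep -v '<none>$' > /dev/null)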

I have ~460 images locally at this time (according to podman images | wc -l, so quite a lot of <none> entries and versions sharing underlying layers), and podman images takes ~20 seconds:

$ time podman images
[...]
real    0m20.679s
user    0m14.979s
sys     0m9.875s
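
Since many of those entries are <none>, it may also be worth checking how many are dangling images and, if they are not needed, pruning them (standard podman subcommands; the prune obviously changes local state):

# Count untagged (dangling) images
podman images --filter dangling=true --quiet | wc -l
# Remove them if they are not needed
podman image prune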

Note that this happens even when I have already supplied the image, e.g. podman run busybox -- <tab>, or when an image is not the correct/relevant completion, e.g. podman run -v <tab>.

Since the terminal just appears to freeze unless you know what is happening, this is pretty disruptive. I’m not sure whether 460 images should be considered excessive; if not, it might make sense to either skip completing against the podman images output, or to investigate whether podman images could be made much faster.
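
One possible stop-gap on the completion side would be to cache the image list briefly so repeated <tab> presses don’t each re-run podman images. A rough sketch; the function name is hypothetical, not the actual bash-completion internals:

# Hypothetical helper, not the real completion code: reuse a cached image
# list if it is less than a minute old, otherwise regenerate it.
__podman_cached_images() {
    local cache="${XDG_RUNTIME_DIR:-/tmp}/podman-completion-images"
    if [ ! -s "$cache" ] || [ -n "$(find "$cache" -mmin +1 2>/dev/null)" ]; then
        podman images \
            | awk 'NR>1 && $1 != "<none>" { print $1; print $1":"$2 }' \
            | grep -v '<none>$' > "$cache"
    fi
    cat "$cache"
}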

Steps to reproduce the issue:

  1. podman run <tab>

Describe the results you received: Waiting.

Describe the results you expected: Less waiting.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.9.1
RemoteAPI Version:  1
Go Version:         go1.14.2
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.14.2
  podmanVersion: 1.9.1
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.15-1.fc32.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 2.0.15, commit: 33da5ef83bf2abc7965fc37980a49d02fdb71826'
  cpus: 8
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: capelt
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.6.11-300.fc32.x86_64
  memFree: 7246368768
  memTotal: 33539756032
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc32.x86_64
    path: /bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: true
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.0.0-1.fc32.x86_64
    version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.2.0
  swapFree: 0
  swapTotal: 0
  uptime: 169h 46m 39.2s (Approximately 7.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/cape/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /bin/fuse-overlayfs
      Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.0.0
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/cape/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1076
  runRoot: /tmp/1000
  volumePath: /home/cape/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.1-1.fc32.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.): Physical

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 36 (23 by maintainers)

Most upvoted comments

Okay, I made a run from 127 images (what was left after the prune) up to 450. I used this script:

while podman build -t bloat-test . >/dev/null; do
  touch config/empty.yaml
  /usr/bin/time -f %e/%U/%S podman images | wc -l
done

With this Dockerfile:

FROM python:alpine
ARG KUBECTL_VERSION=1.16.5
ARG HELM_VERSION=3.2.1
ARG KUBEVAL_VERSION=0.15.0
ARG CONFTEST_VERSION=0.18.2
ARG YAMLLINT_VERSION=1.23.0
WORKDIR /work

RUN apk add bash coreutils && \
	a long string of wget | tar commands

COPY config /config
COPY lint.sh /

ENTRYPOINT ["/lint.sh"]

I can’t share the actual files we copy over, but it is ~50k of plain text in /config. The layers above are ~250M, but the only layers which are rebuilt are the bottom three, due to touching a file in /config. I don’t believe the content of the image makes any difference; it is included here for completeness. The size of ~/.local/share/containers did not increase noticeably from this, and is still at 32G.

From the first to the last measurement, wall time for podman images increased from 0.8s to 23.3s. Plotted, there is a slight but noticeable super-linearity to the trend (plot attached in the original comment).

It would probably be a good idea to reproduce this with a simpler image and make a longer run. This took ~1h40m on my laptop, so kicking it off from a clean slate on some machine and letting it run longer might show a clearer trend.
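
A minimal version of that simpler reproduction could look like the untested sketch below; the paths and tag names are placeholders, and the stamp file only exists to bust the build cache:

# Add one trivially different image per iteration and time podman images
# after each build.
mkdir -p /tmp/bloat && cd /tmp/bloat || exit 1
for i in $(seq 1 500); do
    echo "$i" > stamp                                   # invalidate the build cache
    printf 'FROM busybox\nCOPY stamp /stamp\n' > Containerfile
    podman build -q -t "bloat-$i" . > /dev/null
    /usr/bin/time -f "$i %e %U %S" podman images > /dev/null
done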

Here is a gist with the raw data.

I’m sorry but the problem persists here:

$ time podman images | wc -l
48

real	0m13,713s
user	0m5,608s
sys	0m9,363s

$ podman version
Version:      3.4.7
API Version:  3.4.7
Go Version:   go1.16.15
Built:        Thu Apr 21 15:14:26 2022
OS/Arch:      linux/amd64

And this is very uncomfortable with bash completion. Have I missed something?