podman: Failed to remove image, image is in use by container

/kind bug

Description

Failed to remove an image with the error “image is in use by a container”, but I have 0 containers running.

$ podman container list --all
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES
$ podman images
REPOSITORY                  TAG     IMAGE ID      CREATED        SIZE
<none>                      <none>  92038d6ed63a  9 minutes ago  1.25 GB
<none>                      <none>  2753208588ef  7 hours ago    1.25 GB
docker.io/library/postgres  latest  817f2d3d51ec  7 days ago     322 MB
<none>                      <none>  522d62996757  2 weeks ago    1.19 GB
docker.io/library/rust      latest  4050c19325e5  3 weeks ago    1.19 GB
docker.io/library/redis     latest  84c5f6e03bf0  3 weeks ago    108 MB
$ podman image rm 92038d6ed63a
Error: 1 error occurred:
	* image is in use by a container
$ podman image rm 2753208588ef
Error: 1 error occurred:
	* image is in use by a container
$ podman image rm 522d62996757
Error: 1 error occurred:
	* image is in use by a container

Steps to reproduce the issue:

Sorry, I don’t have steps to reproduce.

Describe the results you received:

Error: 1 error occurred:
	* image is in use by a container

Describe the results you expected:

Image successfully removed.

Additional information you deem important (e.g. issue happens only occasionally):

I don’t know what caused this to happen. My guess is that it’s because I canceled an image build (Ctrl+C) and then ran podman system prune.
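
(A speculative reproducer based on that guess; untested, and the tag and image ID below are placeholders:)

$ podman build -t example .             # press Ctrl+C while a build step is running
$ podman system prune                   # cleans up Podman containers only
$ podman image rm <dangling-image-id>   # fails: image is in use by a container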

Output of podman version:

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 07:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 8
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: █████
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.4.0-48-generic
  linkmode: dynamic
  memFree: 258670592
  memTotal: 16413179904
  ociRuntime:
    name: runc
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.4
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
  swapFree: 4291031040
  swapTotal: 4294963200
  uptime: 4h 7m 7.97s (Approximately 0.17 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/█████/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/█████/.local/share/containers/storage
  graphStatus: {}
  imageStore:
    number: 11
  runRoot: /run/user/1000/containers
  volumePath: /home/█████/.local/share/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 0
  BuiltTime: Thu Jan  1 07:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 2.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 2.1.1~1 amd64 [installed]
podman/unknown 2.1.1~1 arm64
podman/unknown 2.1.1~1 armhf
podman/unknown 2.1.1~1 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical machine running Ubuntu 20.04. Podman binary from openSUSE Kubic.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 5
  • Comments: 24 (13 by maintainers)

Most upvoted comments

Thanks for reaching out, @kafji!

Can you do a podman ps --all --storage? Maybe there’s a container created by Buildah?
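
(For illustration, the kind of row that command can reveal; the ID and name below are invented:)

$ podman ps --all --storage
CONTAINER ID  IMAGE         COMMAND  CREATED        STATUS   PORTS  NAMES
1f2e3d4c5b6a  92038d6ed63a  buildah  9 minutes ago  storage         92038d6ed63a-working-container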

I also used buildah rm --all to clean up all the leftover storage containers listed in podman ps --all --storage.
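
(Summarizing the sequence that worked here:)

$ podman ps --all --storage      # identify the leftover Buildah working containers
$ buildah rm --all               # remove them all
$ podman image rm 92038d6ed63a   # image removal now succeeds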

I have the same issue, but don’t have buildah installed. Is there no podman build command to remove intermediate containers left over from a failed build (assuming --force-rm=false)? Perhaps there should be?

Please open a new issue.

The rm --all and prune --all commands will ONLY remove Podman containers, not Buildah containers. You can remove the Buildah container if you specify the container ID directly.
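
(For example, with a placeholder ID taken from the storage listing:)

$ podman rm <buildah-container-id>   # ID as shown by podman ps --all --storage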

I have the same issue, but don’t have buildah installed. Is there no podman build command to remove intermediate containers left over from a failed build (assuming --force-rm=false)? Perhaps there should be?

I wonder why Buildah isn’t cleaning up intermediate containers in a failed build. @nalind @TomSweeneyRedHat do you know?

This bug seems to still exist in 3.3.0?

Interrupting a podman build results in leftovers that can only be removed individually via podman container rm --force. These should be removed by podman container prune, or possibly shouldn’t exist in the first place.
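
(In other words, the manual cleanup looks like this, with a placeholder ID:)

$ podman ps --all --storage                   # lists the leftover build containers
$ podman container rm --force <leftover-id>   # repeat for each ID listed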

Sorry to be a necro, but… podman rmi -f says it has deleted the image, but all it has actually managed to do is suppress the false “container in use” message. Listing containers still shows the dangling container. I ran podman ps --all --storage and saw the dangling storage containers that were created a few days ago. I have to run buildah rm --all before the podman rmi -f command succeeds.

So my question is, do we have a comparable podman command that does the same thing as buildah rm --all?
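
(No single built-in equivalent existed at the time; a hedged, untested workaround, assuming --format also applies to the rows that --storage adds:)

$ podman ps --all --storage --format '{{.ID}}' | xargs -r podman rm --force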

Is it even possible to safely prune these containers? We don’t really know anything about them so we can’t easily tell if they are in use or not…

On Wed, Jan 6, 2021 at 09:05 Daniel J Walsh notifications@github.com wrote:

podman rm will remove these containers, but we don’t have a flag for doing it in bulk. podman rm --external or podman container prune --external would be the suggested commands to do this. I prefer the second.
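
(As written, these flags were proposals, not shipped commands; the preferred one would be used like this:)

$ podman container prune --external   # proposed at the time, not an existing flag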

Thanks for coming back so quickly!

Should I use buildah to remove those?

Yes, you can do that. Doing a podman rmi --force will do that as well.
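
(For example, against the dangling images from the original report:)

$ podman rmi --force 92038d6ed63a 2753208588ef 522d62996757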

I am going to close the issue but feel free to continue the conversation.