podman: podman run says the container name is already in use but podman ps --all does not show any container with that name

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I have this bug after a power outage.

podman run --name nextcloud fedora
error creating container storage: the container name "nextcloud" is already in use by "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1". You have to remove that container to be able to reuse that name.: that name is already in use

podman ps --all | grep nextcloud has no output

Steps to reproduce the issue:

I don't know how to reproduce it; it appeared after a power outage and the resulting abrupt shutdown.

Output of podman version:

host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374931456
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 2.91s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage

Output of podman info --debug:

debug:
  compiler: gc
  git commit: '"49780a1cf10d572edc4e1ea3b8a8429ce391d47d"'
  go version: go1.11.4
  podman version: 1.0.0
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374919168
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 32.11s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage

Additional environment details (AWS, VirtualBox, physical, etc.): Bare metal f29

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 5
  • Comments: 31 (14 by maintainers)

Most upvoted comments

Try a ‘podman rm --storage’.

On Wed, Jul 10, 2019, 07:48 Ed Santiago notifications@github.com wrote:

I saw this also yesterday; podman-1.4.4-3.fc30 as nonroot; but cannot reproduce it. Virt is still up, with one “container name already in use” stuck. Can provide login access on request.


I have the same issue on Fedora 31 with podman-1.4.4-1.fc30.x86_64. There are no references to this container in containers.json, so I'm not sure how to clean it up manually.

That did it. Since this seems to be a common problem, should the podman-run message perhaps be amended to include this hint?

Error: error creating container storage: the container name "foo" is already in use by "00fbb9ad28dd0cb32811e87fe789cbed612206a97395420365e3238e9afd2e1e". You have to remove that container to be able to reuse that name.: that name is already in use (hint: if "podman rm foo" doesn't clear things up, try "podman rm --storage foo")

Oh, you’re on 1.0 - damn. We added that to rm -f in 1.1

If you have Buildah installed, it should be able to remove the container in the meantime - it operates at a lower level than us, and as such can see these containers.
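Following that suggestion, a hedged sketch of the Buildah route (Buildah talks directly to containers/storage, so it lists containers that podman's database has lost track of; the container name below is just the one from this report):

```shell
# List ALL containers in the storage, including ones podman cannot see.
sudo buildah containers --all

# Remove the orphan by the name (or ID) shown in that listing.
sudo buildah rm nextcloud
```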

@Zokormazo I’m no podman dev, but maybe try adding sudo to your command: sudo podman ps --all

I had to sudo podman run -p 5432:5432 ... because podman 1.0 needed elevated permissions for port bindings (fixed in v1.1). I was confused afterwards because podman ps --all output was empty, but running sudo podman ps --all did the trick.

@BBBosp if you have removed all containers, you could remove the bolt_state.db

rm /home/dwalsh/.local/share/containers/storage/libpod/bolt_state.db

This will remove the database but leave your images. The next run of podman will recreate the database.
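The path above is for a rootless user; for root (system-wide) podman the database lives under /var/lib/containers/storage instead. A hedged last-resort sketch, assuming every container you care about has already been removed:

```shell
# Rootless: state database under the user's home.
rm ~/.local/share/containers/storage/libpod/bolt_state.db

# Root: the system-wide equivalent.
sudo rm /var/lib/containers/storage/libpod/bolt_state.db

# podman recreates an empty database on its next invocation;
# images are kept, container records are gone.
podman ps --all
```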

podman rm --storage <id> doesn’t seem to work for me with the zfs driver though:

# podman ps -a
CONTAINER ID  IMAGE                            COMMAND  CREATED         STATUS                       PORTS  NAMES
6b265ecd8ed3  docker.io/library/alpine:latest  sh       21 minutes ago  Exited (0) 21 minutes ago           suspicious_banzai
45de4c6bf843  docker.io/library/alpine:latest  sh       27 minutes ago  Exited (130) 24 minutes ago         optimistic_cerf
96aaa668db27  docker.io/library/alpine:latest  sh       39 minutes ago  Exited (0) 37 minutes ago           magical_hopper
c95e5272d83f  docker.io/library/alpine:latest  sh       41 minutes ago  Exited (0) 41 minutes ago           vigorous_khorana
9645695533c7  docker.io/library/alpine:latest  sh       42 minutes ago  Exited (130) 41 minutes ago         crazy_mccarthy
15684becc00a  docker.io/library/alpine:latest  bash     42 minutes ago  Created                             dreamy_kalam
# podman run --rm --name=prometheus --net=bridge --network container-net -v "/var/container-data/prometheus/data:/prometheus" -v "/var/container-data/prometheus/conf/prometheus.yml:/etc/prometheus/prometheus.yml" -p "10.10.0.1:9090:9090" prom/prometheus
Error: error creating container storage: the container name "prometheus" is already in use by "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5". You have to remove that container to be able to reuse that name.: that name is already in use
# podman rm -f prometheus
Error: no container with name or ID prometheus found: no such container

# podman rm -f dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: no container with name or ID dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5 found: no such container

# podman rm --storage dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: error removing storage for container "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5": exit status 1: "/usr/sbin/zfs zfs destroy -r tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af" => cannot open 'tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af': dataset does not exist

The only issue with recommending it unconditionally is that it will quite happily destroy containers from Buildah/CRI-O as well.

The overall recommendation works something like this: Check CRI-O and Buildah to see if it’s a container running there. If it is, we recommend deleting them through crictl and buildah. If it’s not there, it’s probably an orphan container - hit it with --storage.
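The triage above can be sketched as a small wrapper script. The commands (crictl, buildah, podman rm --storage) are the real ones named in this thread; the script itself is only an illustration, not an official tool:

```shell
#!/bin/sh
# Usage: ./triage-name.sh <container-name>
# Checks CRI-O first, then Buildah; only if neither claims the name
# do we treat it as an orphan and hit it with podman's storage removal.
name="$1"

if crictl ps -a 2>/dev/null | grep -q "$name"; then
    echo "'$name' appears to belong to CRI-O; remove it with crictl rm <id>"
elif buildah containers --all 2>/dev/null | grep -q "$name"; then
    echo "'$name' appears to belong to Buildah; remove it with buildah rm"
else
    # Not visible to CRI-O or Buildah: probably an orphaned storage container.
    podman rm --storage "$name"
fi
```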

That container is probably a relic from a partially failed container delete, or was made by Buildah or CRI-O. You should be able to force its removal, even if we don't see it, with podman rm -f


Some more info:

My containers.json on /var/lib/containers/storage/overlay-containers has a reference to this container:

  {
    "id": "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1",
    "names": [
      "nextcloud"
    ],
    "image": "dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4",
    "layer": "5078a913609383e102745769c42090cb62c878780002adf133dfadf3ca9b0e55",
    "metadata": "{\"image-name\":\"docker.io/library/nextcloud:14.0.3\",\"image-id\":\"dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4\",\"name\":\"nextcloud\",\"created-at\":1544648833,\"mountlabel\":\"system_u:object_r:container_file_t:s0:c151,c959\"}",
    "created": "2018-12-12T21:07:13.804209323Z"
  }

But podman doesn't know about it, and podman prune does not help either.
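To confirm which storage entry is still holding the name, the containers.json shown above can be queried directly. A sketch assuming jq is installed (overlay-containers is the container store for the overlay driver used here):

```shell
# Print the storage record whose "names" array contains "nextcloud".
# Its "id" field is what "podman rm --storage <id>" (or buildah rm) needs.
jq '.[] | select(.names | index("nextcloud"))' \
    /var/lib/containers/storage/overlay-containers/containers.json
```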