podman: Unable to restart Toolbox containers stopped by podman (must reboot)

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am unable to re-enter toolbox containers that were stopped with podman stop <container>.

In order to re-enter the container with toolbox enter <container> (or to start it with podman start <container>), I need to reboot the system. After the reboot I can re-enter the container and its state is maintained.

Steps to reproduce the issue:

  1. toolbox create

  2. toolbox enter

  3. podman stop fedora-toolbox-31

  4. toolbox enter (errors out, see below)
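For reference, the same reproduction as a single shell session (a sketch: it assumes the toolbox shell from step 2 is exited first, or that podman stop is run from a second terminal; fedora-toolbox-31 is the container name toolbox create produces by default on Fedora 31):

toolbox create                   # creates the fedora-toolbox-31 container
toolbox enter                    # entering works the first time
exit                             # leave the toolbox shell
podman stop fedora-toolbox-31    # stop the container via podman directly
toolbox enter                    # fails with the OCI runtime error below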

Describe the results you received:

toolbox -v enter error:

Error: unable to start container "fedora-toolbox-31": container '7dbef4079c4e61754d26135c9fab554b9130bf4e1bc7a2d484aace38a7468eca' already exists: OCI runtime error
toolbox: failed to start container fedora-toolbox-31

journalctl log snippet:

Oct 04 18:35:26 rauros.figura.io conmon[12671]: conmon 7dbef4079c4e61754d26 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Oct 04 18:35:26 rauros.figura.io conmon[12672]: conmon 7dbef4079c4e61754d26 <ninfo>: attach sock path: /run/user/1000/libpod/tmp/socket/7dbef4079c4e61754d26135c9fab554b9130bf4e1bc7a2d484aace38a7468eca/attach
Oct 04 18:35:26 rauros.figura.io conmon[12672]: conmon 7dbef4079c4e61754d26 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/run/user/1000/libpod/tmp/socket/7dbef4079c4e61754d26135c9fab554b9130bf4e1bc7a2d484aace38a7468eca/attach}
Oct 04 18:35:26 rauros.figura.io conmon[12672]: conmon 7dbef4079c4e61754d26 <ninfo>: ctl fifo path: /var/home/returntrip/.local/share/containers/storage/overlay-containers/7dbef4079c4e61754d26135c9fab554b9130bf4e1bc7a2d484aace38a7468eca/userdata/ctl
Oct 04 18:35:26 rauros.figura.io conmon[12672]: conmon 7dbef4079c4e61754d26 <ninfo>: terminal_ctrl_fd: 12
Oct 04 18:35:26 rauros.figura.io conmon[12672]: conmon 7dbef4079c4e61754d26 <error>: Failed to create container: exit status 1
Oct 04 18:35:27 rauros.figura.io podman[12675]: 2019-10-04 18:35:27.029460577 +0200 CEST m=+0.050979420 container cleanup 7dbef4079c4e61754d26135c9fab554b9130bf4e1bc7a2d484aace38a7468eca (image=registry.fedoraproject.org/f31/fedora-toolbox:31, name=fedora-toolbox-31)

Describe the results you expected: I should be able to re-enter the container without rebooting.

Additional information you deem important (e.g. issue happens only occasionally): I first noticed this issue about 15 days ago while testing this: https://github.com/containers/libpod/issues?q=is%3Aissue+is%3Aclosed

I cleared ~/.local/share/containers before testing.

Output of software versions:

podman-1.6.1-2.fc31.x86_64
toolbox-0.0.15-1.fc31.noarch
conmon-2.0.1-1.fc31.x86_64
fuse-overlayfs-0.6.4-2.fc31.x86_64
crun-0.10.1-1.fc31.x86_64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.13
  podman version: 1.6.1
host:
  BuildahVersion: 1.11.2
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.1-1.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.1, commit: 5e0eadedda9508810235ab878174dca1183f4013'
  Distribution:
    distribution: fedora
    version: "31"
  MemFree: 9118236672
  MemTotal: 16778067968
  OCIRuntime:
    package: crun-0.10.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.10.1
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 7985950720
  SwapTotal: 7985950720
  arch: amd64
  cpus: 16
  eventlogger: journald
  hostname: rauros.figura.io
  kernel: 5.3.1-300.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 17m 1.5s
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/returntrip/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.6.4-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.6.4
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /var/home/returntrip/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 1
  RunRoot: /run/user/1000
  VolumePath: /var/home/returntrip/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.): Physical Silverblue 31


Most upvoted comments

Hi. I ran into a very similar problem today. I stopped the container with podman stop and tried to re-enter it with toolbox:

toolbox -v enter --container mongodb
toolbox: running as real user ID 1000
toolbox: resolved absolute path for /usr/bin/toolbox to /usr/bin/toolbox
toolbox: checking if /etc/subgid and /etc/subuid have entries for user lukasz
toolbox: TOOLBOX_PATH is /usr/bin/toolbox
toolbox: running on a cgroups v2 host
toolbox: current Podman version is 1.7.0
toolbox: migration not needed: Podman version 1.7.0 is unchanged
toolbox: Fedora generational core is f31
toolbox: base image is fedora-toolbox:31
toolbox: container is mongodb
toolbox: checking if container mongodb exists
toolbox: calling org.freedesktop.Flatpak.SessionHelper.RequestSession
toolbox: starting container mongodb
toolbox: /etc/profile.d/toolbox.sh already mounted in container mongodb
Error: unable to start container "mongodb": container '07c1fcae8ebdee7aa3815544aeac13e94abcae64794171e729ef27397e79e9dc' already exists: OCI runtime error
toolbox: failed to start container mongodb

Based on these logs I don’t believe this is the same issue. This is likely https://github.com/containers/libpod/issues/3906

You are likely running podman run --rm as part of a systemd service with KillMode set to something other than none. systemd is hitting Podman with a SIGKILL after the container exits as it attempts to remove the container.
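For illustration, a minimal sketch of the kind of unit the comment above describes; the service name, image, and container name are hypothetical, not taken from this report:

# mongodb-container.service (hypothetical example)
[Unit]
Description=MongoDB in a Podman container

[Service]
ExecStart=/usr/bin/podman run --rm --name mongodb mongo
ExecStop=/usr/bin/podman stop mongodb
# With systemd's default KillMode=control-group, stopping the service
# SIGKILLs every process in the unit's cgroup, including the podman and
# conmon processes still cleaning up the --rm container. The interrupted
# cleanup leaves stale OCI runtime state behind, producing the
# "already exists" error on the next start. KillMode=none avoids that.
KillMode=none

[Install]
WantedBy=multi-user.target

Newer Podman versions can emit a suitable unit with podman generate systemd, which sidesteps hand-writing KillMode settings.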