podman: podman run: gives error while loading shared libraries: libc.so.6: cannot change memory protections

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

kind bug

Description: podman run gives an error while trying to run a container:

# podman run -i -t registry.fedoraproject.org/fedora bash
bash: error while loading shared libraries: libtinfo.so.6: cannot change memory protections

Describe the results you expected: podman should run the container and give a bash prompt inside the container.

Additional information you deem important (e.g. issue happens only occasionally):

Tried a few things, but the issue didn't get fixed:

  • Reinstalled the container-selinux package and ran restorecon -R -v /var/lib/containers
  • Reinstalled podman and ran restorecon -R -v /var/lib/containers
  • Removed everything from /var/lib/containers and /home/root/containers/

Note: podman run works with SELinux set to Permissive.
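
If you want to confirm that SELinux is what is blocking the run before relabeling anything, a quick check is to reproduce the failure and then look for the matching AVC denials in the audit log. This is a minimal sketch; the exact denial messages will vary per system:

# podman run -i -t registry.fedoraproject.org/fedora bash
# ausearch -m AVC,USER_AVC -ts recent

A denial that mentions the container storage path (often an execmod denial on a shared library, which is what produces "cannot change memory protections") points at mislabeled files rather than a podman bug.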

Output of podman version: podman-1.2.0-2.git3bd528e.fc29.aarch64

# podman version
Version:            1.2.0
RemoteAPI Version:  1
Go Version:         go1.11.5
OS/Arch:            linux/arm64

Output of podman info --debug:

# podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.11.5
  podman version: 1.2.0
host:
  BuildahVersion: 1.7.2
  Conmon:
    package: podman-1.2.0-2.git3bd528e.fc29.aarch64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: a5b8d77e006ee972d9bbfd37699da552c934e33a'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 15248670720
  MemTotal: 16781996032
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.aarch64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: b8b7b8ec668cd816610ec7be29cf2cef2b62c8ae
      spec: 1.0.1-dev
  SwapFree: 8480878592
  SwapTotal: 8480878592
  arch: arm64
  cpus: 8
  hostname: apm-mustang-ev3-04.lab.eng.brq.redhat.com
  kernel: 5.0.17-200.fc29.aarch64
  os: linux
  rootless: false
  uptime: 59m 36.72s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mountopt=nodev
  GraphRoot: /home/root/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 1
  RunRoot: /home/root/containers/storage
  VolumePath: /home/root/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

  • F29, Physical machine (X-Gene Mustang Board), aarch64

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 46 (22 by maintainers)

Most upvoted comments

OK, I missed this up front. If you are going to move container storage to another location, then you will need to fix the labels.

/home/root/containers/storage

# semanage fcontext -a -e /var/lib/containers /home/root/containers
# restorecon -R -v /home/root/containers
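
To double-check that the equivalence rule and relabel took effect, you can compare what the policy expects with what is actually on disk. The exact context shown will depend on your container-selinux version; this is just a verification sketch:

# semanage fcontext -l -C | grep /home/root/containers
# matchpathcon /home/root/containers/storage
# ls -dZ /home/root/containers/storage

The context printed by matchpathcon and the one shown by ls -dZ should agree once restorecon has run.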

Also, you should not move the RunRoot; just leave it under /run.

RunRoot: /home/root/containers/storage

This should be available to root users. You want the RunRoot stored on a tmpfs so that a reboot cleans it out.

RunRoot: /var/run/containers/storage
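
For reference, keeping the graph root on /home while pointing the RunRoot back at the tmpfs would look roughly like this in /etc/containers/storage.conf (an illustrative sketch, not the exact file from this machine):

[storage]
driver = "overlay"
runroot = "/var/run/containers/storage"
graphroot = "/home/root/containers/storage"

After changing graphroot you still need the semanage equivalence rule and restorecon step above so the new location carries the same labels as /var/lib/containers.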

Could you run restorecon -R -v $HOME/.local/share/containers?

This might be a problem on Silverblue or any rpm-ostree-based OS, since RPM post-install scripts do not run there.
Basically, container-selinux had to fix labels in users' home directories because of a change in the Linux kernel.

This was caused by a kernel update allowing for a new feature. We saw this coming and fixed it in F34 and Rawhide before it hit, or as soon as it hit. We had a fix for this in F33, but the package was not building, and no one noticed until people started complaining.

This usually means container-selinux is not properly installed:

$ sudo yum reinstall container-selinux
$ restorecon -R -v $HOME

That should fix the problem.
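
Once the reinstall and relabel finish, re-running the original reproducer with SELinux still enforcing is a quick way to confirm the fix (getenforce should report Enforcing, and bash should start normally):

$ getenforce
$ podman run -i -t registry.fedoraproject.org/fedora bash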