podman: Non-root container user can't access mounted host directory

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

After upgrading from podman v1.6.4 to a current dev version from git master (2.0.0-dev), I am no longer able to access files and/or directories on the host (i.e. on a mounted volume) the way I used to when I was still using v1.6.4. I would guess that this is most likely not a bug but some sort of configuration problem I need help with.

Steps to reproduce the issue:

I use podman in rootless mode, so I have created a dedicated technical user called fwd to operate podman.
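
For context, a rough sketch of how such a dedicated user is typically prepared for rootless podman (the commands below are illustrative, not the exact ones used on this host):

    # create the technical user (run as root on the host)
    useradd fwd
    # rootless podman needs subordinate UID/GID ranges for this user;
    # useradd normally creates them, otherwise they have to be added to
    # /etc/subuid and /etc/subgid by hand
    grep '^fwd:' /etc/subuid /etc/subgid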

  1. As user fwd start a container with a mounted volume:

    podman run --user $(id -u):$(id -g) -dit --volume /tmp/testdir:/tmp/testdir --name centos registry.centos.org/centos:8
    

    where id -u is 1020 and id -g is 1020.

    I have to specify --user $(id -u):$(id -g) since I don't want the 'active' user within the container to be root. The reason is that the container (and thus the user fwd) is going to invoke several scripts on the host (i.e. within the mounted volumes) that MUST
    NOT be run as root.

  2. Access the container

    podman exec -it <container_id> /bin/bash

    As expected, the active user within the container is fwd, i.e. 1020:

    bash-4.4$ whoami 
    1020 
    bash-4.4$ id -u 
    1020 
    bash-4.4$ id -g 
    1020 
    
  3. Now I want to create a file within the mounted volume (i.e. on the host), which fails due to missing privileges:

    touch /tmp/testdir/test.txt 
    touch: cannot touch '/tmp/testdir/test.txt': Permission denied 
    

    On the host testdir belongs to user fwd (run ls -la on the host):

    $ ls -la /tmp/testdir/
    total 8 
    drwxrwxr-x   2 fwd  fwd  4096 Apr 16 09:01 
    

    However, within the container testdir belongs to root (run ls -la within the container):

    bash-4.4$ ls -la /tmp/testdir/
    total 8 
    drwxrwxr-x 2 root root 4096 Apr 16 08:01 .
    
  4. Get it working by assigning the directory to huser

    What actually works is to assign the directory to the huser:

    podman top <container_id> user huser
    USER   HUSER
    1020   297627
    
    chown -R 297627:297627 /tmp/testdir
    

    Now, within the container testdir belongs to 1020 (run ls -la within the container):

    bash-4.4$ ls -la /tmp/testdir
    total 8
    drwxrwxr-x 2 1020 1020 4096 Apr 16 08:01 .
    

    The file can now be created:

    touch /tmp/testdir/test.txt
    

Describe the results you received:

By assigning the huser to the mounted host volume, I am able to gain write access from within the container.
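
For reference, a rough sketch of an equivalent way to do that chown without copying the raw HUSER value from podman top: podman unshare runs a command inside the same rootless user namespace the containers use, so container UIDs can be given directly and are translated to the corresponding host UIDs (e.g. 297627) automatically.

    # run as the fwd user on the host; chown by *container* UID/GID
    podman unshare chown -R 1020:1020 /tmp/testdir

    # the container-UID-to-host-UID mapping can be inspected with
    podman unshare cat /proc/self/uid_map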

Describe the results you expected:

I am actually not quite sure what to expect. What is the recommended approach/best practice for such a scenario? Is assigning the huser to the needed host directories really what needs to be done, or is there another recommended way? Should --uidmap be utilized in any way?

When I was still using v1.6.4, the host directories belonged to fwd and could always be accessed from within the container.

So what I am looking for is some advice or best practice for dealing with a scenario where the active container user can't be root and has to be able to access host volumes.

Additional information you deem important (e.g. issue happens only occasionally):

  • podman is used/running in rootless mode
  • I run podman based on a current build/pull from git master (using make package-install)

Output of podman version:

Version: 2.0.0-dev
RemoteAPI Version: 1
Go Version: go1.12.12
Git Commit: dd234f71d6c1b6f1dce0d0c7428b2d2a7783ee41-dirty
Built: Thu Apr 16 06:46:52 2020
OS/Arch: linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: dd234f71d6c1b6f1dce0d0c7428b2d2a7783ee41-dirty
  goVersion: go1.12.12
  podmanVersion: 2.0.0-dev
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: podman-2.0.0-1587016003.gitd6b3bc18.el8.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.7, commit: dd234f71d6c1b6f1dce0d0c7428b2d2a7783ee41-dirty'
  cpus: 2
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: daisyt
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-147.5.1.el8_1.x86_64
  memFree: 31875878912
  memTotal: 105331625984
  ociRuntime:
    name: runc
    package: runc-1.0.0-15.2.el8.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: db2349efc4dc0001462089382d175ed38fdba742
      spec: 1.0.1-dev
  os: linux
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 1048571904
  swapTotal: 1048571904
  uptime: 3h 50m 53.53s (Approximately 0.12 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 2
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.0.0-1587016003.gitd6b3bc18.el8.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

We run podman on a bare metal CentOS 8 server.

cat /etc/os-release:

NAME="CentOS Linux"
VERSION="8 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="8"

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17 (11 by maintainers)

Most upvoted comments

I think you may be misunderstanding how rootless Podman works. Rootless Podman is not, and will never be, root; it's not a setuid binary, and gains no privileges when it runs. Instead, we use a user namespace to map your own user, plus a block of UIDs and GIDs we are given access to on the host (via the newuidmap and newgidmap executables), into the containers we create. Root in the container is actually your user on the host, UID/GID 1 is the first UID/GID specified in your user's mapping in /etc/subuid and /etc/subgid, and so on. If you mount a directory from the host into a rootless container and create a file in that directory as root in the container, you'll see it's actually owned by your user.
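
A hedged way to check this on your own host (the exact ranges shown by uid_map depend on the entries in /etc/subuid and /etc/subgid, and the file name below is just an example):

    # as the rootless user (fwd) on the host: show the container-to-host UID mapping
    podman unshare cat /proc/self/uid_map

    # create a file as root inside a rootless container ...
    podman run --rm --volume /tmp/testdir:/tmp/testdir registry.centos.org/centos:8 \
        touch /tmp/testdir/created-as-container-root
    # ... and note that on the host it is owned by the rootless user
    ls -l /tmp/testdir/created-as-container-root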

We do recognize that this doesn’t really match how many people intend to use rootless Podman - they want their UID inside and outside the container to match. Thus, we provide the --userns=keep-id flag, which ensures that your user is mapped to its own UID and GID inside the container. It sounds like this will resolve your problem. However, given what I’ve said, I’m surprised that this worked at all in previous Podman versions; it doesn’t sound like you running as the user you thought you were.