k3d: [BUG] cannot mount nfs shares from inside pods

What did you do

I initially tested the OpenEBS NFS provisioner on top of k3d's default local-path storage class. PVCs were created, but pods could not mount them, failing with "not permitted" or "not supported" errors. I could mount the shares from inside the OpenEBS NFS-sharing pods themselves, even between them (a pod could mount its own shares AND the shares of the other pod, backed by a different PVC), but NO other pods could mount them: they all stay stuck in ContainerCreating, with the errors shown below in their events.

So I tried a different solution: an NFS server Docker container running on my host machine, consumed via the nfs-subdir-external-provisioner, with identical results. It seems I cannot get an RWX volume on k3d right now, whatever solution I try. Tested both on my dev machine (MacBook Pro, latest Big Sur) AND on an Ubuntu 22.04 VM (with, of course, the nfs-common package installed).
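
For context, this is the kind of volume being attempted (a minimal sketch; the PVC name is illustrative, and the storageClassName assumes the default class created by the nfs-subdir-external-provisioner chart):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany   # RWX: mountable read-write by many pods at once
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF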

  • How was the cluster created?
docker network create --subnet="172.22.0.0/16" --gateway="172.22.0.1" "internalNetwork"

k3d cluster create test --network internalNetwork

mkdir -p ~/nfsshare

docker run -d --net=internalNetwork -p 2049:2049 --name nfs --privileged -v ~/nfsshare:/nfsshare -e SHARED_DIRECTORY=/nfsshare itsthenetwork/nfs-server-alpine:latest

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner     --set nfs.server=host.k3d.internal --set nfs.path=/

The pod stays in ContainerCreating state, and in its events I get:

Message:             MountVolume.SetUp failed for volume "nfs-subdir-external-provisioner-root" : mount failed: exit status 255
Mounting command: mount
Mounting arguments: -t nfs host.k3d.internal:/ /var/lib/kubelet/pods/ddeba612-5e1c-4ae6-8068-f641f42706ca/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root
Output: mount: mounting host.k3d.internal:/ on /var/lib/kubelet/pods/ddeba612-5e1c-4ae6-8068-f641f42706ca/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root failed: Not supported
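
Note that the failing mount above is executed by the kubelet, which runs inside the k3s node container, so the same failure can be reproduced there directly (node name follows k3d's k3d-<cluster>-server-0 convention):

docker exec -it k3d-test-server-0 sh -c 'mkdir -p /tmp/t && mount -t nfs host.k3d.internal:/ /tmp/t'
# fails the same way ("Not supported"): the node image has no NFS mount support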

So, let's try from an Ubuntu pod:

kubectl run ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it ubuntu -- bash  # shell inside, then:
apt update
apt install nfs-common -y
mkdir t
mount -t nfs host.k3d.internal:/ t # host is correctly resolved using dig...
mount.nfs: Operation not permitted
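
(One caveat with this in-pod test: the mount syscall needs CAP_SYS_ADMIN, which a default pod doesn't have, so "Operation not permitted" is expected here regardless of NFS support. To rule that out, the test pod can be recreated privileged, e.g.:)

kubectl run ubuntu --image=ubuntu --privileged -- sleep infinity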

Test from the host to see if the share works. It does:

mkdir t0
sudo mount -t nfs localhost:/ t0
touch t0/aaa
ls t0 # aaa exists
ls ~/nfsshare # and is visible in the share
touch ~/nfsshare/bbb
ls ~/nfsshare # bbb exists in share
ls t0 # and is visible in local folder

What did you expect to happen

The share should be mountable, so that RWX (ReadWriteMany) volumes can be created.

Which OS & Architecture

  • output of k3d runtime-info
arch: x86_64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /var/run/docker.sock
filesystem: extfs
name: docker
os: Ubuntu 22.04 LTS
ostype: linux
version: 20.10.12

Which version of k3d

  • output of k3d version
k3d version v5.4.4
k3s version v1.23.8-k3s1 (default)

Which version of docker

  • output of docker version and docker info
Client:
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.17.3
 Git commit:        20.10.12-0ubuntu4
 Built:             Mon Mar  7 17:10:06 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.3
  Git commit:       20.10.12-0ubuntu4
  Built:            Mon Mar  7 15:57:50 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.3
  GitCommit:        v1.1.3-0-g6724737f
 docker-init:
  Version:          0.19.0
  GitCommit:
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 5
  Running: 5
  Paused: 0
  Stopped: 0
 Images: 24
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc version: v1.1.3-0-g6724737f
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.15.0-41-generic
 Operating System: Ubuntu 22.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.827GiB
 Name: lima-ubuntu
 ID: EMKE:7WSJ:7M3C:3JFT:HS7J:MNZJ:SQKG:Y3RY:2S7Q:PMMO:DHRL:6K3P
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Most upvoted comments

I've run into the same issue before. I tried cleaning up all resources (images, containers, volumes, networks) and then creating the cluster again; after repeating this again and again it finally succeeded, but I don't know what happened. That's why I'm looking for some way to patch the original image.

@fragolinux

It's not a bug in k3d but a limitation of the k3s Docker image.

The k3s Docker image is built from scratch with no NFS support (see its Dockerfile).

As a result, neither the k3s node container nor the pods inside the node can mount NFS shares.

There is a workaround: rebase the k3s image on Alpine and install nfs-utils.

FROM alpine:latest

RUN set -ex; \
    apk add --no-cache iptables ip6tables nfs-utils; \
    echo 'hosts: files dns' > /etc/nsswitch.conf

COPY --from=rancher/k3s:v1.24.3-k3s1 /bin /opt/k3s/bin

VOLUME /var/lib/kubelet
VOLUME /var/lib/rancher/k3s
VOLUME /var/lib/cni
VOLUME /var/log

ENV PATH="$PATH:/opt/k3s/bin:/opt/k3s/bin/aux"
ENV CRI_CONFIG_FILE="/var/lib/rancher/k3s/agent/etc/crictl.yaml"

ENTRYPOINT ["/opt/k3s/bin/k3s"]
CMD ["agent"]

Build it yourself or have a look at mine: maoxuner/k3s (not maintained frequently).
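
For example, to build and use it (image tag and cluster name are illustrative):

docker build -t my-k3s-nfs:v1.24.3-k3s1 .
k3d cluster create test --image my-k3s-nfs:v1.24.3-k3s1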

I don't know how to patch nfs-utils into the official k3s image. If anyone knows, please tell me.

I am not currently experiencing the issues @jlian is experiencing.

I have created a repository with the latest images from the 1.25, 1.26, and 1.27 channels, as well as the “stable” channel at https://github.com/ryan-mcd/k3s-containers

Feel free to utilize these images.

@iwilltry42 Ok thanks, got it to work. Now it just needs k3d cluster create -i ghcr.io/jlian/k3d-nfs:v1.25.3-k3s1

Took me a while to find the entrypoint logs in /var/log as opposed to docker container logs. Also noticed that this custom entrypoint method doesn’t work on k3d v4 and older.
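
For reference, this is roughly how to get at those logs inside the node container (cluster name and exact log filename are assumptions):

docker exec -it k3d-test-server-0 sh -c 'ls /var/log && tail -n 50 /var/log/k3d-entrypoint*'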

@jlian, instead of disabling k3d's entrypoints (there are actually multiple), just add your script to the list by placing it at /bin/k3d-entrypoint-*.sh, replacing the * with the name of your script; a sketch follows below. This will make k3d execute it alongside the other entrypoint scripts. I hope we can expose this more easily via lifecycle hooks at some point.
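
A minimal sketch of such a drop-in script (the filename and the exact services to start are assumptions based on this thread; it would be baked into a custom node image):

#!/bin/sh
# /bin/k3d-entrypoint-nfs.sh -- hypothetical drop-in entrypoint script.
# Assumes an Alpine-based node image with openrc and nfs-utils installed.
set -e
mkdir -p /run/openrc
touch /run/openrc/softlevel        # make openrc believe a runlevel is active (per this thread)
rc-service rpcbind start || true   # assumption: rpcbind openrc service is present
rc-service nfsmount start || true  # assumption: client-side NFS services from nfs-utils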

Hey I got NFS to work in k3d for GitHub codespace based on the info from this thread.

https://github.com/jlian/k3d-nfs

It's mostly the same as @marcoaraujojunior's commit https://github.com/marcoaraujojunior/k3s-docker/commit/914c6f84e0b086ba86b15806062771d9fae5c274, with touch /run/openrc/softlevel added to the entrypoint script, plus figuring out that you need to set export K3D_FIX_CGROUPV2=false so that the entrypoint isn't overridden.

Try with

export K3D_FIX_CGROUPV2=false
k3d cluster create -i ghcr.io/jlian/k3d-nfs:v1.25.3-k3s1
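
A quick end-to-end check afterwards (pod name illustrative): a pod with an inline NFS volume exercises the same kubelet-side mount that previously failed with exit status 255.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: sh
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: share
      mountPath: /mnt/share
  volumes:
  - name: share
    nfs:
      server: host.k3d.internal
      path: /
EOF

kubectl get pod nfs-test -w   # should reach Running instead of ContainerCreating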