kubernetes: NFS example: unable to mount the service

Hi guys,

I just noticed that I’m not able to mount the NFS service as described in the NFS example. Mounting using the pod’s IP works fine:

kubectl describe po nfs-server-e6qzy | grep ^IP
IP:     10.233.127.3

mount.nfs 10.233.127.3:/ /mnt

cat /mnt/index.html
Hello from NFS!

But when I use the service, it doesn’t work:

kubectl get svc | grep nfs
nfs-server   10.233.45.48   <none>        2049/TCP   2m

mount.nfs 10.233.45.48:/ /mnt
mount.nfs: Connection timed out

There are no errors in the kube-proxy logs:

I0414 12:28:50.569854       1 proxier.go:415] Adding new service "default/nfs-server:" at 10.233.45.48:2049/TCP
I0414 12:28:50.569999       1 proxier.go:360] Proxying for service "default/nfs-server:" on TCP port 49291

kubectl get endpoints  | grep nfs
nfs-server   10.233.127.3:2049   8m

The forwarding rule seems to be configured:

DNAT       tcp  --  0.0.0.0/0            10.233.45.48         /* default/nfs-server: */ tcp dpt:2049 to:10.128.0.2:49291
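
A rule like this can be listed on a node by dumping the NAT table, e.g.:

iptables-save -t nat | grep nfs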

This has been reproduced with both flannel and calico, on GCE and on bare metal. I’m going to try with --proxy-mode=iptables, though I don’t know if that will change anything. Do you have any ideas?
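
To check which mode kube-proxy is currently running in (how it’s deployed varies by installer, so these are only sketches):

# if kube-proxy runs directly on the node:
ps aux | grep [k]ube-proxy

# if it runs as a pod (the label varies by installer):
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o yaml | grep proxy-mode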

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 23 (11 by maintainers)

Most upvoted comments

The NFS image is messed up; bring up your own NFS server and it works fine.

If you need to tweak the mount points beyond /exports, look at the entrypoint of any nfs-server Dockerfile out there.

Dockerfile:

FROM ubuntu:14.04

ENV DEBIAN_FRONTEND noninteractive

# install the NFS server and export /exports read-write to any client (demo settings)
RUN apt-get update -qq \
    && apt-get install -y nfs-kernel-server nfs-common \
    && mkdir /exports \
    && echo "/exports *(rw,fsid=0,insecure,no_root_squash)" >> /etc/exports \
    && echo "Serving /exports" \
    && /usr/sbin/exportfs -a

EXPOSE 111/udp 2049/tcp

COPY nfs-kernel-server /etc/default/
COPY run.sh /run.sh
ENTRYPOINT ["/run.sh"]

run.sh:

#!/bin/bash
echo "Starting NFS Server"

rpcbind
service nfs-kernel-server start

echo "Started"

echo "Done all tasks - Running continious loop to keep this container alive"
while true; do
  sleep 3600
done

nfs-kernel-server:

# Number of servers to start up
RPCNFSDCOUNT=8

# Runtime priority of server (see nice(1))
RPCNFSDPRIORITY=0

# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '--no-nfs-version 4' here
RPCMOUNTDOPTS="--port 20048 --no-nfs-version 4"

# Do you want to start the svcgssd daemon? It is only required for Kerberos
# exports. Valid alternatives are "yes" and "no"; the default is "no".
NEED_SVCGSSD=""

# Options for rpc.svcgssd.
RPCSVCGSSDOPTS=""

# Options for rpc.nfsd.
RPCNFSDOPTS=""
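
To try the image out (a sketch: the image name is mine, and the kernel NFS server needs the host’s nfsd module plus elevated privileges, hence --privileged):

docker build -t my-nfs-server .
docker run -d --name nfs --privileged my-nfs-server

In a Kubernetes pod spec the equivalent is setting securityContext.privileged: true on the container.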

UPDATE: 5/22/2017

In order to mount NFS successfully via a Service, you need to make sure all of its ports are fixed and not dynamically assigned.

Check which ports are published by RPC by connecting to the running NFS server pod (in the example below this is done after I’d already fixed the mountd port to a static one):

kubectl exec -it nfs-server-3989555812-rrbct -- bash

(inside the pod) rpcinfo -p

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  58681  nlockmgr
    100021    3   udp  58681  nlockmgr
    100021    4   udp  58681  nlockmgr
    100021    1   tcp  42438  nlockmgr
    100021    3   tcp  42438  nlockmgr
    100021    4   tcp  42438  nlockmgr
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
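
Note that nlockmgr above is still on dynamic ports (58681/42438). Reads and writes will work without it, but if you need NFSv3 locking to go through the Service you’d have to pin lockd as well. A sketch, assuming a privileged container whose lockd module honors these sysctls (the port number is an arbitrary pick of mine; run this before the server starts, e.g. at the top of run.sh, and add matching ports to the Service):

sysctl -w fs.nfs.nlm_tcpport=32768
sysctl -w fs.nfs.nlm_udpport=32768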

nfs_server_service.yaml:

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    role: nfs-server
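
Create the Service and verify it picked up the pod as an endpoint (this assumes the server pod actually carries the role: nfs-server label):

kubectl create -f nfs_server_service.yaml
kubectl get svc,endpoints nfs-server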

Check that you can mount the pod directly: mount.nfs -v POD_IP:/exports /location_to_mount

To unmount a disconnected/dead pod’s volume, use umount -l /mount_location (-l means lazy unmount).

Then check that mounting through the Service works: mount.nfs -v SERVICE_IP:/exports /location_to_mount

mount.nfs: timeout set for Sun May 21 14:19:04 2017
mount.nfs: trying text-based options 'vers=4,addr=100.71.151.43,clientaddr=172.20.178.222'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=100.71.151.43'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 100.71.151.43 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 100.71.151.43 prog 100005 vers 3 prot UDP port 20048
mount.nfs: portmap query retrying: RPC: Timed out
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 100.71.151.43 prog 100005 vers 3 prot TCP port 20048
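
A side note on the mount(2): No such file or directory line for vers=4 above: with fsid=0 in /etc/exports, /exports becomes the NFSv4 pseudo-root, so a v4 mount (if v4 is enabled on the server) uses / instead of /exports:

mount.nfs -v -o vers=4 SERVICE_IP:/ /location_to_mount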

You can’t access yourself through your Service VIP with the iptables kube-proxy (i.e. with a one-endpoint Service, kubectl exec into the endpoint and curl the Service IP won’t work) unless you either have hairpin mode on all your veths (check with for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done) or a promiscuous-mode cbr0 (check with netstat -i).
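
To enable hairpin mode by hand (a sketch; the bridge name cbr0 matches the check above, yours may differ):

for intf in /sys/devices/virtual/net/cbr0/brif/*; do echo 1 > "$intf/hairpin_mode"; done

Alternatively, the kubelet can manage this itself via its --hairpin-mode flag (hairpin-veth or promiscuous-bridge).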