kubernetes: NFS example: unable to mount the service
Hi guys,
I just noticed that I'm not able to mount the NFS service as described in the NFS example. Mounting using the pod's IP works fine:
kubectl describe po nfs-server-e6qzy | grep ^IP
IP: 10.233.127.3
mount.nfs 10.233.127.3:/ /mnt
cat /mnt/index.html
Hello from NFS!
But when I use the service, it doesn't work:
kubectl get svc | grep nfs
nfs-server 10.233.45.48 <none> 2049/TCP 2m
mount.nfs 10.233.45.48:/ /mnt
mount.nfs: Connection timed out
There are no error logs in the kube-proxy logs:
I0414 12:28:50.569854 1 proxier.go:415] Adding new service "default/nfs-server:" at 10.233.45.48:2049/TCP
I0414 12:28:50.569999 1 proxier.go:360] Proxying for service "default/nfs-server:" on TCP port 49291
kubectl get endpoints | grep nfs
nfs-server 10.233.127.3:2049 8m
The forwarding rule seems to be configured:
DNAT tcp -- 0.0.0.0/0 10.233.45.48 /* default/nfs-server: */ tcp dpt:2049 to:10.128.0.2:49291
This has been reproduced with flannel and calico, on GCE and bare metal. I'm going to try with proxy-mode=iptables, though I don't know if that will change anything. Do you have any idea?
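For reference, the proxy mode is just a kube-proxy flag; how it gets set depends on how kube-proxy is deployed in your cluster (the master URL below is a placeholder):

kube-proxy --master=https://<apiserver> --proxy-mode=iptables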
About this issue
- State: closed
- Created 8 years ago
- Comments: 23 (11 by maintainers)
The nfs image is messed up - bring up your own nfs-server and it works fine.
If you need to tweak the mount points other than the exports, look at the entrypoint of any nfs-server Dockerfile out there.
Dockerfile:
run.sh
nfs-kernel-server:
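As a rough sketch, assuming a Debian base, the stock nfs-kernel-server/rpcbind packages, a single /exports share, and illustrative fixed ports (20048 for mountd, 32765/32766 for statd), such an image and entrypoint could look like this; the pod has to run privileged because nfs-kernel-server uses the host's nfsd kernel module:

Dockerfile (illustrative):
FROM debian:stretch
RUN apt-get update && apt-get install -y nfs-kernel-server rpcbind \
 && rm -rf /var/lib/apt/lists/*
# create a simple export so the mount can be verified with a known file
RUN mkdir -p /exports && echo "Hello from NFS!" > /exports/index.html \
 && echo "/exports *(rw,fsid=0,insecure,no_root_squash,no_subtree_check)" > /etc/exports
COPY run.sh /run.sh
RUN chmod +x /run.sh
EXPOSE 111 2049 20048 32765 32766
ENTRYPOINT ["/run.sh"]

run.sh (illustrative):
#!/bin/bash
set -e
# make sure the nfsd filesystem is mounted so exportfs can talk to the kernel
mount -t nfsd nfsd /proc/fs/nfsd 2>/dev/null || true
# rpcbind hands out random ports by default, so pin statd and mountd explicitly
rpcbind
rpc.statd --port 32765 --outgoing-port 32766
rpc.mountd --port 20048
exportfs -ra
rpc.nfsd 8          # start 8 kernel nfsd threads (nfsd itself always listens on 2049)
# keep the container in the foreground
sleep infinity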
UPDATE: 5/22/2017
In order to have NFS successfully mount via a service, you need to make sure all of its ports are fixed and not dynamically assigned.
Check which ports are published by RPC by connecting to the running NFS server pod (in the example below this is done after I've fixed the mountd port to a static one):
kubectl exec -it nfs-server-3989555812-rrbct -- bash
(inside the pod) rpcinfo -p
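With mountd pinned, the output might look roughly like this (the 20048/32765 ports are illustrative; only 111 and 2049 are standard):

   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   tcp  32765  status
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs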
nfs_server_service.yaml
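A minimal sketch of such a Service, assuming the pods carry a role: nfs-server label and the illustrative fixed ports above; every port the daemons listen on has to be listed, otherwise only 2049 gets proxied and the mount negotiation times out:

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: statd
      port: 32765
    # protocol defaults to TCP; add matching UDP entries if your clients mount over UDP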
Check that you can mount the pod directly:
mount.nfs -v POD_IP:/exports /location_to_mount
To unmount a disconnected/dead pod volume, use:
umount -l /mount_location
(-l is a lazy unmount). Then check that the service mounts:
mount.nfs -v SERVICE_IP:/exports /location_to_mount
You can't access yourself through your Service VIP with the iptables kube-proxy (i.e. with a 1-endpoint Service, kubectl exec into the endpoint and curl of the Service IP won't work) unless you either enable hairpin mode on all your veths (for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done) or put cbr0 into promiscuous mode (netstat -i).
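To check and, if needed, turn on hairpin mode on every bridge port, assuming the bridge is named cbr0 as above and you are root on the node, something like this works; kubelet's --hairpin-mode flag can also manage this for you:

# show the current hairpin setting for each veth attached to cbr0
for intf in /sys/devices/virtual/net/cbr0/brif/*; do
  echo "$intf: $(cat "$intf/hairpin_mode")"
done
# enable it (1 = on) so a pod can reach itself through the Service VIP
for intf in /sys/devices/virtual/net/cbr0/brif/*; do
  echo 1 > "$intf/hairpin_mode"
done
# alternatively, put the bridge itself into promiscuous mode
ip link set cbr0 promisc on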