kubernetes: NFS example does not work on Google Container Engine.

The NFS example at /examples/nfs does not work on Google Container Engine.

The nfs-server pod runs, but the nfs-busybox pod errors out trying to mount the PersistentVolumeClaim:

Output: mount.nfs: Connection timed out

The nfs PersistentVolume uses the nfs-server service ClusterIP. Both the PersistentVolume and the PersistentVolumeClaim are bound.
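For reference, the PV/PVC pair described above can be sketched as follows. The names, capacity, access mode, server IP, and path are taken from the kubectl describe output below; everything else is an assumption based on the /examples/nfs manifests.

```shell
# Sketch of the PV/PVC in use, reconstructed from the describe output
# below; fields other than name, capacity, access mode, server IP, and
# path are assumptions from the /examples/nfs manifests.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.19.247.137   # ClusterIP of the nfs-server service
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
```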

I did notice the nfs-server logged a warning:

rpcinfo: can't contact rpcbind: : RPC: Unable to receive; errno = Connection refused

I have tried exposing additional ports (111 TCP/UDP and 2049 UDP), but that had no effect.
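For completeness, NFSv3 needs more than port 2049 reachable: rpcbind listens on 111, and mountd on a port it registers with rpcbind (often pinned to 20048). A service sketch exposing that full port set might look like the following; the port names and the mountd port 20048 are assumptions, and mountd must actually be pinned to that port in the server for this to matter.

```shell
# Hedged sketch: expose the usual NFSv3 port set on the nfs-server
# service. The mountd port (20048) is an assumption and only works if
# mountd is pinned to it inside the nfs-server container.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server
  ports:
    - name: nfs
      port: 2049
    - name: rpcbind-tcp
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: mountd
      port: 20048
EOF
```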

Please help.

#kubectl describe service nfs-server
Name:           nfs-server
Namespace:      default
Labels:         <none>
Selector:       role=nfs-server
Type:           ClusterIP
IP:         10.19.247.137
Port:           <unset> 2049/TCP
Endpoints:      10.16.3.4:2049
Session Affinity:   None
No events.
#kubectl describe pv nfs
Name:       nfs
Labels:     <none>
Status:     Bound
Claim:      default/nfs
Reclaim Policy: Retain
Access Modes:   RWX
Capacity:   1Mi
Message:    
Source:
    Type:   NFS (an NFS mount that lasts the lifetime of a pod)
    Server: 10.19.247.137
    Path:   /
    ReadOnly:   false
#kubectl describe pvc nfs
Name:       nfs
Namespace:  default
Status:     Bound
Volume:     nfs
Labels:     <none>
Capacity:   1Mi
Access Modes:   RWX
#kubectl describe pod nfs-server-e71xs
Name:       nfs-server-e71xs
Namespace:  default
Node:       gke-fieldphone-32335ca1-node-9o0q/10.128.0.4
Start Time: Fri, 22 Apr 2016 13:39:57 -0700
Labels:     role=nfs-server
Status:     Running
IP:     10.16.3.4
Controllers:    ReplicationController/nfs-server
Containers:
  nfs-server:
    Container ID:   docker://d0f11148b09986163c73baf525d57b4a59b3bce149f1776f117adcb444993a5c
    Image:      gcr.io/google_containers/volume-nfs
    Image ID:       docker://3f8217a3a8f1e891612aece9cbf8b8defeb1f1ffa39836ebb7de5e03139f56a7
    Port:       2049/TCP
    QoS Tier:
      cpu:  Burstable
      memory:   BestEffort
    Requests:
      cpu:      100m
    State:      Running
      Started:      Fri, 22 Apr 2016 13:39:59 -0700
    Ready:      True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     True 
Volumes:
  default-token-szz1v:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-szz1v
Events:
  FirstSeen LastSeen    Count   From                        SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----                        -------------           --------    ------      -------
  29m       29m     1   {default-scheduler }                                Normal      Scheduled   Successfully assigned nfs-server-e71xs to gke-fieldphone-32335ca1-node-9o0q
  29m       29m     1   {kubelet gke-fieldphone-32335ca1-node-9o0q} spec.containers{nfs-server} Normal      Pulling     pulling image "gcr.io/google_containers/volume-nfs"
  29m       29m     1   {kubelet gke-fieldphone-32335ca1-node-9o0q} spec.containers{nfs-server} Normal      Pulled      Successfully pulled image "gcr.io/google_containers/volume-nfs"
  29m       29m     1   {kubelet gke-fieldphone-32335ca1-node-9o0q} spec.containers{nfs-server} Normal      Created     Created container with docker id d0f11148b099
  29m       29m     1   {kubelet gke-fieldphone-32335ca1-node-9o0q} spec.containers{nfs-server} Normal      Started     Started container with docker id d0f11148b099
#kubectl describe pod nfs-busybox-fu4el
Name:       nfs-busybox-fu4el
Namespace:  default
Node:       gke-fieldphone-32335ca1-node-00f2/10.128.0.9
Start Time: Fri, 22 Apr 2016 13:49:25 -0700
Labels:     name=nfs-busybox
Status:     Pending
IP:     
Controllers:    ReplicationController/nfs-busybox
Containers:
  busybox:
    Container ID:   
    Image:      busybox
    Image ID:       
    Port:       
    Command:
      sh
      -c
      while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
    QoS Tier:
      cpu:  Burstable
      memory:   BestEffort
    Requests:
      cpu:      100m
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False 
Volumes:
  nfs:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs
    ReadOnly:   false
  default-token-szz1v:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-szz1v
Events:
  FirstSeen LastSeen    Count   From                        SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----                        -------------   --------    ------          -------
  19m       14m     21  {default-scheduler }                        Warning     FailedScheduling    PersistentVolumeClaim is not bound: "nfs"
  13m       13m     4   {default-scheduler }                        Warning     FailedScheduling    PersistentVolumeClaim 'default/nfs' is not in cache
  13m       13m     1   {default-scheduler }                        Normal      Scheduled       Successfully assigned nfs-busybox-fu4el to gke-fieldphone-32335ca1-node-00f2
  11m       1m      5   {kubelet gke-fieldphone-32335ca1-node-00f2}         Warning     FailedMount     Unable to mount volumes for pod "nfs-busybox-fu4el_default(ce04b0d3-08ca-11e6-a6a8-42010af000bb)": Mount failed: exit status 32
Mounting arguments: 10.19.247.137:/ /var/lib/kubelet/pods/ce04b0d3-08ca-11e6-a6a8-42010af000bb/volumes/kubernetes.io~nfs/nfs nfs []
Output: mount.nfs: Connection timed out


  11m   1m  5   {kubelet gke-fieldphone-32335ca1-node-00f2}     Warning FailedSync  Error syncing pod, skipping: Mount failed: exit status 32
Mounting arguments: 10.19.247.137:/ /var/lib/kubelet/pods/ce04b0d3-08ca-11e6-a6a8-42010af000bb/volumes/kubernetes.io~nfs/nfs nfs []
Output: mount.nfs: Connection timed out

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 7
  • Comments: 63 (34 by maintainers)

Most upvoted comments

Hi NFS users, support for NFS v4 on the GCI image is available in releases 1.4.7 and 1.5. Please let us know if there is any issue. NFS v3 is not yet supported. Thanks!

All users: NFSv3 is now also supported on GKE. Please give it a try and let us know if there is any problem. Thanks!
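A quick way to check which NFS protocol versions the server actually accepts is to mount it manually from a cluster node, forcing the version with a mount option. This is a sketch using the service IP from the output above; run it on a node, not in a pod.

```shell
# Hedged sketch: probe the NFS server's supported protocol versions
# from a cluster node. 10.19.247.137 is the service ClusterIP above.
sudo mkdir -p /mnt/nfstest

# Force NFSv4 (no rpcbind/mountd needed):
sudo mount -t nfs -o nfsvers=4 10.19.247.137:/ /mnt/nfstest && sudo umount /mnt/nfstest

# Force NFSv3 (requires rpcbind on 111 and mountd to be reachable):
sudo mount -t nfs -o nfsvers=3 10.19.247.137:/ /mnt/nfstest && sudo umount /mnt/nfstest
```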

I think somebody pushed the image built from PR https://github.com/kubernetes/kubernetes/pull/22665 to gcr.io/google_containers/volume-nfs.

That's why it no longer works, @rootfs.

@arenoir it looks like you got things working, but I believe we should still correct our docs, at least.

@arenoir I have solved this problem; see if my situation also applies to you:

I found that there are some problems with the image gcr.io/google_containers/volume-nfs. Check whether there is a /mnt/data directory in the nfs-server container.

The Dockerfile says it copies index.html to /mnt/data/index.html, but after running kubectl exec -it nfs-server bash and ls /mnt, I found there was no data directory inside. So I rebuilt the image and used the new one, and that resolved the issue.
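The check above can be scripted against the pod from this report (pod name taken from the describe output; the mkdir workaround is an assumption and only lasts until the container restarts):

```shell
# Verify whether the image contains the export directory the
# Dockerfile claims to create (pod name from the output above):
kubectl exec nfs-server-e71xs -- ls -la /mnt

# Hypothetical stopgap if /mnt/data is missing: create it in the
# running container (lost on restart; rebuilding the image is the fix).
kubectl exec nfs-server-e71xs -- mkdir -p /mnt/data
```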