kubernetes: glusterfs volumes not working on GKE

I’m having the issue described here. I’m on GKE trying to connect to a gluster cluster on GCE within the same project. I have a working gluster cluster, and I’m able to mount the gluster volume manually from within a container, but I can’t seem to mount it as a kubernetes volume. My GKE container cluster was on node version 1.4.7. I tried upgrading to 1.5.1 to see if that would help (it didn’t seem to).

My container never comes up. kubectl describe pod my-nginx produces the following error.

Warning  FailedMount  MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/d8c69908-de8b-11e6-a90f-42010a8a020e-volume-name" (spec.Name: "volume-name") pod "d8c69908-de8b-11e6-a90f-42010a8a020e" (UID: "d8c69908-de8b-11e6-a90f-42010a8a020e") with: glusterfs: mount failed: mount failed: exit status 1

Mounting command: /home/kubernetes/bin/mounter
Mounting arguments: 10.138.0.7:/volume-name /var/lib/kubelet/pods/d8c69908-de8b-11e6-a90f-42010a8a020e/volumes/kubernetes.io~glusterfs/volume-name glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/volume-name/wordpress-2973412056-g7npl-glusterfs.log]

Output: Running mount using a rkt fly container

run: group "rkt" not found, will use default gid when rendering images
WARNING: getfattr not found, certain checks will be skipped..
Mount failed. Please check the log file for more details.

 the following error information was pulled from the glusterfs log to help diagnose this issue:

 [2017-01-19 21:11:46.454247] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 0-volume-name-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
 [2017-01-19 21:11:46.453221] E [MSGID: 101075] [common-utils.c:306:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Temporary failure in name resolution)
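
The getaddrinfo failure suggests the node's mount helper can't resolve the gluster server names, rather than a problem with gluster itself. A quick diagnostic sketch, assuming you can SSH to the GKE node (the hostname and IP below are from my setup):

# from the GKE node: can it resolve the gluster server's hostname?
getent hosts gfs-cluster1-server-1

# can it reach the gluster management port? (if nc is available on the node)
nc -zv 10.138.0.7 24007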

Deployment with a mount (fails with error above)

My failing config looks like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: mine
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mine
        tier: nginx
    spec:
      containers:
      - image: nginx
        name: my-nginx
        ports:
        - containerPort: 80
          name: my-nginx
        volumeMounts:
        - name: volume-name
          mountPath: /usr/share/nginx/html
      volumes:
      - name: volume-name
        glusterfs:
          endpoints: glusterfs-cluster
          # doesn't seem to matter if it's /volume-name or volume-name
          path: /volume-name
          readOnly: true
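
For reference, this is how I apply the manifest and dig into the failure (the filename is just what I saved the deployment as; the log path comes straight from the mount arguments in the error above):

# apply the deployment and inspect the failing pod's events
kubectl apply -f my-nginx-deployment.yaml
kubectl describe pod -l tier=nginx

# from the node, read the glusterfs client log named in the mount error
cat /var/lib/kubelet/plugins/kubernetes.io/glusterfs/volume-name/*-glusterfs.log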

Test deployment (works)

I wanted to test that my gluster cluster was working as expected. I decided to create a test image that closely matched the GCI mounter image.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-test
  labels:
    app: mine
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mine
        tier: test
    spec:
      containers:
      - image: ubuntu:xenial
        name: test
        command:
        - "bin/sleep"
        - "infinity"
        securityContext:
         capabilities: {}
         privileged: true

I was able to connect to that running container and test some things out.

# open bash on the test container
kubectl exec -ti my-test-2787244864-dv57b -- bash -il

# install glusterfs-client just like the GCI mounter does
apt-get update && apt-get install -y netbase nfs-common=1:1.2.8-9ubuntu12 glusterfs-client=3.7.6-1ubuntu1

# attempt to mount the volume
mount -t glusterfs 10.138.0.7:/volume-name /mnt

# write to it
touch /mnt/testfile

All of my tests worked! When I log into my gluster cluster and mount the volume I can see the test file.
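
For the record, the server-side check is just a local mount (/mnt is an arbitrary mount point):

# on one of the gluster servers: mount the volume locally and list it
mount -t glusterfs localhost:/volume-name /mnt
ls /mnt   # testfile shows up here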

Service and Endpoints

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  labels:
    app: mine
subsets:
  - addresses:
    - hostname: gfs-cluster1-server-1
      ip: 10.138.0.7
    ports:
    - port: 24007
      name: gluster
      protocol: TCP
  - addresses:
    - hostname: gfs-cluster1-server-2
      ip: 10.138.0.8
    ports:
    - port: 24007
      name: gluster
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  labels:
    app: mine
spec:
  ports:
    - port: 24007
      name: gluster
      protocol: TCP
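
Note that the service deliberately has no selector, so it binds to the manually defined Endpoints object with the same name. A quick check that the wiring is in place (expected output sketched below; yours may differ slightly):

kubectl get endpoints glusterfs-cluster
# NAME                ENDPOINTS
# glusterfs-cluster   10.138.0.7:24007,10.138.0.8:24007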

Gluster Cluster

I created my gluster cluster in GCE. I changed a few things from this example, most notably I’m not messing with static IPs as that example did. Essentially I’m standing up Ubuntu Xenial, installing glusterfs-server, and starting a volume named volume-name. Again, I’m fairly certain that my gluster cluster is in fine working order.
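
Roughly, the server setup looks like this (a condensed sketch; the brick path /data/brick is arbitrary, and replica 2 matches the two servers listed in the endpoints above):

# on both servers
apt-get update && apt-get install -y glusterfs-server

# on gfs-cluster1-server-1 only: peer the second node, then create and start the volume
gluster peer probe gfs-cluster1-server-2
gluster volume create volume-name replica 2 \
  gfs-cluster1-server-1:/data/brick \
  gfs-cluster1-server-2:/data/brick
gluster volume start volume-name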

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 18 (6 by maintainers)

Most upvoted comments

Right now, if you set up the gluster server on node instances (hosts), you will hit a DNS problem (the getaddrinfo failure above). There is an open PR #42376 to address this.

If you can set up the gluster server as a pod instead, it should work; there is an example in issue #24249. Please let me know if you have any problems with this example. Thanks!

  1. Glusterfs Pod
apiVersion: v1
kind: Pod 
metadata:
  name: gluster-server
  labels:
    k8s-app: glusterfs
spec:
  containers:
  - name: gluster-server
    image: gcr.io/google_containers/volume-gluster:0.5
    ports:
      - name: gluster
        hostPort: 24007
        containerPort: 24007
        protocol: TCP
      - name: glusters
        hostPort: 49152
        containerPort: 49152
        protocol: TCP
      - name: glusterfs
        hostPort: 24008
        containerPort: 24008
        protocol: TCP
  2. Glusterfs Service
apiVersion: v1
kind: Service
metadata:
  name: gluster-service
  labels:
    app: mine
spec:
  ports:
    - port: 24007
      name: gluster
      protocol: TCP
  selector:
    k8s-app: glusterfs
  3. Gluster client
apiVersion: v1
kind: Pod
metadata:
  name: gluster-client
  labels:
    name: gluster
spec:
  containers:
  - name: app-pod
    image: redis 
    volumeMounts:
    - name: webapp 
      mountPath: /gluster
  volumes:
  - name: webapp
    glusterfs:
      endpoints: gluster-service
      path: "test_vol"
      readOnly: false
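
Once gluster-client is running, a quick way to confirm the volume actually mounted (assuming the names above):

kubectl exec gluster-client -- mount | grep gluster
kubectl exec gluster-client -- ls /gluster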