kubernetes: Mounting a ConfigMap as a volume sometimes causes a FailedMount error

For some reason, when I create a deployment with a ConfigMap mounted as a volume, a pod will sometimes get stuck in ContainerCreating, and describing the pod gives an error like this:

FirstSeen   LastSeen    Count   From                            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----                            -------------   --------    ------      -------
  3m        3m      1   {default-scheduler }                            Normal      Scheduled   Successfully assigned core-deployment-214446676-kp9k6 to gke-cluster-main-default-pool-6482d459-fjzn
  1m        1m      1   {kubelet gke-cluster-main-default-pool-6482d459-fjzn}           Warning     FailedMount Unable to mount volumes for pod "core-deployment-214446676-kp9k6_default(63a2ee79-4e8d-11e6-92bf-42010af001cd)": timeout expired waiting for volumes to attach/mount for pod "core-deployment-214446676-kp9k6"/"default". list of unattached/unmounted volumes=[core-config-files]
  1m        1m      1   {kubelet gke-cluster-main-default-pool-6482d459-fjzn}           Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "core-deployment-214446676-kp9k6"/"default". list of unattached/unmounted volumes=[core-config-files]

Here is the yaml I’m working with:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: core-deployment
spec:
  replicas: 2
  template:
    metadata:
      name: core
      labels:
        app: core
    spec:
      containers:
        - name: core
          image: gcr.io/infrastructure-1373/hello-node:v3
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: core-config-files
              mountPath: /etc/core-config
      volumes:
        - name: core-config-files
          configMap:
            name: core-config-files

and the output of kubectl describe configmaps core-config-files:

Name:       core-config-files
Namespace:  default
Labels:     <none>
Annotations:    kubectl.kubernetes.io/last-applied-configuration={"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"core-config-files","creationTimestamp":null},"data":{...long...}

Data
====
saml.yml:   381 bytes
general.yml:    1109 bytes
saml-meta.xml:  4432 bytes

and the output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean"}

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 39 (19 by maintainers)

Most upvoted comments

I’m working on a fix for this. The regression was introduced by this commit: https://github.com/kubernetes/kubernetes/commit/3567b1f9c48bb6879fbe7d36b8c634e849f59438 The actual problem is that wrappedVolumeSpec is incorrectly reused across different ConfigMap mounts: https://github.com/kubernetes/kubernetes/blob/d7150bfaeae642efc08c8ede0ed2ec8ecb340c8e/pkg/volume/configmap/configmap.go#L125

NewWrapperMounter is called here: https://github.com/kubernetes/kubernetes/blob/d7150bfaeae642efc08c8ede0ed2ec8ecb340c8e/pkg/volume/configmap/configmap.go#L140 and it patches a structure pointed to by that shared spec: https://github.com/kubernetes/kubernetes/blob/d7150bfaeae642efc08c8ede0ed2ec8ecb340c8e/pkg/kubelet/volume_host.go#L79-L91

spec.Volume.Name = wrapperVolumeName

thus causing a race condition when several ConfigMaps are mounted in parallel. Specifically, the emptyDir plugin thinks the volume is ready when it isn’t, here: https://github.com/kubernetes/kubernetes/blob/d7150bfaeae642efc08c8ede0ed2ec8ecb340c8e/pkg/volume/empty_dir/empty_dir.go#L191

This may also be a problem for other plugins that use an inner emptyDir.
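
To illustrate the kind of bug being described, here is a minimal, self-contained sketch (this is not the actual kubelet code; the types, names, and timing below are invented purely to show the effect of mutating one shared spec from parallel mounts):

// Sketch of the shared-spec race described above. Illustrative only:
// these types and names do not exist in Kubernetes.
package main

import (
	"fmt"
	"sync"
	"time"
)

// spec stands in for volume.Spec; Name stands in for spec.Volume.Name.
type spec struct {
	Name string
}

// wrappedVolumeSpec stands in for the single spec value that the configmap
// plugin reuses for every mount.
var wrappedVolumeSpec = &spec{}

func mountConfigMap(volumeName string, ready map[string]bool, mu *sync.Mutex) {
	// Each mount rewrites the *shared* spec, just like
	// spec.Volume.Name = wrapperVolumeName does.
	wrappedVolumeSpec.Name = volumeName

	// Simulate the time it takes to set up the inner emptyDir.
	time.Sleep(10 * time.Millisecond)

	// By now another goroutine may have overwritten Name, so readiness can
	// be recorded under the wrong volume.
	mu.Lock()
	ready[wrappedVolumeSpec.Name] = true
	mu.Unlock()
}

func main() {
	ready := map[string]bool{}
	var mu sync.Mutex
	var wg sync.WaitGroup

	// Two configmap volumes mounted in parallel, sharing wrappedVolumeSpec.
	for _, name := range []string{"core-config-files", "other-config-files"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			mountConfigMap(n, ready, &mu)
		}(name)
	}
	wg.Wait()

	// This usually prints a map with a single entry: both mounts were
	// recorded under whichever name was written last, so one volume is
	// never considered ready.
	fmt.Println(ready)
}

The real failure is more involved than this, but the root cause is the same: a spec shared between concurrent mounts gets its name overwritten, so readiness ends up tracked for the wrong volume and the kubelet keeps waiting until it reports FailedMount.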

I’ve been seeing this issue too, but I’m definitely only using one namespace, so there are no conflicting names, I think. I am, however, using the same ConfigMaps in different pods, which may be what triggers it?

I have a lot of ConfigMaps/Secrets and have been doing a lot of rolling upgrades as I develop things. I was on Kubernetes 1.3.0 and just did an upgrade to see if that would fix things. Unsure yet, as I only did it about 5 minutes ago.

I remember seeing an issue about how restarting the kubelet breaks things because it forgets about existing pods’ volumes. I know restarting made it worse. I also seem to remember a comment in that issue saying there may be a similar race on pod delete, which I may have seen as well. Not sure, though; I can’t find the issue right now.