kubernetes: modified subpath configmap mount fails when container restarts

/kind bug

What happened: When a container uses a ConfigMap mounted with the subPath option, and the ConfigMap is changed and the container (but not the pod) then restarts, mounting the ConfigMap fails:

# mount a single file into a folder with preexisting data
        volumeMounts:
        - name: extra-cfg
          mountPath: /etc/puppetlabs/puppetdb/conf.d/extra.ini
          subPath: extra.ini
# change something in the configmap
kubectl edit configmap extra-cfg
# kill PID 1 so that only the container restarts, not the pod
kubectl exec podname -- kill 1
kubectl describe pod podname

Warning Failed 5s kubelet, kworker-be-intg-iz2-bap006 Error: failed to start container "puppetdb": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting \"/var/lib/kubelet/pods/b9ffd644-af98-11e8-a05e-246e96748774/volume-subpaths/extra-cfg/puppetdb/2\" to rootfs \"/var/lib/docker/overlay2/c8790b7f3f690c1ef7a582672e2d153062ff6b4ed1ee21aab1158897310fd3d1/merged\" at \"/var/lib/docker/overlay2/c8790b7f3f690c1ef7a582672e2d153062ff6b4ed1ee21aab1158897310fd3d1/merged/etc/puppetlabs/puppetdb/conf.d/extra.ini\" caused \"no such file or directory\""": unknown

(The pod this happens on consists of multiple containers; I have not yet tested whether it also happens in a single-container pod.)

One has to delete the pod to fix the problem.
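
For reference, a minimal self-contained manifest along the lines of the snippets above should reproduce the problem; the pod name, image, and extra.ini contents below are placeholders, and only the subPath volumeMount of the ConfigMap matters:

apiVersion: v1
kind: ConfigMap
metadata:
  name: extra-cfg
data:
  extra.ini: |
    # placeholder content; any key/value will do
    setting = 1
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-repro
spec:
  volumes:
  - name: extra-cfg
    configMap:
      name: extra-cfg
  containers:
  - name: puppetdb
    image: ubuntu:bionic
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: extra-cfg
      mountPath: /etc/puppetlabs/puppetdb/conf.d/extra.ini
      subPath: extra.ini

Editing the ConfigMap and then killing PID 1 inside the container should hit the same mount error as above.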

Environment: Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

CoreOS 1800.7.0, Docker 18.03.1

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 14
  • Comments: 39 (13 by maintainers)

Most upvoted comments

I have the same issue with docker 1.11.5 using AKS. I found a workaround using the projected key in the volumes definition.

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  namespace: test-cfg
data:
  common1.json: |
    {
      "logger": {
        "enabled": true,
        "level": "info",
        "extreme": false,
        "stream": "stdout"
      }
    }
  common2.json: |
    {
      "test": 4         
    }
---
apiVersion: v1
kind: Pod 
metadata:
  name: issue
  namespace: test-cfg
spec:
  volumes:
  - name: conf-volume
    configMap:
      name: test-configmap
  containers:
  - name: test
    image: ubuntu:bionic
    command: ["sleep", "30"]
    resources:
      requests:
        cpu: 100m
    volumeMounts:
      - name: conf-volume
        mountPath: /etc/common1.json
        subPath: common1.json
      - name: conf-volume
        mountPath: /etc/common2.json
        subPath: common2.json
---
apiVersion: v1
kind: Pod 
metadata:
  name: working
  namespace: test-cfg
spec:
  volumes:
  - name: conf-volume
    projected:
      sources:
      - configMap:
          name: test-configmap
          items:
            - key: common1.json
              path: common1.json
            - key: common2.json
              path: common2.json
  containers:
  - name: test-container
    image: ubuntu:bionic
    imagePullPolicy: "Always"
    resources:
      requests:
        cpu: 100m
    volumeMounts:
      - name: conf-volume
        mountPath: /test
    command: ["sleep", "30"]

With this configuration, the issue pod will fail to restart, while the working pod keeps working. The steps to reproduce are the same as described here

Can we get this issue reopened?

Any reason this was closed? Looks like it’s still an issue.

It was closed because https://github.com/kubernetes/kubernetes/pull/89629 was submitted against the development branch. But you’re still going to see the problem on older versions unless a fix is backported.

And as I mentioned in https://github.com/kubernetes/kubernetes/issues/93691 I think the backport to 1.18.1 was buggy, which makes me wonder about other versions.

To work around the problem, ensure that your deployments recreate the pods when your configmaps change, for example by adding a checksum of the configmaps to the annotations of the deployment/statefulset pod template, as sketched below.
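
As a sketch of that checksum approach for a Helm chart (assuming the ConfigMap is rendered from templates/configmap.yaml in the same chart), the pod template can carry an annotation that changes whenever the rendered ConfigMap changes, which forces a rollout:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # re-renders whenever templates/configmap.yaml changes, so the
        # Deployment rolls its pods instead of restarting containers in place
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Without Helm, the same effect can be achieved by computing the checksum in the deploy pipeline and writing it into the pod template annotation.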

Is there a better solution in versions before v1.19?

We are also experiencing this issue (GKE with k8s 1.15 & 1.16). As a workaround we have used a slightly simpler version of the workaround shown by @Zero-2, using the items field of ConfigMapVolumeSource instead of a projected volume, for example:

volumes:
- name: app-config
  configMap:
    name: app-config
    items:
    - key: config.json
      path: config.json

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#configmapvolumesource-v1-core
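
Presumably, as in the projected-volume example above, the volume is then mounted as a whole directory instead of per-file subPath mounts; a hypothetical mount for the snippet above could look like:

    volumeMounts:
    - name: app-config
      # no subPath: the kubelet's atomic symlink update can propagate
      # ConfigMap changes without remounting anything
      mountPath: /etc/app-config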

Issue seen on k8s 1.14.3, Docker 18.06.3-ce, CoreOS Container Linux 2135.5.0, with a subPath mount from a Secret in the same directory where another Secret also has a subPath mount.

This seems related to:

moby/moby#37083

Docker fixed this bug in 18.06.03-ce, but I also saw this issue with Kubernetes v1.13.2 and Docker 18.06.03-ce.