kubernetes: "CreateContainerConfigError: failed to prepare subPath for volumeMount" error with configMap volume

/kind bug

What happened: Upgraded from v1.9.2 to v1.9.4. While bringing the cluster back up (still experimenting with different upgrade strategies), I noticed that every pod that mounts a configMap via subPath threw an error similar to the one below:

Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537327    1124 kubelet_pods.go:248] failed to prepare subPath for volumeMount "config" of container "mumble": subpath "/var/lib/kubelet/pods/66fa673c-266d-11e8-8ebf-00155d00a406/volumes/kubernetes.io~configmap/config/..2018_03_13_03_19_55.572152209/mumble.ini" not within volume path "/var/lib/kubelet/pods/66fa673c-266d-11e8-8ebf-00155d00a406/volumes/kubernetes.io~configmap/config"
Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537452    1124 kuberuntime_manager.go:734] container start failed: CreateContainerConfigError: failed to prepare subPath for volumeMount "config" of container "mumble"
Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537548    1124 pod_workers.go:186] Error syncing pod 66fa673c-266d-11e8-8ebf-00155d00a406 ("mumble-74798bc4c-xjwrn_default(66fa673c-266d-11e8-8ebf-00155d00a406)"), skipping: failed to "StartContainer" for "mumble" with CreateContainerConfigError: "failed to prepare subPath for volumeMount \"config\" of container \"mumble\""

What you expected to happen: The configMap to mount via subPath, as it did before the upgrade.

Existing behavior discussed here:
https://stackoverflow.com/questions/48561338/how-to-correctly-mount-configmap-with-subpath-in-kubernetes-not-update-configs
https://stackoverflow.com/questions/44325048/kubernetes-configmap-only-one-file

How to reproduce it (as minimally and precisely as possible):

kind: ConfigMap
apiVersion: v1
metadata:
  name: mumble-config
data:
  mumble.ini: |
    # Murmur configuration file.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mumble
  labels:
    app: mumble
spec:
  template:
    metadata:
      labels:
        app: mumble
    spec:
      containers:
      - image: custom-image
        name: mumble
        volumeMounts:
        - name: config
          mountPath: /data/mumble.ini
          subPath: mumble.ini
      volumes:
      - name: config
        configMap:
          name: mumble-config

Anything else we need to know?:

Looking further into it, this looks like a possible regression caused by #60813?

https://github.com/kubernetes/kubernetes/pull/61045/files#diff-16665fc8caff20aa7d63896dc4e3dd7fR295
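One plausible sketch of the failure mode, consistent with the log above: the configMap atomic writer publishes data under a timestamped directory whose name begins with `..` (e.g. `..2018_03_13_03_19_55.572152209`), and a validation that treats any relative path starting with `..` as an upward traversal would wrongly flag it. The Python below is purely illustrative (function names, the placeholder pod UID, and the exact check are assumptions, not the kubelet's actual Go code):

```python
import os.path

def naive_within(volume_path, candidate):
    # Buggy-style check: any relative path beginning with ".." is
    # treated as escaping the volume -- but the atomic writer's
    # timestamped dirs also start with "..".
    rel = os.path.relpath(candidate, volume_path)
    return not rel.startswith("..")

def segment_aware_within(volume_path, candidate):
    # Safer check: only a leading ".." *path segment* escapes.
    rel = os.path.relpath(candidate, volume_path)
    return rel != ".." and not rel.startswith(".." + os.sep)

# Placeholder pod UID; the shape matches the volume path in the log.
vol = "/var/lib/kubelet/pods/UID/volumes/kubernetes.io~configmap/config"
resolved = vol + "/..2018_03_13_03_19_55.572152209/mumble.ini"

print(naive_within(vol, resolved))          # False: "..2018_..." trips the prefix test
print(segment_aware_within(vol, resolved))  # True: it is a normal subdirectory
```

Under this reading, the subPath really is inside the volume; only a prefix test that confuses `..2018_...` with `../` would report it as "not within volume path".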

Environment:

  • Kubernetes version (use kubectl version):
version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:29:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: Self-hosted: 3 VMs (1 master, 2 nodes).
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux master 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 45 (27 by maintainers)

Most upvoted comments

FYI, in GKE this doesn’t seem to interfere with deploying new pods, but it keeps existing pods stuck in status “Terminating” after they are requested to be deleted:

Mar 19 09:19:53 gke-app-cluster-app-cluster-pool-78a17572-lxk8 kubelet[1331]: E0319 09:19:53.528768    1331 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\" (\"c89a1204-2b4f-11e8-aca8-42010a9c0114\")" failed. No retries permitted until 2018-03-19 09:21:55.528737036 +0000 UTC m=+3052.192633373 (durationBeforeRetry 2m2s). Error: "error cleaning subPath mounts for volume \"<redacted>\" (UniqueName: \"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\") pod \"c89a1204-2b4f-11e8-aca8-42010a9c0114\" (UID: \"c89a1204-2b4f-11e8-aca8-42010a9c0114\") : error checking /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0 for mount: lstat /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0/..: not a directory

Yes, this applies to 1.7.14, 1.8.9, and 1.9.4. Point releases to address it are scheduled for 3/19.

After downgrading to v1.9.3, the problem no longer appears.

Is there a workaround for this issue?

There is not.

Edit: actually, you can mount the entire configMap (or secret) elsewhere in your container and symlink the path you care about to the location you want (in an initContainer, etc.). As @joelsmith noted above, that’s actually a better approach, since it lets you receive updates to the configMap data as they propagate into the mounted volume.
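A minimal sketch of that workaround for the repro above, assuming the `mumble-config` configMap and `custom-image` from the report (the `busybox` image, mount paths, and the `config-dir`/`data` volume names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mumble
spec:
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
      - name: link-config
        image: busybox
        # Symlink into a shared emptyDir; no subPath mount is needed.
        command: ["sh", "-c", "ln -sf /config/mumble.ini /data/mumble.ini"]
        volumeMounts:
        - name: config-dir
          mountPath: /config
        - name: data
          mountPath: /data
      containers:
      - name: mumble
        image: custom-image
        volumeMounts:
        # The main container mounts both, so the symlink resolves
        # and configMap updates still propagate into /config.
        - name: config-dir
          mountPath: /config
        - name: data
          mountPath: /data
      volumes:
      - name: config-dir
        configMap:
          name: mumble-config
      - name: data
        emptyDir: {}
```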

I had the same issue. I am not sure whether this behaviour is intended and whether we should use Pod Security Policies to allow the mounting of configMaps?

GKE already rolled out the configMap patch last week. I am looking into the cleanup issue.