kubernetes: "CreateContainerConfigError: failed to prepare subPath for volumeMount" error with configMap volume
/kind bug
Fix status:
- merged in master in https://github.com/kubernetes/kubernetes/pull/61080, will be in 1.10.0-rc.1
- merged in release-1.9 branch in https://github.com/kubernetes/kubernetes/pull/61107, in v1.9.5
- merged in release-1.8 branch in https://github.com/kubernetes/kubernetes/pull/61108, in v1.8.10
- merged in release-1.7 branch in https://github.com/kubernetes/kubernetes/pull/61109, in v1.7.15
What happened: Upgraded from v1.9.2 to v1.9.4. I began to bring the cluster up (still experimenting with different upgrade strategies) and noticed that every pod that mounts a configMap via subPath throws an error similar to the one below:
Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537327 1124 kubelet_pods.go:248] failed to prepare subPath for volumeMount "config" of container "mumble": subpath "/var/lib/kubelet/pods/66fa673c-266d-11e8-8ebf-00155d00a406/volumes/kubernetes.io~configmap/config/..2018_03_13_03_19_55.572152209/mumble.ini" not within volume path "/var/lib/kubelet/pods/66fa673c-266d-11e8-8ebf-00155d00a406/volumes/kubernetes.io~configmap/config"
Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537452 1124 kuberuntime_manager.go:734] container start failed: CreateContainerConfigError: failed to prepare subPath for volumeMount "config" of container "mumble"
Mar 12 22:30:27 node02 kubelet[1124]: E0312 22:30:27.537548 1124 pod_workers.go:186] Error syncing pod 66fa673c-266d-11e8-8ebf-00155d00a406 ("mumble-74798bc4c-xjwrn_default(66fa673c-266d-11e8-8ebf-00155d00a406)"), skipping: failed to "StartContainer" for "mumble" with CreateContainerConfigError: "failed to prepare subPath for volumeMount \"config\" of container \"mumble\""
What you expected to happen: To mount the configMap.
Existing behavior:
- https://stackoverflow.com/questions/48561338/how-to-correctly-mount-configmap-with-subpath-in-kubernetes-not-update-configs
- https://stackoverflow.com/questions/44325048/kubernetes-configmap-only-one-file
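The short version of those answers: a configMap file mounted via subPath never receives updates, while a whole-directory mount does. A minimal sketch of the directory-mount alternative, assuming the mumble-config ConfigMap from the reproduction below (the /etc/mumble mount point is illustrative, not from the thread):

containers:
- name: mumble
  image: custom-image
  volumeMounts:
  - name: config
    mountPath: /etc/mumble    # mumble.ini appears as /etc/mumble/mumble.ini and tracks updates
volumes:
- name: config
  configMap:
    name: mumble-config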
How to reproduce it (as minimally and precisely as possible):
kind: ConfigMap
apiVersion: v1
metadata:
  name: mumble-config
data:
  mumble.ini: |
    # Murmur configuration file.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mumble
  labels:
    app: mumble
spec:
  template:
    metadata:
      labels:
        app: mumble
    spec:
      containers:
      - image: custom-image
        name: mumble
        volumeMounts:
        - name: config
          mountPath: /data/mumble.ini
          subPath: mumble.ini
      volumes:
      - name: config
        configMap:
          name: mumble-config
Anything else we need to know?:
Looking further into it, this looks like a possible regression caused by #60813? The configMap volume serves its files through symlinks into a timestamped directory whose name starts with ".." (visible in the log above), which the new subPath validation apparently rejects.
https://github.com/kubernetes/kubernetes/pull/61045/files#diff-16665fc8caff20aa7d63896dc4e3dd7fR295
Environment:
- Kubernetes version (use kubectl version):
  version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:29:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: self-hosted, 3 VMs (1 master, 2 nodes).
- OS (e.g. from /etc/os-release):
  NAME="Ubuntu"
  VERSION="16.04.4 LTS (Xenial Xerus)"
  ID=ubuntu
  ID_LIKE=debian
  PRETTY_NAME="Ubuntu 16.04.4 LTS"
  VERSION_ID="16.04"
  HOME_URL="http://www.ubuntu.com/"
  SUPPORT_URL="http://help.ubuntu.com/"
  BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
  VERSION_CODENAME=xenial
  UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a):
  Linux master 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
About this issue
- Original URL: https://github.com/kubernetes/kubernetes/issues/61076
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 45 (27 by maintainers)
Commits related to this issue
- Disabled secure communications on influxdb due to issue with 1.9.4: K8s 1.9.4 forces ConfigMaps to be mounted read-only, so our hack to enable secure communications by mounting a ConfigMap with config... — committed to cloudfoundry-incubator/kubo-release by tvs 6 years ago
- pin version to 1.9.3 due to a bug with 1.9.4. 1.9.4 bug: https://github.com/kubernetes/kubernetes/issues/61076 — committed to ian28223/kubernetes-vagrant-coreos-cluster by ian28223 6 years ago
- Merge pull request #8494 from wallyworld/caas-meterstatus https://github.com/juju/juju/pull/8494 ## Description of change CAAS models used to arbitrarily disallow meter status and workload versio... — committed to juju/juju by jujubot 6 years ago
- Revert "Kubernetes: 1.9.4" This reverts commit 409136ec318012344c24a2d326481708ab0fe053. There is a wonderful bug in 1.9.4: https://github.com/kubernetes/kubernetes/issues/61076 So, ride back to 1.9... — committed to evilmartians/chef-kubernetes by Bregor 6 years ago
- Merge pull request #8486 from wallyworld/k8spods-extra-config https://github.com/juju/juju/pull/8486 ## Description of change We now allow charms to specify some k8s specific pod config when send... — committed to juju/juju by jujubot 6 years ago
- update to k8s 1.9.5 (fixes https://github.com/kubernetes/kubernetes/issues/61076), sub-make ansible targets — committed to ssube/build-tools by ssube 6 years ago
FYI, in GKE this doesn't seem to interfere with deploying new pods, but it leaves exiting pods stuck in "Terminating" status after they are requested to be deleted.
Yes, this applies to 1.7.14, 1.8.9, and 1.9.4. Point releases to address it are scheduled for 3/19.
After downgrading to v1.9.3, the problem no longer appears.
There is not. Edit: actually, you can mount the entire configMap (or secret) elsewhere in your container and symlink the path you care about to the location you want (in an initContainer, etc.). As @joelsmith noted above, that's actually a better approach, since it lets you receive updates to the configMap data that propagate into the mounted volume.
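A minimal sketch of that workaround, reusing the mumble-config ConfigMap and custom-image from the reproduction above (the /config mount point, the data emptyDir, and the busybox init image are illustrative choices, not from the thread):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mumble
spec:
  template:
    metadata:
      labels:
        app: mumble
    spec:
      initContainers:
      - name: link-config
        image: busybox                # illustrative; any image with a shell works
        command: ["sh", "-c", "ln -sf /config/mumble.ini /data/mumble.ini"]
        volumeMounts:
        - name: config
          mountPath: /config          # whole configMap mounted, no subPath
        - name: data
          mountPath: /data
      containers:
      - name: mumble
        image: custom-image
        volumeMounts:
        - name: config
          mountPath: /config          # symlink target must be visible here too
        - name: data
          mountPath: /data            # app still reads /data/mumble.ini, via the symlink
      volumes:
      - name: config
        configMap:
          name: mumble-config
      - name: data
        emptyDir: {}

Because the configMap is mounted as a whole directory, updates to mumble-config propagate into /config, and the symlink under /data keeps pointing at the fresh copy.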
I had the same issue. I am not sure if this behaviour is intended and whether we should use Pod Security Policies to allow the mounting of config maps?
GKE already released last week with the configMap patch. I am looking into the cleanup issue.