kubernetes: Containerized cluster cannot mount default secrets
When running a containerized Kubernetes cluster, I cannot spawn pods: mounting their secret volumes fails with "file does not exist" errors:
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
EOF
pod "busybox-sleep" created
$ kubectl get pods
NAME                   READY     STATUS              RESTARTS   AGE
busybox-sleep          0/1       ContainerCreating   0          15s
k8s-master-127.0.0.1   3/3       Running             0          8m
$ kubectl describe pod busybox-sleep
Name:                           busybox-sleep
Namespace:                      default
Image(s):                       busybox
Node:                           127.0.0.1/127.0.0.1
Start Time:                     Fri, 04 Dec 2015 22:48:56 +0000
Labels:                         <none>
Status:                         Pending
Reason:
Message:
IP:
Replication Controllers:        <none>
Containers:
  busybox:
    Container ID:
    Image:                      busybox
    Image ID:
    State:                      Waiting
      Reason:                   ContainerCreating
    Ready:                      False
    Restart Count:              0
    Environment Variables:
Conditions:
  Type          Status
  Ready         False
Volumes:
  default-token-6dwl1:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-6dwl1
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
32s 32s 1 {scheduler } Scheduled Successfully assigned busybox-sleep to 127.0.0.1
32s 10s 3 {kubelet 127.0.0.1} FailedMount Unable to mount volumes for pod "busybox-sleep_default": IsLikelyNotMountPoint("/var/lib/kubelet/pods/333c85c3-9ad9-11e5-aeed-f6b7b4d3b3d7/volumes/kubernetes.io~secret/default-token-6dwl1"): file does not exist
32s 10s 3 {kubelet 127.0.0.1} FailedSync Error syncing pod, skipping: IsLikelyNotMountPoint("/var/lib/kubelet/pods/333c85c3-9ad9-11e5-aeed-f6b7b4d3b3d7/volumes/kubernetes.io~secret/default-token-6dwl1"): file does not exist
The secrets are there as expected:
$ kubectl get secrets
NAME                  TYPE                                   DATA      AGE
default-token-6dwl1   kubernetes.io/service-account-token   2         8m
Here are the full kubelet logs: https://gist.github.com/2opremio/6fcbae55d83f20bc3c82
The single-node containerized cluster is created using this script, which is based on https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md and spawns a cluster from a recent commit of master (https://github.com/kubernetes/kubernetes/tree/81b6ac47557359429e519d889d9ad48b5c8a2411).
This is the Docker version I am running:
$ docker info
Containers: 7
Images: 62
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 996.2 MiB
Name: default
ID: 77RC:MDFB:UNPM:62Z6:FYLF:TZTG:43NZ:77DD:IL22:QI6S:LDSV:UN6W
Debug mode (server): true
File Descriptors: 54
Goroutines: 90
System Time: 2015-12-04T23:01:25.308787463Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
 provider=virtualbox
$ docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:49:29 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64
The client runs on an OS X laptop and the daemon runs inside a VM provisioned with: docker-machine create --driver virtualbox --engine-storage-driver overlay default
Surprisingly, everything works when using a Vagrant VM provisioned with https://github.com/weaveworks/weave/blob/vagrant-pin-docker/Vagrantfile, so the problem must somehow be environmental, but I haven't managed to pin down the difference that causes it.
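One environmental difference worth checking is mount propagation: the containerized kubelet creates the secret tmpfs in its own mount namespace, and that mount has to propagate to the host to be visible to Docker. A quick way to compare the two VMs (a diagnostic sketch; /var/lib/kubelet is the default kubelet root):

$ grep /var/lib/kubelet /proc/self/mountinfo
# Look for a "shared:N" tag in the optional fields; if the relevant
# mount is private (no "shared:" tag), mounts created inside the
# kubelet container never become visible on the host.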
About this issue
- State: closed
- Created 9 years ago
- Comments: 38 (20 by maintainers)
Hey Guys,
Thanks to @gravis in https://github.com/openshift/origin/issues/3072, I think I figured it out (at least for the single-node containerized Kubernetes environment: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md).
The steps I followed were (sketched below):
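First, make the kubelet state directory on the host a shared mount, so that mounts the containerized kubelet creates (such as the secret tmpfs volumes) propagate back to the host. Something like the following; exact paths may differ on your setup:

$ sudo mkdir -p /var/lib/kubelet
$ sudo mount --bind /var/lib/kubelet /var/lib/kubelet
$ sudo mount --make-shared /var/lib/kubelet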
and then the actual command to start the kubelet.
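Something along these lines (the image name and the remaining flags are taken from the docker.md guide, so treat this as a sketch rather than my exact invocation); note the rw,shared volume mode, which requires Docker 1.10+, and the absence of --containerized:

$ docker run -d \
    --net=host \
    --pid=host \
    --privileged \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared \
    --volume=/var/run:/var/run:rw \
    gcr.io/google_containers/hyperkube-amd64:v1.2.0-alpha.7 \
    /hyperkube kubelet \
        --hostname-override=127.0.0.1 \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --allow-privileged=true \
        --v=2

With /var/lib/kubelet already shared on the host, the kubelet no longer needs --containerized for its volume mounts to be visible to Docker and the host.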
I am using Kubernetes v1.2.0-alpha.7. Please note that in the main command I added the shared mount and removed the --containerized parameter. I am also running the latest version of Docker (1.10).
Hope this is useful to someone. I can submit a pull request to update the documentation for the single-node setup if others can validate that these steps work and are the right approach.