minikube: storage-provisioner addon: kube-system:storage-provisioner cannot list events in the namespace
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment: minikube v0.28.2 on macOS 10.13.2 + VirtualBox
Minikube version (use minikube version): v0.28.2
- OS (e.g. from /etc/os-release): macOS 10.13.2 (cat /etc/os-release: No such file or directory)
- VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "virtualbox"
- ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///Users/lambert8/.minikube/cache/iso/minikube-v0.28.1.iso"
- Install tools: helm/tiller
- Others:
What happened:
My minikube cluster (created yesterday) has the storage-provisioner addon enabled.
At first, I was apparently in a bad state:
kubectl describe pvc yielded the familiar “the provisioner hasn’t worked yet” warning message, and the provisioner logs were complaining about some unknown connectivity issue:
$ kubectl get sc
NAME PROVISIONER AGE
standard (default) k8s.io/minikube-hostpath 40m
$ kubectl get pvc,pv -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/shr86q-cloudcmd Pending standard 20s
pvc/sz8kmm-cloudcmd Pending standard 1m
$ kubectl describe pvc/shr86q-cloudcmd -n test
Name: shr86q-cloudcmd
Namespace: test
StorageClass: standard
Status: Pending
Volume:
Labels: name=shr86q-cloudcmd
service=cloudcmd
stack=shr86q
Annotations: volume.beta.kubernetes.io/storage-provisioner=k8s.io/minikube-hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 14s (x4 over 35s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-minikube 1/1 Running 0 40m
kube-addon-manager-minikube 1/1 Running 0 41m
kube-apiserver-minikube 1/1 Running 0 41m
kube-controller-manager-minikube 1/1 Running 0 41m
kube-scheduler-minikube 1/1 Running 0 41m
kubernetes-dashboard-5498ccf677-5r975 0/1 CrashLoopBackOff 11 41m
storage-provisioner 0/1 CrashLoopBackOff 11 41m
$ kubectl logs -f storage-provisioner -n kube-system
F0912 16:43:12.951200 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
After deleting and recreating the minikube cluster (to clear the bad state) and repeating the test case, I saw the following in the logs:
$ kubectl logs -f storage-provisioner -n kube-system
Error watching for provisioning success, can't provision for claim "test/s4rdfk-cloudcmd": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list events in the namespace "test"
Error watching for provisioning success, can't provision for claim "test/spd9xt-cloudcmd": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list events in the namespace "test"
The provisioner did still create a PV and bind the PVC to it in these cases:
$ kubectl get pvc -n test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
s4rdfk-cloudcmd Bound pvc-e794fa3e-b6ac-11e8-8044-080027add193 1Mi RWX standard 12m
spd9xt-cloudcmd Bound pvc-6baa04ab-b6ad-11e8-8044-080027add193 1Mi RWX standard 8m
src67q-cloudcmd Bound pvc-2243a82c-b6ae-11e8-8044-080027add193 1Mi RWX standard 3m
What you expected to happen: The provisioner shouldn’t throw an error when provisioning was successful.
How to reproduce it (as minimally and precisely as possible):
- Bring up a fresh cluster with default addons enabled: minikube start
- Fetch a test PVC template: wget https://gist.githubusercontent.com/bodom0015/d920e22df8ff78ee05929d4c3ae736f8/raw/edccc530bf6fa748892d47130a1311fce5513f37/test.pvc.default.yaml (or use a minimal PVC like the sketch after this list)
- Create a PVC from the template: kubectl create -f test.pvc.default.yaml
- After a few seconds, check on your PVC: kubectl get pvc
- You should see that after a few seconds, your PVC is Bound to a PV
- Check the storage-provisioner logs
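For reference, any minimal PVC against the default StorageClass is enough to reproduce this. The manifest below is only a sketch: the name test-pvc is a placeholder, it lands in the default namespace, and the actual contents of test.pvc.default.yaml may differ.
# Hypothetical minimal PVC; the name is a placeholder and the gist's actual
# manifest may differ, but any claim using the default "standard" class works.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: standard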
Output of minikube logs (if applicable):
minikube logs did not seem to yield any pertinent debugging information, but the storage-provisioner pod logs did yield the following error message:
$ kubectl logs -f storage-provisioner -n kube-system
E0912 16:57:17.134782 1 controller.go:682] Error watching for provisioning success, can't provision for claim "test/s4rdfk-cloudcmd": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list events in the namespace "test"
E0912 17:00:58.710095 1 controller.go:682] Error watching for provisioning success, can't provision for claim "test/spd9xt-cloudcmd": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list events in the namespace "test"
Anything else we need to know: As a temporary manual workaround, the following seemed to work:
# Edit to add the "list" verb to the "events" resource
$ kubectl edit clusterrole -n kube-system system:persistent-volume-provisioner
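After the edit, the events rule in that ClusterRole should look roughly like the fragment below. This is a sketch: the pre-existing verbs shown are assumptions and may differ in your cluster; the only change that matters is adding "list".
# Fragment of the system:persistent-volume-provisioner ClusterRole after the edit.
# The existing verbs are assumptions; "list" is the addition.
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch", "update", "watch", "list"]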
Commits related to this issue
- Fix for kubernetes/minikube/issues/3129 — committed to mmazur/ansible-kubevirt-modules by mmazur 5 years ago
This is still present in:
Steps to Reproduce
Here is the simplest reproduction of this bug.
Step 1: minikube start and verify the cluster has started up
Step 2: Create any PVC; note that it does successfully provision and bind to a PV
Step 3: Check the provisioner logs to see the error message
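You can also confirm the missing permission directly, without digging through the provisioner logs, by impersonating the service account (the test namespace here just matches the earlier examples and is otherwise arbitrary):
$ kubectl auth can-i list events --as=system:serviceaccount:kube-system:storage-provisioner -n test
no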
The Problem
This error, while innocuous, indicates that the built-in ClusterRole named system:persistent-volume-provisioner, which is bound to the service account used by the storage-provisioner Pod, is missing one or more required permissions, namely the ability to list the events resource.
Possible Fix
If this permission is needed more often than not, the correct long-term fix might be to open a PR against kubeadm (or the appropriate Kubernetes repo) that adds the missing permission to the system:persistent-volume-provisioner ClusterRole.
A simpler short-term fix would be to create a thin ClusterRole (or a full copy of system:persistent-volume-provisioner) that grants the missing permission (sketched below). This could possibly go here:
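As a rough sketch of that short-term fix, assuming nothing about where it would live in the minikube addon manifests (the storage-provisioner-events names below are placeholders, not existing objects):
# Hypothetical thin ClusterRole granting only the missing events permissions,
# bound to the existing kube-system:storage-provisioner service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: storage-provisioner-events
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner-events
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: storage-provisioner-events
subjects:
- kind: ServiceAccount
  name: storage-provisioner
  namespace: kube-system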
Still present
In my case, minikube worked well until I stopped it. Now it fails at startup: the VM is running but never finishes configuring.
Does anyone know where in the VM the config files are stored, so I can edit them manually?
@tstromberg I installed a DB, in my case RethinkDB, with tiller/helm. Wait for it to install and provision everything.
After VM reboot I keep getting: