rook: User "system:serviceaccount:rook:default" cannot get configmaps
Using kube 1.8 and installing like:

```
helm init --upgrade
helm install --namespace rook --name rook rook-alpha/rook
kubectl -n rook create -f rook-cluster.yaml
```
with `rook-cluster.yaml` as:
```yaml
---
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  versionTag: master
  dataDirHostPath:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb
    metadataDevice:
    location:
    storeConfig:
      storeType: filestore
---
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: rook
  namespace: rook
spec:
  replication:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: rook
```
I'm getting:

```
failed to get data dirs. failed to load OSD dir map: configmaps "rook-ceph-osd-******-config" is forbidden: User "system:serviceaccount:rook:default" cannot get configmaps in the namespace "rook"
```

on the rook-ceph-osd pods.
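For reference, the kind of namespace-scoped RBAC that would satisfy this error looks roughly like the sketch below. The object name `rook-ceph-osd-config` and the verb list are illustrative assumptions, not the objects Rook actually ships:

```yaml
# Sketch only: an illustrative Role/RoleBinding granting the default
# service account in the "rook" namespace access to configmaps there.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rook-ceph-osd-config   # hypothetical name
  namespace: rook
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rook-ceph-osd-config   # hypothetical name
  namespace: rook
subjects:
- kind: ServiceAccount
  name: default
  namespace: rook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-osd-config
```

A namespaced `Role` plus `RoleBinding` (rather than a `ClusterRoleBinding`) keeps the granted privileges scoped to the one namespace the OSD pods actually need.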
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 22 (11 by maintainers)
@discostur A `ClusterRoleBinding` is not used for the binding. For better security a `RoleBinding` is used. This `RoleBinding` is created when you deploy the Rook Ceph Cluster (as seen in https://rook.io/docs/rook/v0.8/ceph-cluster-crd.html#common-cluster-resources or the example `cluster.yaml` at https://github.com/rook/rook/blob/release-0.8/cluster/examples/kubernetes/ceph/cluster.yaml#L23-L35). It is not created through the Helm Chart, which is expected; the `operator.yaml` also does not contain it, as it is created "per" Ceph Cluster to keep the available privileges to a minimum.

The Rook Ceph Helm Chart is installed in namespace/rook-ceph-system and my Ceph cluster is installed in namespace/rook-ceph (as in the official documentation). The problem is that the Rook Helm Chart creates clusterrole/rook-ceph-cluster-mgmt but no clusterrolebinding/rook-ceph-cluster-mgmt. After manually creating a binding and pointing it to the rook service account, everything is working now.
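The manually created binding @discostur describes might look roughly like the sketch below; the binding name, the subject's service account name, and its namespace are assumptions based on the default install layout described above:

```yaml
# Sketch only: binds the ClusterRole created by the Rook Helm Chart
# to the service account the Ceph cluster pods run under.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-ceph-cluster-mgmt   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt   # created by the Rook Helm Chart
subjects:
- kind: ServiceAccount
  name: rook-ceph-cluster        # assumption: the cluster's service account
  namespace: rook-ceph
```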
Thanks @galexrt for helping out!
Yesterday I saw a different but similar error running in the same version configuration as @grebois. The fix for me was setting both to v0.5.1.