rook: failed to set login credentials for the ceph dashboard
Is this a bug report or feature request?

- Bug Report

The operator fails to initialize the dashboard:

op-mgr: failed modules: "dashboard". failed to initialize dashboard: failed to set login credentials for the ceph dashboard: failed to set login creds on mgr: failed to complete command for set dashboard creds

Deviation from expected behavior:

Expected behavior: the cluster should be accessible via the dashboard.

How to reproduce it (minimal and precise):
File(s) to submit:
- Cluster CR (custom resource), typically called cluster.yaml, if necessary
- Operator's logs, if necessary
- Crashing pod(s) logs, if necessary
cluster yaml config with 3 mon and host networking:
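The reporter's actual cluster.yaml was not pasted. Below is a minimal sketch of what a CephCluster CR with 3 mons and host networking might look like for Rook v1.5 / Ceph 15.2.8; the namespace, image tag, and storage settings are assumptions, not the reporter's real values.

```shell
# Sketch only: a CephCluster CR with 3 mons and host networking,
# assuming the rook-ceph namespace and Rook v1.5.x defaults.
cat <<'EOF' | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  network:
    provider: host
  dashboard:
    enabled: true
    ssl: true
  storage:
    useAllNodes: true
    useAllDevices: true
EOF
```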
To get logs, use:

kubectl -n <namespace> logs <pod name>

When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.
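For this issue, the relevant logs come from the operator and the mgr pod. A short sketch, assuming the default rook-ceph namespace and standard Rook labels (adjust names for your deployment):

```shell
kubectl -n rook-ceph get pods                                       # find the operator and mgr pod names
kubectl -n rook-ceph logs deploy/rook-ceph-operator | grep -i dashboard
kubectl -n rook-ceph logs -l app=rook-ceph-mgr | grep -i dashboard  # assumes the app=rook-ceph-mgr label
```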
Environment:

- OS (e.g. from /etc/os-release): CentOS 7
- Kernel (e.g. uname -a): 3.10.0-1062.el7.x86_64
- Cloud provider or hardware configuration:
- Rook version (use rook version inside of a Rook Pod): Rook v1.5.6
- Storage backend version (e.g. for ceph do ceph -v): ceph version 15.2.8
- Kubernetes version (use kubectl version): v1.18.2
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
- Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
operator log:
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 21 (7 by maintainers)
@travisn Still not able to access the dashboard; the rook ceph-mgr is not listening on port 8443:

k -n rook-ceph logs rook-ceph-mgr-a-78c45f5c46-sql85 | grep Serving
[23/Apr/2021:16:40:26] ENGINE Serving on http://10.x.x.x:9283

Also, the operator log shows:

2021-04-30 04:17:26.871838 E | op-mgr: failed modules: "dashboard". failed to initialize dashboard: failed to set login credentials for the ceph dashboard: failed to set login creds on mgr: failed to complete command for set dashboard creds: context deadline exceeded

Not sure what the workaround for this is? Has this been assigned to someone yet?
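One way to narrow this down is from the Rook toolbox: check whether the mgr actually has the dashboard module enabled and serving, and, if the operator keeps timing out, set the admin password by hand. This is a sketch using standard Ceph/Rook commands, assuming the default rook-ceph namespace, the rook-ceph-tools deployment, and the admin dashboard user; it is not a verified fix for this specific cluster.

```shell
# Run inside the toolbox, e.g.:
#   kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
ceph mgr module ls | grep -A5 enabled_modules   # is "dashboard" among the enabled modules?
ceph mgr services                               # should list a "dashboard" URL (port 8443 by default)
ceph mgr module disable dashboard               # bounce the module if it appears stuck
ceph mgr module enable dashboard

# Retrieve the password Rook generated for the "admin" user
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath="{['data']['password']}" | base64 --decode

# If the operator still cannot set credentials, set them manually
# (Octopus reads the password from a file via -i)
echo -n 'SOME_PASSWORD' > /tmp/dashpass
ceph dashboard ac-user-set-password admin -i /tmp/dashpass
```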
@travisn Yes, that is correct. After re-zapping the disks, the OSDs are back in service. Thank you!
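For reference, "zapping" a disk so Rook/Ceph can reuse it usually means wiping the partition table and any leftover Ceph metadata. A sketch of the typical cleanup, assuming the device is /dev/sdX and is no longer in use by a running OSD (this follows general Rook cleanup guidance, not commands the reporter posted):

```shell
DISK="/dev/sdX"               # device to reclaim -- double-check before running
sgdisk --zap-all "$DISK"      # wipe GPT/MBR partition tables
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync   # clear leftover metadata at the start of the disk
blkdiscard "$DISK" || true    # optional: discard blocks on SSD/NVMe
partprobe "$DISK"             # re-read the partition table
rm -rf /var/lib/rook          # only when tearing down and recreating the whole cluster
```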