Rook 1.0.3 throws 500 errors in the Ceph dashboard
**Is this a bug report or feature request?**
- Bug Report
**Deviation from expected behavior:**
In 0.9.3, the dashboard worked correctly. In 1.0.3, it comes up and mostly works, but throws 500 errors in the application.

**Expected behavior:** No red 500 error boxes in the dashboard.
**How to reproduce it (minimal and precise):**
Apply common.yaml, operator.yaml, and cluster.yaml from the stock examples in 1.0.3:
```sh
kubectl apply -f rook/common.yaml
kubectl apply -f rook/operator.yaml
kubectl apply -f rook/cluster.yaml
```
cluster.yaml:
```yaml
...
dashboard:
  enabled: true
  ssl: false
storage:
  useAllNodes: true
  useAllDevices: false
  # Important: Directories should only be used in pre-production environments
  directories:
  - path: /var/lib/rook
```
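As a quick sanity check after applying the manifests, the mgr pod and dashboard service can be verified (a minimal sketch, assuming the stock `rook-ceph` namespace and the default names/labels from the 1.0.3 examples):
```sh
# Confirm the mgr is up and the dashboard service exists.
kubectl -n rook-ceph get pods -l app=rook-ceph-mgr
kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard
```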
Istio ingress:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ceph
  namespace: default
spec:
  hosts:
  - "ceph.example.com"
  - "ceph.green.example.com"
  - "ceph.blue.example.com"
  gateways:
  - elb-gateway.istio-system.svc.cluster.local
  http:
  - match:
    route:
    - destination:
        port:
          number: 8443
        host: rook-ceph-mgr-dashboard.rook-ceph.svc.cluster.local
```
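To rule out the Istio layer, the dashboard can be hit directly over a port-forward (a hedged sketch, assuming the service listens on 8443 as routed above; with `ssl: false` the scheme is plain HTTP):
```sh
# Bypass the ingress entirely; if 500s still appear here, the problem is in
# the mgr dashboard itself rather than the Istio VirtualService.
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443 &
curl -v http://localhost:8443/
```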
**Environment**:
* OS (e.g. from /etc/os-release): CoreOS
* Cloud provider or hardware configuration: AWS/Kops
* Kubernetes version (use `kubectl version`):
```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.6", GitCommit:"ab91afd7062d4240e95e51ac00a18bd58fddd365", GitTreeState:"clean", BuildDate:"2019-02-26T12:59:46Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
```
* Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox):
```console
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@ip-10-132-3-211 /]# ceph health
HEALTH_WARN mons a,b are low on available space
```
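The mon space warning can be chased down separately (a hedged sketch, assuming directory-backed mons under /var/lib/rook as in the cluster.yaml above and the standard `app=rook-ceph-mon` label):
```sh
# Check free space on the filesystem backing the mon data directory;
# this warning fires when the mon's disk drops below its availability threshold.
MON_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-mon -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec "$MON_POD" -- df -h /var/lib/rook
```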
**Follow-up from maintainers:**

Was this an upgrade, or a fresh cluster setup? Do you see multiple manager pods running?
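A quick way to answer the second question (a sketch assuming the standard `app=rook-ceph-mgr` label on Rook's mgr deployments):
```sh
# List mgr pods; more than one Running mgr could explain inconsistent
# dashboard responses behind the service.
kubectl -n rook-ceph get pods -l app=rook-ceph-mgr
```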
It does appear that there is some sort of caching occurring. For example, in the v13 and v14 containers that contain the code for the clients and for the server, we can see that the string `stay_signed_in` doesn't actually appear in newer versions.
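One way to test the caching theory is to grep the dashboard's static assets inside the running mgr container (a hedged sketch; the frontend path is an assumption based on stock Ceph container images):
```sh
# If 'stay_signed_in' shows up in a v14 mgr's frontend bundle, the container
# is serving stale (v13-era) dashboard assets.
MGR_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-mgr -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec "$MGR_POD" -- \
  grep -r "stay_signed_in" /usr/share/ceph/mgr/dashboard/frontend/dist/ \
  && echo "stale assets present" || echo "string not found"
```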
Please provide the operator and mgr pod logs.
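For completeness, the requested logs can be pulled as follows (assuming the default deployment name and labels from the stock examples):
```sh
kubectl -n rook-ceph logs deploy/rook-ceph-operator > operator.log
kubectl -n rook-ceph logs -l app=rook-ceph-mgr > mgr.log
```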