rook: ceph: Dashboard throws errors

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior: Dashboard home does not work

Expected behavior: Dashboard home page should work

How to reproduce it (minimal and precise): Update ceph from v14.2.0-20190410 to v14.2.1-20190430

Logs from the mgr pod:

::ffff:100.192.13.24 - - [03/May/2019:11:10:03] "GET /api/health/minimal HTTP/1.1" 500 1646 "https://rook-ceph-dev-primary.k8s.conmpany.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36"
debug 2019-05-03 11:10:03.630 7f0474a6e700  0 log_channel(cluster) log [DBG] : pgmap v193: 200 pgs: 100 active+undersized, 100 active+clean; 0 B data, 278 MiB used, 3.8 TiB / 3.9 TiB avail
debug 2019-05-03 11:10:05.631 7f0474a6e700  0 log_channel(cluster) log [DBG] : pgmap v194: 200 pgs: 100 active+undersized, 100 active+clean; 0 B data, 278 MiB used, 3.8 TiB / 3.9 TiB avail
debug 2019-05-03 11:10:07.631 7f0474a6e700  0 log_channel(cluster) log [DBG] : pgmap v195: 200 pgs: 100 active+undersized, 100 active+clean; 0 B data, 278 MiB used, 3.8 TiB / 3.9 TiB avail
::ffff:100.192.13.24 - - [03/May/2019:11:10:07] "GET /api/summary HTTP/1.1" 200 257 "https://rook-ceph-dev-primary.k8s.conmpany.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36"
::ffff:100.192.13.24 - - [03/May/2019:11:10:07] "GET /api/feature_toggles HTTP/1.1" 200 66 "https://rook-ceph-dev-primary.k8s.conmpany.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36"
::ffff:100.192.13.24 - - [03/May/2019:11:10:07] "GET /api/prometheus/notifications?from=last HTTP/1.1" 200 22 "https://rook-ceph-dev-primary.k8s.conmpany.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36"
debug 2019-05-03 11:10:08.072 7f0438724700  0 mgr[rook] Completion <read op> threw an exception:
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/rook/module.py", line 191, in wait
    c.execute()
  File "/usr/share/ceph/mgr/rook/module.py", line 57, in execute
    self._result = self.cb()
  File "/usr/share/ceph/mgr/rook/module.py", line 132, in <lambda>
    return RookReadCompletion(lambda: f(*args, **kwargs))
  File "/usr/share/ceph/mgr/rook/module.py", line 348, in describe_service
    assert service_type in ("mds", "osd", "mgr", "mon", "nfs", None), service_type + " unsupported"
AssertionError: iscsi unsupported
[03/May/2019:11:10:08] HTTP 
Request Headers:
  AUTHORIZATION: Bearer XXXXXX
  REFERER: https://rook-ceph-dev-primary.k8s.conmpany.com/
  X-FORWARDED-HOST: rook-ceph-dev-primary.k8s.conmpany.com
  Remote-Addr: ::ffff:100.192.13.24
  X-REAL-IP: 127.0.0.1
  USER-AGENT: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36
  CONNECTION: close
  COOKIE: _ga=GA1.2.1677006237.1530686008; session_id=d057bedfc9b0909a5da6956eb3d4b8650dce540e
  ACCEPT: application/json, text/plain, */*
  PRAGMA: no-cache
  X-FORWARDED-PROTO: https
  X-ORIGINAL-URI: /api/health/minimal
  HOST: rook-ceph-dev-primary.k8s.conmpany.com
  CACHE-CONTROL: no-cache
  X-SCHEME: https
  ACCEPT-LANGUAGE: en-US,en;q=0.9,de;q=0.8
  X-FORWARDED-FOR: 127.0.0.1
  X-FORWARDED-PORT: 443
  X-REQUEST-ID: b8ebc25bb44f0b89f890092e3b508d89
  ACCEPT-ENCODING: gzip, deflate, br
[03/May/2019:11:10:08] HTTP Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 656, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py", line 188, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cherrypy/_cptools.py", line 221, in wrap
    return self.newhandler(innerfunc, *args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 88, in dashboard_exception_handler
    return handler(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 34, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 649, in inner
    ret = func(*args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 197, in minimal
    return self.health_minimal.all_health()
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 62, in all_health
    result['iscsi_daemons'] = self.iscsi_daemons()
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 126, in iscsi_daemons
    gateways = IscsiGatewaysConfig.get_gateways_config()['gateways']
  File "/usr/share/ceph/mgr/dashboard/services/iscsi_config.py", line 93, in get_gateways_config
    for instance in instances:
TypeError: 'NoneType' object is not iterable

debug 2019-05-03 11:10:08.073 7f0438724700  0 mgr[dashboard] [03/May/2019:11:10:08] HTTP Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 656, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py", line 188, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cherrypy/_cptools.py", line 221, in wrap
    return self.newhandler(innerfunc, *args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 88, in dashboard_exception_handler
    return handler(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 34, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 649, in inner
    ret = func(*args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 197, in minimal
    return self.health_minimal.all_health()
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 62, in all_health
    result['iscsi_daemons'] = self.iscsi_daemons()
  File "/usr/share/ceph/mgr/dashboard/controllers/health.py", line 126, in iscsi_daemons
    gateways = IscsiGatewaysConfig.get_gateways_config()['gateways']
  File "/usr/share/ceph/mgr/dashboard/services/iscsi_config.py", line 93, in get_gateways_config
    for instance in instances:
TypeError: 'NoneType' object is not iterable

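Putting the two tracebacks together: the Rook orchestrator module refuses to describe an "iscsi" service (the AssertionError above), so the dashboard's iSCSI gateway lookup gets nothing back and then iterates over None. Below is a minimal sketch of that failure pattern and of a defensive guard; the function and field names are illustrative, not the actual dashboard code.

# Illustrative sketch only; not the real iscsi_config.py. It reproduces the
# "TypeError: 'NoneType' object is not iterable" seen above when the
# orchestrator returns nothing for the iscsi service.
def get_gateways_config(instances):
    gateways = {}
    for instance in instances:          # raises TypeError when instances is None
        gateways[instance['name']] = {'service_url': instance['url']}
    return {'gateways': gateways}

def get_gateways_config_guarded(instances):
    gateways = {}
    for instance in (instances or []):  # treat a missing result as "no gateways"
        gateways[instance['name']] = {'service_url': instance['url']}
    return {'gateways': gateways}

print(get_gateways_config_guarded(None))  # -> {'gateways': {}}
print(get_gateways_config(None))          # -> TypeError, as in the log above
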
Ceph’s status (I had gone to the dashboard to increase the PG count; a CLI alternative is shown after the status output):

  cluster:
    id:     a07354be-df7f-43d4-b292-733187b45ee3
    health: HEALTH_WARN
            Degraded data redundancy: 100 pgs undersized
            too few PGs per OSD (25 < min 30)
 
  services:
    mon:        5 daemons, quorum a,b,c,d,e (age 12m)
    mgr:        a(active, since 11m)
    osd:        32 osds: 32 up (since 8m), 32 in (since 8m)
    rbd-mirror: 3 daemons active (15161, 35273, 44884)
 
  data:
    pools:   2 pools, 200 pgs
    objects: 0 objects, 0 B
    usage:   32 GiB used, 3.8 TiB / 3.9 TiB avail
    pgs:     100 active+clean
             100 active+undersized
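
Since the dashboard was unreachable, the PG count can also be raised from the toolbox CLI; for example, with a placeholder pool name:

ceph osd pool set <pool-name> pg_num 128
ceph osd pool set <pool-name> pgp_num 128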

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 10
  • Comments: 37 (22 by maintainers)

Most upvoted comments

The only workaround I’ve managed to come up with is to create a copy of the built-in administrator role without the iscsi permissions and assign that new role to the admin user. Presumably, once the issue is fixed upstream, you can delete the new role and reassign the administrator role to the admin user.

ceph dashboard ac-role-create admin-no-iscsi

for scope in dashboard-settings log rgw prometheus grafana nfs-ganesha manager hosts rbd-image config-opt rbd-mirroring cephfs user osd pool monitor; do
    ceph dashboard ac-role-add-scope-perms admin-no-iscsi ${scope} create delete read update;
done

ceph dashboard ac-user-set-roles admin admin-no-iscsi
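
To verify the workaround took effect, the dashboard's access-control commands should show the new role and its assignment to the admin user:

ceph dashboard ac-role-show admin-no-iscsi
ceph dashboard ac-user-show admin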

I’m going to fix this in the dashboard.

I can confirm it works with the new Nautilus image v14.2.2-20190722.

It would be useful to give the commands for rolling back @noahdesu’s workaround, which has probably been used by tons of people.

EDIT: To the best of my knowledge, the rollback would be:

# replace the workaround role with the original administrator role
ceph dashboard ac-user-set-roles admin administrator
# delete the temporary role
ceph dashboard ac-role-delete admin-no-iscsi

Could someone validate this?

I’m running a fresh install of v1.0, having followed the FlexVolume steps for RKE, and I notice this error in the browser console:

{
    "status": "500 Internal Server Error",
    "version": "3.2.2",
    "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.",
    "traceback": "Traceback (most recent call last):\n  File \"/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py\", line 656, in respond\n    response.body = self.handler()\n  File \"/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py\", line 188, in __call__\n    self.body = self.oldhandler(*args, **kwargs)\n  File \"/usr/lib/python2.7/site-packages/cherrypy/_cptools.py\", line 221, in wrap\n    return self.newhandler(innerfunc, *args, **kwargs)\n  File \"/usr/share/ceph/mgr/dashboard/services/exception.py\", line 88, in dashboard_exception_handler\n    return handler(*args, **kwargs)\n  File \"/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py\", line 34, in __call__\n    return self.callable(*self.args, **self.kwargs)\n  File \"/usr/share/ceph/mgr/dashboard/controllers/__init__.py\", line 649, in inner\n    ret = func(*args, **kwargs)\n  File \"/usr/share/ceph/mgr/dashboard/controllers/health.py\", line 197, in minimal\n    return self.health_minimal.all_health()\n  File \"/usr/share/ceph/mgr/dashboard/controllers/health.py\", line 62, in all_health\n    result['iscsi_daemons'] = self.iscsi_daemons()\n  File \"/usr/share/ceph/mgr/dashboard/controllers/health.py\", line 126, in iscsi_daemons\n    gateways = IscsiGatewaysConfig.get_gateways_config()['gateways']\n  File \"/usr/share/ceph/mgr/dashboard/services/iscsi_config.py\", line 93, in get_gateways_config\n    for instance in instances:\nTypeError: 'NoneType' object is not iterable\n"
}

Thanks, the workaround above worked. 👍 What was the default role called, in case I want to remove this workaround once it’s fixed?

Another idea would be to instead run

ceph orchestrator set backend ""

which disables the Rook integration in the Ceph Dashboard.
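
To restore the integration later, the orchestrator backend can presumably be pointed back at the Rook module:

ceph orchestrator set backend rook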

I have the same problem with the dashboard module. Rook version is 1.0.0.

I can confirm it too. The dashboard works with the image v14.2.2-20190722, without any configuration changes.