rook: Cannot start rook-ceph using multiple namespaces

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:

From the provided documentation and example files it is unclear how to start a working Ceph cluster via Rook in a separate namespace. I attempted to follow the recommendation of changing common.yaml below the line containing the text “Beginning of cluster-specific resources.”, replacing the previous namespace value of “rook-ceph” with “rook1”.
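For illustration, the edit amounts to substitutions like the following in the resources below that marker (using one of the cluster-specific resources as an example; the exact set of resources in common.yaml may differ between Rook versions):

# common.yaml, below "Beginning of cluster-specific resources"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-osd
  namespace: rook1   # was: rook-ceph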

Everything seems to start up OK at first, but then the following appears in the operator logs:

I0626 18:22:48.166453       6 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-566967f57-7fbc9_6608d036-983f-11e9-958c-c686a7df9e39!
I0626 18:22:48.557850       6 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-566967f57-7fbc9_660922c8-983f-11e9-958c-c686a7df9e39!
2019-06-26 18:22:49.403566 I | operator: successfully started Ceph csi drivers
2019-06-26 18:22:49.404563 I | op-cluster: starting cluster in namespace rook1
2019-06-26 18:22:55.406255 W | op-k8sutil: OwnerReferences will not be set on resources created by rook. failed to test that it can be set. configmaps is forbidden: User "system:serviceaccount:rook-ceph:rook-ceph-system" cannot create resource "configmaps" in API group "" in the namespace "rook1"
2019-06-26 18:22:55.422701 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...
2019-06-26 18:24:35.455210 E | op-cluster: unknown ceph major version. failed to get version job log to detect version. failed to read from stream. pods "rook-ceph-detect-version-7pr96" is forbidden: User "system:serviceaccount:rook-ceph:rook-ceph-system" cannot get resource "pods/log" in API group "" in the namespace "rook1"
2019-06-26 18:24:37.412338 I | op-k8sutil: Removing previous job rook-ceph-detect-version to start a new one
2019-06-26 18:24:37.440975 I | op-k8sutil: batch job rook-ceph-detect-version still exists
2019-06-26 18:24:39.446598 I | op-k8sutil: batch job rook-ceph-detect-version deleted
2019-06-26 18:24:39.461157 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...

As someone who is not an expert in RBAC, it is very unclear to me what else I am supposed to change.

Expected behavior:

More context and guidance in the documentation, or comments in the YAML, would be very welcome.

How to reproduce it (minimal and precise):

A new namespace, rook1, was created. Namespace values in the “cluster-specific resources” section of common.yaml were edited to “rook1”. The Rook operator was started with ROOK_CURRENT_NAMESPACE_ONLY set to “false”, and a new cluster CR was created in the “rook1” namespace. Rook then fails to read the status of the version-detection pod.
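For completeness, the cluster CR was essentially the example cluster.yaml with its namespace changed; a minimal sketch (spec fields abbreviated, actual values were whatever the example ships with):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook1
spec:
  cephVersion:
    image: ceph/ceph:v14.2.1
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3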

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod): v1.0.0-154.g004f795
  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-12-06T18:30:39Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 2
  • Comments: 16 (9 by maintainers)

Most upvoted comments

@phlogistonjohn Thanks, we will close this once my PR gets merged.

@leseb it turns out I got things working late yesterday but hadn’t gotten back to update this issue yet. In my case I just studied the templates in the tests and the differences from the old templates, and found there were a few namespace values in the latter half of “common.yaml” that I should not have changed to match my 2nd namespace. However, your changes there do seem like they’d do the trick of providing a documented way to deploy into different namespaces. More explicit breadcrumbs in the docs might be nice too, though. Thanks to @martin31821 as well, but I am not using Helm at this time.
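For anyone who hits the same thing, the shape of those RoleBindings when deploying a cluster into a second namespace ends up looking roughly like this (a sketch based on my copy of common.yaml; exact names and API versions may differ):

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook1          # the cluster's namespace: this one changes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph      # the operator's namespace: leave this as-is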