rook: PVC creation stuck in Pending due to a deadline-exceeded error on an external cluster, following the official Rook 1.10 docs; the problem remains after applying the fix for issue #8696

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior: the PVC is stuck in Pending.

Expected behavior: the PVC should be created and bound.

How to reproduce it (minimal and precise):

I followed the instructions provided here: https://rook.io/docs/rook/v1.10/CRDs/Cluster/external-cluster/

  • Applied all Kubernetes manifests (the crds, common, operator, common-external, and cluster-external YAML files) via Argo CD.

  • Manually generated the Ceph cluster environment variables via the shell script on the source cluster.

  • Manually set the environment variables and ran the import-external-cluster shell script.

  • Created a sample PVC using the RBD StorageClass; it remains Pending. (I will attach logs.)

  • Before I opened this issue, I tried the solutions from issue #8696: (1) I ran `rbd pool init rook_rbd_storage` and then recreated the PVC; no change, the PVC remains Pending. (2) I tried creating the toolbox, but it doesn't seem to work with external clusters. A sketch of the reproduction steps follows this list.
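For reference, a minimal sketch of the steps above, assuming the scripts shipped with the v1.10 examples (`create-external-cluster-resources.py` on the source cluster, `import-external-cluster.sh` on the consumer cluster) and my pool name; the PVC name `rbd-test`, the StorageClass name `ceph-rbd`, and the file paths are placeholders:

```sh
# Consumer (Kubernetes) cluster: apply the manifests (I did this via Argo CD)
kubectl apply -f crds.yaml -f common.yaml -f operator.yaml
kubectl apply -f common-external.yaml -f cluster-external.yaml

# Source (Ceph) cluster: export the connection details as shell variables
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name rook_rbd_storage --format bash

# Consumer cluster: paste the exported "export ..." lines, then run the import
. ./import-external-cluster.sh

# Create a sample PVC against the imported RBD StorageClass
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
EOF
```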

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Using https://raw.githubusercontent.com/rook/rook/3bccf60c5fd853fb80ecd4d3e8e0d146aa7226a9/deploy/examples/cluster-external.yaml
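For completeness, an abridged sketch of what that manifest contains; the pinned file at the link above is authoritative if this differs:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enable: true    # consume an existing Ceph cluster; no daemons are deployed
  crashCollector:
    disable: true
```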

Logs to submit:

  • Operator’s logs, if necessary

kubectl_logs_ceph_operator.txt

  • Crashing pod(s) logs, if necessary

kubectl_logs_rbdplugin-provisioner.txt

To get logs, use `kubectl -n <namespace> logs <pod name>`. When pasting logs, always surround them with backticks or use the insert-code button in the GitHub UI. Read the GitHub documentation if you need help.

Cluster Status to submit: kubectl_describe_cephcluster.txt

  • Output of krew commands, if necessary

```
ceph status
  cluster:
    id:     a896bcd1-6089-470f-a973-85f2fabe5149
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum nuc9034,nuc9036,nuc9038,nuc9039,nuc9037 (age 45h)
    mgr: nuc9035 (active, since 22h), standbys: nuc9036, nuc9034
    mds: 2/2 daemons up, 2 standby
    osd: 8 osds: 8 up (since 2d), 8 in (since 2d)

  data:
    volumes: 2/2 healthy
    pools:   7 pools, 169 pgs
    objects: 14.12k objects, 54 GiB
    usage:   161 GiB used, 20 TiB / 20 TiB avail
    pgs:     169 active+clean

  io:
    client: 600 KiB/s wr, 0 op/s rd, 135 op/s wr
```

To get the health of the cluster, use `kubectl rook-ceph health`. To get the status of the cluster, use `kubectl rook-ceph ceph status`. For more details, see the Rook Krew Plugin.

Environment:

  • OS (e.g. from /etc/os-release):

```
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```

  • Kernel (e.g. uname -a): Linux k3s-node1 5.15.0-1021-kvm #26-Ubuntu SMP Tue Oct 25 18:39:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod): rook: v1.10.6, go: go1.18.7
  • Storage backend version (e.g. for ceph do ceph -v): inside the Rook pod: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable); Proxmox Ceph version: 16.2.9
  • Kubernetes version (use kubectl version):

```
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3+k3s1", GitCommit:"f2585c1671b31b4b34bddbb3bf4e7d69662b0821", GitTreeState:"clean", BuildDate:"2022-10-25T19:59:38Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
```

  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): K3s on 7 Intel NUCs with dual NICs and VLANs
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): unable to bring up the toolbox.

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 24 (9 by maintainers)

Most upvoted comments

I can confirm that this was the solution for OKD 4.10 on Fedora CoreOS 35: https://github.com/okd-project/okd/issues/1160

You need to add the parameter `kernelMountOptions: wsync` to the CephFS StorageClass; a sketch is below.
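For illustration, a minimal sketch of that change, using the default names from the Rook CephFS example (StorageClass `rook-cephfs`, filesystem `myfs`, namespace `rook-ceph`); everything except the `kernelMountOptions: wsync` line is a placeholder to adapt to your cluster, and since StorageClass parameters are immutable the object has to be deleted and recreated:

```sh
kubectl delete storageclass rook-cephfs
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  # the fix from the comment above: force synchronous kernel mounts
  kernelMountOptions: wsync
reclaimPolicy: Delete
EOF
```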

It works! Hooray!

Found the problem. There was:

`Error from server (AlreadyExists): error when creating "STDIN": storageclasses.storage.k8s.io "ceph-rbd" already exists`

When I deleted the resources earlier, I forgot to delete the StorageClass.

Fixed it by deleting the old StorageClass and creating a new one with a valid namespace parameter; the PVC now binds successfully. A sketch of the fix follows.
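A minimal sketch of that fix, assuming the StorageClass is named `ceph-rbd` as in the error above and that the external-cluster secrets were imported into the `rook-ceph-external` namespace; all names and parameter values below are placeholders to match what your import script actually created:

```sh
# Delete the stale StorageClass left over from the previous install
kubectl delete storageclass ceph-rbd

# Recreate it with the secret namespaces pointing at the namespace the
# import script actually populated
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph-external
  pool: rook_rbd_storage
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph-external
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph-external
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph-external
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
EOF
```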

Thank you so much. Owe you a beer