rancher: [BUG] rancher keeps logging errors

Rancher Server Setup

  • Rancher version: v2.6.12
  • Installation option (Docker install/Helm Chart): Docker install
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc):
  • Proxy/Cert Details:

Information about the Cluster

  • Kubernetes version:
  • Cluster Type (Local/Downstream):
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):

User Information

  • What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom)
    • If custom, define the set of permissions:

Describe the bug

To Reproduce

  1. Install Rancher v2.6.12

Result

Rancher keeps logging errors like this:

2023/05/08 01:32:06 [ERROR] error syncing 'local': handler global-admin-cluster-sync: failed to get GlobalRoleBinding for 'globaladmin-user-8p5d7': %!!(MISSING)w(<nil>), requeuing
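A quick way to check whether the GlobalRoleBinding named in the log actually exists (a diagnostic sketch only; the binding name is taken from the log line above and will differ per installation):

kubectl get globalrolebindings.management.cattle.io globaladmin-user-8p5d7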

Expected Result

Screenshots

Additional context

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 4
  • Comments: 21 (5 by maintainers)

Most upvoted comments

If you only want to silence the log message and not fix the underlying permission issue, you can set the condition of type GlobalAdminsSynced in cluster.status.conditions to "True".

  - lastUpdateTime: "2023-05-30T08:54:49-04:00"
    status: "False"
    type: GlobalAdminsSynced

=>

  - lastUpdateTime: "2023-05-30T08:54:49-04:00"
    status: "True"
    type: GlobalAdminsSynced
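
For reference, a non-interactive way to make the same change with kubectl and jq (a sketch, assuming the affected cluster object is named local; check the real name with kubectl get clusters.management.cattle.io first):

kubectl get clusters.management.cattle.io local -o json \
  | jq '(.status.conditions[] | select(.type == "GlobalAdminsSynced")) |= (.status = "True" | del(.message, .reason))' \
  | kubectl replace -f -

This also removes the message and reason keys of that condition, which is mentioned in the comments below.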

+1, we are seeing the same issue after upgrading from 2.6.11 to 2.6.12. However, we need 2.6.12 for Kubernetes 1.24.13 support, so it would be helpful if this could be fixed in 2.6.13.

@VladimirMesea you also need to delete the reason and message keys from the GlobalAdminsSynced condition.

Thanks @VladimirMesea this worked for me


I had the same problem and resolved it by editing the clusters.management.cattle.io object using the following command:

kubectl edit clusters.management.cattle.io <cluster-id>

In the GlobalAdminsSynced condition I changed status: 'False' to status: 'True' and also removed the error message and reason keys. After making these changes, the error was gone.
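
To confirm the messages stop after the edit, on a Docker install you can watch the Rancher container logs for a few minutes (a sketch; replace <rancher-container> with the actual container name or ID):

docker logs -f --since 5m <rancher-container> 2>&1 | grep GlobalRoleBinding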