rancher: [BUG] Rancher logs are consistently generating warnings every few seconds for k8s 1.27 downstream clusters

Rancher Server Setup

  • Rancher version: v2.8-head efc48ac
  • Installation option (Docker install/Helm Chart): Docker install
    • If Helm Chart, Kubernetes Info:
      • Cluster Type (RKE1, RKE2, k3s, EKS, etc): k3s
      • Node Setup: 1 node
      • Version: v1.27.5+k3s1
  • Proxy/Cert Details: self-signed

Describe the bug Rancher logs are continuously filled with warning messages scrolling through every few seconds.

To Reproduce

  1. Create a Rancher server on v2.8-head
  2. Verify the Rancher logs
  3. Create a downstream k3s node driver cluster on v1.27.5+k3s1 with 3 nodes (1 etcd, 1 control plane, 1 worker)

Result Rancher logs are spammed with warnings such as: W0919 02:51:59.262869 38 warnings.go:80] Use tokens from the TokenRequest API or manually created secret-based tokens instead of auto-generated secret-based tokens.
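For context, this warning is emitted by the Kubernetes API server when a client creates or relies on auto-generated service account token secrets, a mechanism deprecated in favor of the TokenRequest API or explicitly declared secret-based tokens. As a sketch of the alternative the message points to, a manually created secret-based token looks roughly like this (the secret and service account names here are hypothetical, not taken from Rancher):

```yaml
# Hypothetical example of a manually created secret-based token,
# the supported alternative the warning recommends.
apiVersion: v1
kind: Secret
metadata:
  name: example-sa-token          # hypothetical name
  annotations:
    # Binds the token to an existing ServiceAccount (hypothetical name)
    kubernetes.io/service-account.name: example-sa
type: kubernetes.io/service-account-token
```

Short-lived tokens can instead be requested on demand (e.g. `kubectl create token example-sa`), which avoids secret-based tokens entirely.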

Expected Result The warning logs should not scroll through the Rancher logs.

Screenshots 2023-09-18_19-56-46

Additional Info

  • Also observed on an RKE1 cluster on k8s 1.27
  • Not seen on an RKE1 cluster on k8s 1.26
  • When the k3s cluster is deleted, the warnings stop
  • Also not seen when the k3s cluster is on k8s v1.26.8+k3s1

About this issue

  • Original URL
  • State: closed
  • Created 9 months ago
  • Comments: 17 (16 by maintainers)

Most upvoted comments

@markusewalker Scenarios 7-9 don’t appear to be valid to me, since they involve upgrading from an alpha version of 2.8. Can you run at least one re-test with:

  • Start from a valid 2.7.x version (with a 1.26 downstream, can be any distribution)
  • Upgrade to 2.8-head
  • Upgrade the downstream from 1.26 to 1.27

I’ll start looking into this in the meantime (since this will likely turn out the same way), but I want to make sure that this isn’t an artifact of your test env that won’t be present in production.