rancher: Rancher reporting cpu/mem reserved and pod count wrong

Rancher Server Setup

  • Rancher version: 2.6.3
  • Installation option (Docker install/Helm Chart): Helm
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2
  • Proxy/Cert Details:

Information about the Cluster

  • Kubernetes version: v1.21.5+rke2r1 / v1.21.7+rke2r2
  • Cluster Type (Local/Downstream): Downstream
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Imported RKE2

Describe the bug

I have three more-or-less empty clusters deployed with RKE2, and only one of them seems to be reporting a correct value for reserved CPU/memory. Pods also appear to be missing, but only on the Rancher home screen…

[screenshot]

As you can see, the rke2-downstream2 cluster is reporting no reservations at all, while test-rke2 seems to report far too many reservations for an empty cluster.

This is the output of the resource-capacity plugin of krew, first for rke2-downstream2:

❯ kubectl resource-capacity
NODE                        CPU REQUESTS   CPU LIMITS    MEMORY REQUESTS   MEMORY LIMITS
*                           8020m (44%)    4120m (22%)   2387Mi (3%)       5323Mi (7%)
rke2-downstream2-agent-1    900m (22%)     1300m (32%)   284Mi (1%)        630Mi (3%)
rke2-downstream2-agent-2    1850m (46%)    1700m (42%)   1341Mi (7%)       4119Mi (24%)
rke2-downstream2-agent-3    820m (20%)     420m (10%)    252Mi (1%)        284Mi (1%)
rke2-downstream2-server-1   1450m (72%)    200m (10%)    126Mi (1%)        53Mi (0%)
rke2-downstream2-server-2   1450m (72%)    200m (10%)    126Mi (1%)        53Mi (0%)
rke2-downstream2-server-3   1550m (77%)    300m (15%)    261Mi (3%)        187Mi (2%)

and test-rke2:

❯ kubectl resource-capacity
NODE                 CPU REQUESTS   CPU LIMITS   MEMORY REQUESTS   MEMORY LIMITS
*                    6070m (33%)    220m (1%)    856Mi (1%)        290Mi (0%)
rke2-agent-node-1    600m (15%)     0Mi (0%)     95Mi (0%)         0Mi (0%)
rke2-agent-node-2    600m (15%)     0Mi (0%)     95Mi (0%)         0Mi (0%)
rke2-agent-node-3    600m (15%)     0Mi (0%)     95Mi (0%)         0Mi (0%)
rke2-server-node-1   1570m (78%)    220m (11%)   384Mi (4%)        290Mi (3%)
rke2-server-node-2   1350m (67%)    0Mi (0%)     95Mi (1%)         0Mi (0%)
rke2-server-node-3   1350m (67%)    0Mi (0%)     95Mi (1%)         0Mi (0%)

All server nodes are 2-CPU VMs and all agent nodes are 4-CPU VMs, by the way.
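For an independent cross-check that relies on neither the krew plugin nor Rancher's aggregation, the cluster-wide CPU reservation can be recomputed straight from the pod specs. A minimal sketch, assuming jq is available and that CPU requests appear either as millicores ("500m") or whole cores ("2"):

```shell
# Sum CPU requests (in millicores) across every container in the
# cluster; compare the total against the "*" row of resource-capacity.
kubectl get pods -A -o json |
  jq '[.items[].spec.containers[].resources.requests.cpu // "0"
       | if test("m$") then (sub("m$"; "") | tonumber)
         else (tonumber * 1000) end]
      | add'
```

Note that this sums over all pods regardless of phase; for a closer match to resource-capacity, you may want to filter out completed pods first.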

rke2-downstream2 has monitoring installed; test-rke2 does not.

[screenshots]

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 12
  • Comments: 24 (5 by maintainers)

Most upvoted comments

https://rancher-addreess.domain.com/v1/management.cattle.io.cluster is returning all zeroes for the clusters in question, so it's not a display issue…

[screenshot]

In our case, right now the memory might be right, but the “max memory” is wrong. We have more than 23 GB of RAM.

[screenshot]
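To verify the “max memory” figure independently, the per-node allocatable memory can be summed from the node status. A sketch, assuming the kubelet reports allocatable memory in Ki (the usual case):

```shell
# Total allocatable memory across all nodes, converted to GiB;
# compare against the "max memory" Rancher shows for the cluster.
kubectl get nodes -o json |
  jq '[.items[].status.allocatable.memory
       | sub("Ki$"; "") | tonumber]
      | add / 1048576'
```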

I have a similar issue, except it’s not showing 0%, and the effect only appears in the cluster that was upgraded to 1.22 (vs. the others on 1.21). The Lens IDE shows all the values correctly.

Rancher:

[screenshot]

vs Lens:

[screenshot]