rancher: Memory leak in v2.4.2 with no clusters or nodes

What kind of request is this (question/bug/enhancement/feature request): Bug

Steps to reproduce (fewest steps possible):

  1. Installed Rancher v2.3.5 on a two-node k3s cluster with a MySQL backend (2020-03-26)
  2. Upgraded to Rancher v2.4.2 (2020-04-02); see the command sketch below
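
For reference, the install and upgrade path looks roughly like the sketch below. This is a minimal sketch only: the MySQL connection string, the rancher.example.com hostname, and the demo-hosted namespace (taken from the pod list further down) are assumptions, and prerequisites such as cert-manager are omitted.

# On each of the two nodes: k3s server backed by the external MySQL datastore
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:password@tcp(db-host:3306)/k3s"

# Install Rancher v2.3.5 via Helm, then upgrade in place to v2.4.2
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace demo-hosted
helm install rancher rancher-latest/rancher --namespace demo-hosted \
  --set hostname=rancher.example.com --version 2.3.5
helm upgrade rancher rancher-latest/rancher --namespace demo-hosted \
  --set hostname=rancher.example.com --version 2.4.2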

Result: On both k3s nodes, the Rancher process/pod memory utilization is growing by about 800 MB per day and is currently over 3 GB:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      6029  4.3 44.9 3741456 3574740 ?     Sl   Apr02 426:45 rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=/var/log/auditlog/rancher-api-audit.log --audit-level=0 --audit-log-maxage=10 --audit-log-maxbackup=10 --audit-log-maxsize=100 --no-cacerts --http-listen-port=80 --https-listen-port=443 --add-local=off
root     23692  4.7 46.5 3810068 3662124 ?     Sl   Apr02 472:49 rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=/var/log/auditlog/rancher-api-audit.log --audit-level=0 --audit-log-maxage=10 --audit-log-maxbackup=10 --audit-log-maxsize=100 --no-cacerts --http-listen-port=80 --https-listen-port=443 --add-local=off

kubectl top pods output:

NAMESPACE                   NAME                                                      CPU(cores)   MEMORY(bytes)   
demo-hosted                 rancher-75f5896bd-8jxv7                                   4m           3347Mi          
demo-hosted                 rancher-75f5896bd-kc75f                                   3m           3445Mi          

Memory growth over the last 24 hours: (memory-usage graph attached to the original issue)
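
To quantify the growth rate over time, a simple poller like the one below works; it assumes the demo-hosted namespace shown above and that metrics-server is answering kubectl top.

# Log a timestamped memory sample for the Rancher pods every 10 minutes
while true; do
  kubectl -n demo-hosted top pods --no-headers \
    | awk -v ts="$(date -u +%FT%TZ)" '{print ts, $0}'
  sleep 600
done >> rancher-memory.log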

Other details that may be helpful: No downstream clusters or nodes have been added to this Rancher installation.

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown bottom left in the UI): v2.3.5 -> v2.4.2
  • Installation option (single install/HA): HA w/ k3s & MySQL

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): K3s
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): m5.large on AWS (2vCPU, 8GB RAM)
  • Kubernetes version (use kubectl version):
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T21:00:06Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4+k3s1", GitCommit:"3eee8ac3a1cf0a216c8a660571329d4bda3bdf77", GitTreeState:"clean", BuildDate:"2020-03-25T16:13:25Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version): Using containerd

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 18 (13 by maintainers)

Most upvoted comments

@aaronRancher The best way to test this is to set up an installation and just let it sit. With the agent you should be able to see the memory climb within the hour, but seeing the Rancher server leak takes more time.
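
One way to narrow down what is growing while it sits is to diff Go heap profiles taken a few hours apart. The sketch below assumes the Rancher pod exposes the standard Go pprof handlers on localhost:6060 and has curl in the image; both are assumptions to verify, and the pod name is simply the one from the kubectl top output above.

# Capture a heap snapshot inside one Rancher pod and copy it out
kubectl -n demo-hosted exec rancher-75f5896bd-8jxv7 -- \
  curl -s -o /tmp/heap1 http://localhost:6060/debug/pprof/heap
kubectl -n demo-hosted cp rancher-75f5896bd-8jxv7:/tmp/heap1 ./heap1
# Let it sit, capture a second snapshot as heap2, then diff the two
go tool pprof -base heap1 heap2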