kubernetes: Increase in apiserver/c-m mem usage by 100-200 MB in 100-node cluster

Starting sometime last night, we have been observing a significant increase in apiserver memory and CPU usage. There was a sharp jump between runs 11168 and 11169 of our 100-node performance test:

from:

{
      "Name": "kube-apiserver-e2e-big-master/kube-apiserver",
      "Cpu": 0.909674846,
      "Mem": 830136320
},

to:

{
      "Name": "kube-apiserver-e2e-big-master/kube-apiserver",
      "Cpu": 1.447254278,
      "Mem": 2031329280
},
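(Assuming the Mem values are in bytes, that's roughly 790 MB before vs. ~1.9 GB after for the apiserver.)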

From the diff, I’m almost sure the cause is https://github.com/kubernetes/kubernetes/pull/60076, which introduces buffering for audit logging. I’ll try to confirm this.
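For context, below is a minimal sketch, not the actual implementation from that PR, of the general shape of a buffered audit backend and why it raises the apiserver's steady-state memory: events are only enqueued onto a bounded in-memory channel on the request path, and a background goroutine flushes them in batches, so roughly bufferSize × averageEventSize bytes can sit in memory at any time. The `Event` type and all sizes below are illustrative placeholders.

```go
// Minimal sketch of a buffered audit backend (illustrative only).
package main

import (
	"fmt"
	"time"
)

// Event stands in for an audit event; the real type is considerably larger.
type Event struct {
	Payload string
}

type bufferedBackend struct {
	buffer     chan *Event    // bounded in-memory queue
	maxBatch   int            // max events written per flush
	flushEvery time.Duration  // max delay before a partial batch is flushed
	sink       func([]*Event) // the underlying backend, e.g. a log-file writer
}

// ProcessEvents enqueues without blocking the request path; if the buffer is
// full the event is dropped (a real backend could also be configured to block).
func (b *bufferedBackend) ProcessEvents(events ...*Event) {
	for _, ev := range events {
		select {
		case b.buffer <- ev:
		default:
			fmt.Println("audit buffer full, dropping event")
		}
	}
}

// Run drains the buffer in batches until stop is closed.
func (b *bufferedBackend) Run(stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(b.flushEvery)
		defer ticker.Stop()
		batch := make([]*Event, 0, b.maxBatch)
		flush := func() {
			if len(batch) > 0 {
				b.sink(batch)
				batch = batch[:0]
			}
		}
		for {
			select {
			case ev := <-b.buffer:
				batch = append(batch, ev)
				if len(batch) >= b.maxBatch {
					flush()
				}
			case <-ticker.C:
				flush()
			case <-stop:
				flush()
				return
			}
		}
	}()
}

func main() {
	b := &bufferedBackend{
		buffer:     make(chan *Event, 10000), // up to ~10k events held in memory
		maxBatch:   400,
		flushEvery: time.Second,
		sink: func(events []*Event) {
			fmt.Printf("flushed %d events\n", len(events))
		},
	}
	stop := make(chan struct{})
	b.Run(stop)

	for i := 0; i < 1000; i++ {
		b.ProcessEvents(&Event{Payload: "audit record for one request"})
	}
	time.Sleep(2 * time.Second) // let the background goroutine flush
	close(stop)
}
```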

/assign /cc @kubernetes/sig-scalability-bugs @wojtek-t

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 58 (45 by maintainers)

Most upvoted comments

@shyamjvs

Could you confirm that, and also what bound on memory we can expect, so we can change our thresholds?

It depends on the request sizes. Assuming each request is ~2.5KB (something I observed in the e2e correctness tests), that's at least +25MB for buffering. Accounting for the larger internal representation, GC, and the fact that a copy of the audit event is stored in the context for each request, +100-200 MB sounds reasonable.