k9s: K9s running very slowly when opening Secrets in namespace with lots of secrets

Describe the bug
K9s slows down to the point where it is unusable when opening Secrets in a namespace with lots of secrets.

I have a namespace with 163 secrets. Most of them are from Helm, tracking deployment versions. Opening that namespace and navigating to secrets slows K9s down so much that it is unusable. I have to terminate the terminal window and open a new one.

To Reproduce
Steps to reproduce the behavior:

  1. Open K9s
  2. Navigate to the namespace you want (in my case, I press 2)
  3. SHIFT+colon
  4. sec
  5. ENTER. The list of secrets appears, but K9s is too slow to be useful anymore

Expected behavior
K9s doesn’t slow down.

Screenshots
If applicable, add screenshots to help explain your problem.

A video would be more useful, but I would need to redact a significant amount. If I have time this evening, I’ll see if I can reproduce it on my cluster at home with something fake.

Versions (please complete the following information):

  • OS: MacOS Mojave 10.14.6
  • K9s v0.7.12
  • K8s v1.12.7

Additional context
I’m on a corporate-managed laptop with antivirus and firewall junk, so if nobody is able to reproduce this, that may be why, but I hope not…

Seems like ~50 secrets is when K9s starts to get bogged down a little, and towards ~100 secrets it starts getting really slow.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 4
  • Comments: 22 (10 by maintainers)

Most upvoted comments

I think I have found the root cause of this!

When we make a request via client-go for structured data, we get the full object. In other words, when we do rr, err := s.DialOrDie().CoreV1().Secrets(ns).List(opts), we fetch not only the names of all secrets, but all of the data stored in each secret as well.

This becomes noticeable with a few hundred objects, especially Helm release objects, which can contain big base64-encoded, gzipped strings. That means we fetch a huge amount of data from Kubernetes, and parsing it all is very expensive.
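
As a rough illustration (this is not k9s code; the kubeconfig path, the "default" namespace, and the newer context-taking List signature are my assumptions), a structured List drags every secret's full Data map across the wire even though a list view only needs names and counts:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Structured List: every item comes back as a full v1.Secret.
	list, err := cs.CoreV1().Secrets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Helm release secrets carry their gzipped/base64 payloads in Data,
	// so all of that is fetched and decoded just to render a list.
	var payload int
	for _, s := range list.Items {
		for _, v := range s.Data {
			payload += len(v)
		}
	}
	fmt.Printf("%d secrets, %d bytes of payload fetched\n", len(list.Items), payload)
}
```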

If we fetch these (config maps and secrets) with the unstructured API, the same one we use to fetch CRDs, we only get the tabular data plus metadata.
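
The branch itself isn't reproduced here, but the idea can be sketched independently of k9s by asking the API server for its Table rendering (the same representation kubectl prints), which carries only the list columns and per-row metadata rather than the secret payloads. The Accept-header approach and raw JSON decoding below are my assumptions about one way to do this, not the actual patch, which may instead go through the unstructured/dynamic client path mentioned above:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask for a Table instead of a SecretList: the server returns only the
	// printable columns (name, type, data count, age) plus row metadata,
	// not the base64-encoded secret payloads.
	raw, err := cs.CoreV1().RESTClient().
		Get().
		Namespace("default").
		Resource("secrets").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}

	var table metav1.Table
	if err := json.Unmarshal(raw, &table); err != nil {
		panic(err)
	}
	fmt.Printf("%d rows, %d columns, no secret data fetched\n",
		len(table.Rows), len(table.ColumnDefinitions))
}
```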

In my cluster, this reduced the load time of 5700 config maps from 17-20 seconds down to 2-3 secs.

@derailed do you think this is a reasonable solution?

There is a very raw version of this in my branch if anyone is willing to test it out.

It works like a charm now. Thank you @paivagustavo!

Also noticed huge lags in the ConfigMaps section, even though there are only ~250 records there. Internet connection speed seems to be related somehow: with a fast connection it works faster.

I changed the refresh rate to 30, and it is smooth for 30 seconds, then freezes for 5 seconds, then is smooth again, so I think we are on to something.