kubectl: kubectl delete leaks network connections when deleting multiple resources, causing warnings or errors

What happened:

kubectl delete leaks network connections when deleting multiple resources, causing warnings or errors

What you expected to happen:

kubectl should not leak network connections when deleting multiple resources.

How to reproduce it (as minimally and precisely as possible):

  1. Create 1001+ pods labeled mypod=foo
  2. Run kubectl delete pod -l mypod=foo

Receive warning (Linux):

W1011 16:39:05.007295 3346 exec.go:282] constructing many client instances from the same exec auth config can cause performance problems during cert rotation and can exhaust available network connections; 1001 clients constructed calling "aws"

Receive error (macOS):

socket: too many open files

Anything else we need to know?:

Environment:

  • Kubernetes client and server versions (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"archive", BuildDate:"2021-03-30T00:00:00Z", GoVersion:"go1.16", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.2-eks-0389ca3", GitCommit:"8a4e27b9d88142bbdd21b997b532eb6d493df6d2", GitTreeState:"clean", BuildDate:"2021-07-31T01:34:46Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
    
  • Cloud provider or hardware configuration: AWS EKS, authenticated via AWS SSO.
  • OS (e.g: cat /etc/os-release): Linux and macOS

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 21 (17 by maintainers)

Most upvoted comments

kubectl should definitely stop making an infinite number of clients, but until it does, kubernetes/kubernetes#112017 should prevent connection leaks.

This fix is merged and backported. The next release of kubectl should include the fix.