kubernetes: Using exec auth in kubectl triggers one connection per applied/described object
What happened?
Switching from the gcp auth provider to the exec auth provider in 1.25 changed kubectl apply from making a single connection to the API server to making one connection per applied object.
If enough items are applied, the connections can start to fail because of ephemeral port exhaustion.
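For context, a minimal sketch of what an exec auth configuration looks like in client-go terms (the host and plugin command are placeholders, not taken from this issue):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// A rest.Config using an exec credential plugin in place of the removed
	// in-tree gcp auth provider. Host and Command are placeholder values.
	config := &rest.Config{
		Host: "https://example-cluster:6443",
		ExecProvider: &clientcmdapi.ExecConfig{
			Command:         "gke-gcloud-auth-plugin",
			APIVersion:      "client.authentication.k8s.io/v1beta1",
			InteractiveMode: clientcmdapi.NeverExecInteractiveMode,
		},
	}
	fmt.Println(config.Host)
}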
/kind bug
/milestone v1.25
/sig auth
/sig cli
/sig api-machinery
What did you expect to happen?
Switching to the exec auth plugin would allow commands that previously worked to continue to work.
How can we reproduce it (as minimally and precisely as possible)?
A reproducer integration test at https://github.com/liggitt/kubernetes/commits/exec-auth-reproducer demonstrates one transport (and therefore a new connection) being created for each item in an applied file when using exec auth.
on that branch:
go test ./test/integration/client/ -run TestApplyMultipleItemsWithExecPlugin -v
and observe one transport creation per applied item — 10 for the 10 items, plus 2 more for discovery:
before apply
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
creating transport
after apply
Anything else we need to know?
This happens because exec auth setup unconditionally sets the GetCert field in the transport config, even if the plugin is only going to return token credentials.
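In paraphrase, that setup logic looks roughly like this (a sketch for illustration, not the verbatim client-go source; the stub types stand in for client-go's internals):

package exec

import (
	"crypto/tls"
	"errors"
	"net/http"

	"k8s.io/client-go/transport"
)

// Authenticator stands in for the exec plugin authenticator; only the
// pieces needed for this sketch are included.
type Authenticator struct {
	cert func() (*tls.Certificate, error)
}

type roundTripper struct {
	a    *Authenticator
	base http.RoundTripper
}

func (r *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// The real round tripper attaches exec-plugin credentials; elided here.
	return r.base.RoundTrip(req)
}

// UpdateTransportConfig sketches the problem: the GetCert callback is
// installed unconditionally, even when the plugin will only return a token.
func (a *Authenticator) UpdateTransportConfig(c *transport.Config) error {
	c.Wrap(func(rt http.RoundTripper) http.RoundTripper {
		return &roundTripper{a: a, base: rt}
	})
	if c.TLS.GetCert != nil {
		return errors.New("can't add TLS certificate callback: transport.Config.TLS.GetCert already set")
	}
	c.TLS.GetCert = a.cert // set even for token-only plugins
	return nil
}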
That makes the transport config uncacheable.
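The cache-key logic behaves roughly like this (again a paraphrase of the pre-fix behavior, with a simplified signature):

package transportcache

import (
	"fmt"

	"k8s.io/client-go/transport"
)

// tlsConfigKey sketches why the config above can't be cached: Go function
// values are not comparable, so a config carrying a GetCert or Dial callback
// can't be turned into a map key, and every such config gets a brand-new
// transport (with its own connection pool).
func tlsConfigKey(c *transport.Config) (key string, canCache bool) {
	if c.TLS.GetCert != nil || c.Dial != nil {
		return "", false
	}
	// Otherwise serialize the comparable fields into a key; simplified here.
	return fmt.Sprintf("insecure:%v,serverName:%s", c.TLS.Insecure, c.TLS.ServerName), true
}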
Commands like kubectl describe and kubectl apply repeatedly construct transports when dealing with multiple objects. In past configurations, transport caching masked this poor behavior.
Kubernetes version
Tried with 1.24 and 1.25; reproduced with both.
Cloud provider
n/a
OS version
n/a
Install tools
n/a
Container runtime (CRI) and version (if applicable)
n/a
Related plugins (CNI, CSI, …) and versions (if applicable)
n/a
About this issue
- State: closed
- Created 2 years ago
- Comments: 22 (22 by maintainers)
I have opened #112017 to fix this bug. I believe the change is small and safe enough to backport.
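For illustration, the general idea behind making a callback-carrying config cacheable — a sketch of one approach, not necessarily the PR's actual diff — is to wrap the non-comparable func in a struct and key the cache on the holder's pointer identity:

package main

import "fmt"

// GetCertHolder wraps a non-comparable func in a comparable struct pointer,
// so the same authenticator instance always maps to the same cache entry.
// Names here are illustrative, not taken from the PR.
type GetCertHolder struct {
	GetCert func() ([]byte, error)
}

type cacheKey struct {
	host   string
	holder *GetCertHolder // pointer identity stands in for the func
}

func main() {
	holder := &GetCertHolder{GetCert: func() ([]byte, error) { return nil, nil }}
	cache := map[cacheKey]string{}

	k1 := cacheKey{host: "https://example-cluster", holder: holder}
	k2 := cacheKey{host: "https://example-cluster", holder: holder}
	cache[k1] = "shared transport"
	fmt.Println(cache[k2]) // same key -> cache hit -> one transport reused
}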
I can try to improve https://github.com/kubernetes/kubernetes/pull/108459 next week and make it safer for backports
it looks like the thing in kubectl that is repeatedly constructing clients is the Builder… there might be a central way to make that use the “construct http.Client once, reuse for multiple clientsets” that @aojea made last release… I’m looking into that now
me too, that doesn’t really seem like a fix
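For reference, the "construct http.Client once, reuse for multiple clientsets" pattern mentioned above looks roughly like this with client-go's helpers (a sketch; the kubeconfig path is a placeholder):

package main

import (
	"log"

	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	// Build the HTTP client (and its underlying transport) exactly once...
	httpClient, err := rest.HTTPClientFor(config)
	if err != nil {
		log.Fatal(err)
	}

	// ...then share it across clientsets, so all requests reuse the same
	// transport and connection pool instead of opening new connections.
	typedClient, err := kubernetes.NewForConfigAndClient(config, httpClient)
	if err != nil {
		log.Fatal(err)
	}
	dynamicClient, err := dynamic.NewForConfigAndClient(config, httpClient)
	if err != nil {
		log.Fatal(err)
	}
	_ = typedClient
	_ = dynamicClient
}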