kubernetes: kube-proxy setting ipvs.udpTimeout (with proxy-mode ipvs) doesn't work
What happened?
#84041 introduced a way to control timeouts for UDP traffic when kube-proxy works in IPVS mode.
The setting simply doesn't work: the actual timeout value is inherited from the underlying system configuration net.netfilter.nf_conntrack_udp_timeout (30s by default).
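For reference, the node-level defaults that the timeouts currently fall back to can be checked with sysctl (a quick sanity check, assuming shell access to a node with the conntrack module loaded):
$ sysctl net.netfilter.nf_conntrack_udp_timeout
$ sysctl net.netfilter.nf_conntrack_udp_timeout_stream    # separate, longer timeout for assured/stream UDP entries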
What did you expect to happen?
The kube-proxy setting should 'beat' (override) the system value.
How can we reproduce it (as minimally and precisely as possible)?
Via the kube-proxy ConfigMap, set mode: ipvs and ipvs.udpTimeout: 300s.
Restart the kube-proxy DaemonSet for the settings to (supposedly) take effect.
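A minimal sketch of those steps, assuming the standard kubeadm layout where kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system (only the relevant fields shown; ipvs.syncPeriod is included because it comes up below):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  syncPeriod: 30s
  udpTimeout: 300s
$ kubectl -n kube-system edit configmap kube-proxy            # edit the config.conf key as above
$ kubectl -n kube-system rollout restart daemonset kube-proxy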
Test by sending UDP traffic via a Kubernetes Service and observe the actual timeouts in the conntrack table.
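A sketch of how to check, assuming conntrack-tools and ipvsadm are installed on the node:
$ conntrack -L -p udp            # netfilter conntrack table; per this report, UDP entries still expire after ~30s
$ ipvsadm --list --connection    # IPVS's own connection table with its expiry timers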
Anything else we need to know?
The mechanism seems to be badly broken: the ipvs.syncPeriod option seems to wipe all conntrack entries for the related UDP connections, regardless of the time left on their timers in the conntrack table!
I.e. a connection with, say, 20s left before its timeout would fire still gets wiped by kube-proxy's sync!
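One way to observe this, sketched here on the assumption that conntrack-tools is available on the node, is to watch conntrack destroy events around a kube-proxy sync:
$ conntrack -E -p udp -e DESTROY    # UDP entries being removed, regardless of their remaining timer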
Kubernetes version
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5", GitCommit:"804d6167111f6858541cef440ccc53887fbbc96a", GitTreeState:"clean", BuildDate:"2022-12-08T10:08:09Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider
OS version
$ cat /etc/os-release
CentOS Linux release 7.9.2009 (Core)
$ uname -a
Linux 5.10.6-1.el7.elrepo.x86_64
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, …) and versions (if applicable)
About this issue
- Original URL
- State: closed
- Created 10 months ago
- Reactions: 1
- Comments: 20 (14 by maintainers)
@Drugoy feel free to open an issue for adding conntrack.UDPTimeout configuration.
@Drugoy IPVS maintains an internal connection table and the parameters ipvs.[tcpFinTimeout|tcpTimeout|udpTimeout] update IPVS connection timeouts not netfilter conntrack timeouts.
After restarting kube-proxy, you can watch ipvsadm --list --connection to see the new timeout values being used by IPVS. The IPVS connection table is kind of an optimisation for reusing the same connection when forwarding traffic to IPVS destinations.
/cc @uablrek
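To see the IPVS-level timeouts themselves (as opposed to per-connection expiry), ipvsadm can print them directly; a sketch, assuming ipvsadm is installed on the node and that these are the values the ipvs.[tcpTimeout|tcpFinTimeout|udpTimeout] parameters are meant to control:
$ ipvsadm --list --timeout    # prints the tcp / tcpfin / udp timeouts IPVS is currently using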