kubernetes: [Flaky Test] Kubernetes e2e suite: [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]

Which jobs are flaking:

ci-kubernetes-kind-ipv6-e2e-parallel #1367504705533513728

Which test(s) are flaking:

Kubernetes e2e suite: [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]

Testgrid link:

https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel&sort-by-failures=&exclude-non-failed-tests=

Reason for failure:

/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Mar  4 16:31:32.747: no valid conntrack entry for port 11302 on node fc00:f853:ccd:e793::4: timed out waiting for the condition
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
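For context, the "timed out waiting for the condition" suffix comes from the polling helpers in k8s.io/apimachinery/pkg/util/wait, which the e2e test uses while waiting for the conntrack entry to show up on the node. Below is a minimal sketch of that polling pattern, not the actual test code; the hasCloseWaitEntry helper is a hypothetical stand-in for the test's conntrack lookup, and the intervals are illustrative.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// hasCloseWaitEntry is a hypothetical stand-in for the check the e2e test
// performs: inspecting the node's conntrack table for a TCP entry in
// CLOSE_WAIT state on the expected port.
func hasCloseWaitEntry(port int) (bool, error) {
	// ... query the node's conntrack table here ...
	return false, nil
}

func main() {
	port := 11302
	// Poll every 5 seconds for up to 2 minutes (values chosen for illustration).
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		return hasCloseWaitEntry(port)
	})
	if err != nil {
		// On timeout, wait returns ErrWaitTimeout, whose message is
		// "timed out waiting for the condition" -- the suffix seen in the log above.
		fmt.Printf("no valid conntrack entry for port %d: %v\n", port, err)
	}
}
```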

Anything else we need to know:

/sig network

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 19 (19 by maintainers)

Most upvoted comments

I fixed this recently: https://github.com/kubernetes/kubernetes/pull/99564. The test has been running in other jobs without failures since that fix. It is still too soon to declare the flake resolved; the job linked here has a very high failure rate, with other tests failing as well, so I will keep an eye on it.