kubernetes: Flake: [sig-network] Network should set TCP CLOSE_WAIT timeout
Which jobs are failing: pull-kubernetes-e2e-kops-aws
Which test(s) are failing: [sig-network] Network should set TCP CLOSE_WAIT timeout Test (that’s how the name appears; I don’t have any more info)
Since when has it been failing: 11-28-2018
Testgrid link: https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71479/pull-kubernetes-e2e-kops-aws/115150/ https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-e2e-kops-aws&include-filter-by-regex=^Overall%24|Test
Reason for failure:
[sig-network] Network should set TCP CLOSE_WAIT timeout
Expected error:
<*strconv.NumError | 0xc001812c90>: {
Func: "Atoi",
Num: "",
Err: {s: "invalid syntax"},
}
strconv.Atoi: parsing "": invalid syntax
not to have occurred
test/e2e/network/kube_proxy.go:204
Test
error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|\[HPA\]|Dashboard|Services.*functioning.*NodePort --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
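For context, this is the generic error Go returns when strconv.Atoi is handed an empty string, which usually means the test's parsing of the node's conntrack/proc output produced no match before the conversion at test/e2e/network/kube_proxy.go:204. A minimal sketch of reproducing and guarding against that failure mode (the parseTimeout helper and the sample line are hypothetical, not the actual test code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTimeout mimics the failure mode: it extracts a numeric field from a
// line of output and converts it with strconv.Atoi. If the output is empty,
// calling Atoi on "" yields: strconv.Atoi: parsing "": invalid syntax.
func parseTimeout(line string) (int, error) {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		// Guard against empty or truncated output instead of passing "" to Atoi.
		return 0, fmt.Errorf("unexpected output %q: no timeout field found", line)
	}
	return strconv.Atoi(fields[1])
}

func main() {
	// Empty output, as in the flake, now produces a descriptive error.
	if _, err := parseTimeout(""); err != nil {
		fmt.Println("error:", err)
	}

	// Well-formed output parses cleanly.
	if v, err := parseTimeout("nf_conntrack_tcp_timeout_close_wait 3600"); err == nil {
		fmt.Println("timeout:", v)
	}
}
```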
Anything else we need to know: Filing this as a flake as subsequent test executions succeeded.
/kind failing-test
/kind flake
About this issue
- State: closed
- Created 6 years ago
- Comments: 23 (16 by maintainers)
I’m going to close this because there are no errors on the pull-kubernetes-e2e-gce-* jobs in the last 5 days, and only one error in the ci-kubernetes-e2e-gci-gce-* jobs that I can’t explain: in theory the loop that looks for the CLOSE_WAIT connection should run every second for 30 seconds, but in that failure it only ran 3 times. There are other failures of this test with the output pod ran to completion, but those jobs have a considerable number of failed tests (>100), so I suspect something environmental is causing them.
/close
feel free to reopen
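For reference, the retry the comment describes is the usual poll-every-second-for-30-seconds pattern; a minimal sketch using the apimachinery wait helper (the condition function below is a hypothetical stand-in for the test's CLOSE_WAIT conntrack lookup, not the test's actual check):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// foundCloseWaitConnection is a hypothetical stand-in for the test's
// lookup of a connection in the CLOSE_WAIT state.
func foundCloseWaitConnection() bool {
	return false
}

func main() {
	attempts := 0
	// Poll every second for up to 30 seconds; a healthy run gives the
	// condition up to ~30 attempts, not the 3 observed in the failure.
	err := wait.PollImmediate(1*time.Second, 30*time.Second, func() (bool, error) {
		attempts++
		return foundCloseWaitConnection(), nil
	})
	fmt.Printf("attempts=%d err=%v\n", attempts, err)
}
```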
They all pull from ci/latest. Just to use the latest failure listed on triage as an example: https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-coredns-nodecache/1252809431276589059
v1.19.0-alpha.2.8+85ee5fdd901619
Looking at the commits in #90278 (e.g. https://github.com/kubernetes/kubernetes/pull/90278/commits/638e077b4b93753a97315dcfb9c88f424d33f333), I see that it landed before v1.19.0-alpha.2, so your fix is in there.