kubernetes: 'Services should be able to preserve UDP traffic when server pod cycles for a NodePort service' fails if conntrack is not installed on node

Which jobs are flaking: test_pull_request_crio_e2e_crun_fedora and test_pull_request_crio_e2e_fedora

Which test(s) are flaking: Services should be able to preserve UDP traffic when server pod cycles for a NodePort service

Testgrid link: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-federated-results/pr-logs/pull/cri-o_cri-o/3775/test_pull_request_crio_e2e_fedora/13145/

Reason for failure:

/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1243
May 18 15:24:04.040: pod communication failed
Unexpected error:
    <*errors.errorString | 0xc003f5a060>: {
        s: "Too many Errors in continuous echo",
    }
    Too many Errors in continuous echo
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1302
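For context on why this test depends on conntrack: when the server pod cycles, stale UDP conntrack entries on the node can keep steering traffic at the old endpoint, so the cleanup shells out to the `conntrack` binary. A rough sketch of that kind of invocation is below; the NodePort value is a made-up example, not taken from this failure, and the exact flags kube-proxy/the e2e test uses may differ:

```shell
# Hypothetical cleanup of stale UDP conntrack entries for a NodePort service.
# Deleting the entries forces subsequent packets to be re-evaluated by the
# node's NAT rules, so they reach the new server pod instead of the old one.
NODEPORT=30080  # example value; the real test allocates a port dynamically

if command -v conntrack >/dev/null 2>&1; then
    # conntrack exits nonzero when no matching entries exist, which is fine here
    conntrack -D -p udp --dport "${NODEPORT}" || true
else
    echo "conntrack not installed; UDP entry cleanup skipped" >&2
fi
```

If the binary is absent, as on the failing Fedora nodes, the deletion never happens and the continuous UDP echo keeps hitting a dead endpoint, which matches the "Too many Errors in continuous echo" failure above.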

Anything else we need to know:

  • links to go.k8s.io/triage appreciated
  • links to specific failures in spyglass appreciated

About this issue


  • State: closed
  • Created 4 years ago
  • Comments: 23 (22 by maintainers)

Most upvoted comments

kube-proxy itself has long depended on conntrack being installed. I was assuming that the problem here was that the test case was invoking conntrack directly in some context other than where kube-proxy is running?

If the problem is that the CRI-O tests were running kube-proxy directly on nodes that didn't have conntrack installed, then yes, that's a CRI-O CI bug, I guess, but I would have expected other tests to be failing already in that case, since kube-proxy already uses conntrack for other things. More generally, if kube-proxy needs conntrack to function correctly, then it should probably fail more noticeably when conntrack isn't installed.
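One way to make the missing dependency fail noticeably, as suggested above, is a preflight check in the node image or CI setup. A minimal sketch, assuming a Fedora node where the binary is shipped by the `conntrack-tools` package (package name may differ on other distros):

```shell
# Hypothetical preflight check for the CI node image: report loudly if
# conntrack is missing instead of letting UDP service tests flake later.
# A real CI script would exit nonzero on the error branch.
if command -v conntrack >/dev/null 2>&1; then
    echo "ok: conntrack found at $(command -v conntrack)"
else
    echo "error: conntrack not installed; run 'dnf install -y conntrack-tools'" >&2
fi
```

Running a check like this during node provisioning would have surfaced the problem before any e2e test ran.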

I can close this once https://github.com/cri-o/cri-o/pull/3787 is merged.