kubernetes: flakes: [sig-network] Service endpoints latency should not be very high [Conformance]

What happened: The Conformance test [sig-network] Service endpoints latency should not be very high [Conformance] flakes.

What you expected to happen: Conformance tests should not be flaky

How to reproduce it (as minimally and precisely as possible): Run the test against various clusters.

Anything else we need to know?: AFAICT the test is essentially testing the performance of the cluster’s network. It also fails if even a single probe flakes [1]. I’m not sure the former is reasonable for conformance, but the latter seems especially unreasonable.

[1] https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/sigs.k8s.io_kind/318/pull-kind-conformance-parallel/488/#sig-network-service-endpoints-latency-should-not-be-very-high-conformance
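For context, here is a minimal Go sketch of how a latency check like this can be tripped by a single slow probe: if the check puts a hard cap on the worst individual sample (rather than only on percentiles), one flaky probe fails the whole run. The probe values and limits below are made up for illustration and are not the real e2e test code.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

func main() {
	// 200 simulated probe latencies: all fast except one that "flaked"
	// (e.g. a dropped packet followed by a slow retry). Values are made up.
	latencies := make([]time.Duration, 0, 200)
	for i := 0; i < 199; i++ {
		latencies = append(latencies, 200*time.Millisecond)
	}
	latencies = append(latencies, 70*time.Second) // the single flaky probe

	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })

	median := latencies[len(latencies)/2]
	worst := latencies[len(latencies)-1]

	// Hypothetical limits, in the spirit of the conformance test's checks.
	const medianLimit = 20 * time.Second
	const tailLimit = 50 * time.Second

	fmt.Printf("median=%v worst=%v\n", median, worst)
	if median > medianLimit {
		fmt.Println("FAIL: median latency too high")
	} else if worst > tailLimit {
		// A hard cap on the single worst sample means one flaky probe
		// fails the whole run, even though the median is healthy.
		fmt.Println("FAIL: worst-case latency too high")
	} else {
		fmt.Println("PASS")
	}
}
```

In this sketch the median is a healthy 200ms, but the lone 70s outlier exceeds the tail limit and fails the run, which is the failure mode described above.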

/area conformance cc @spiffxp @dims

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 22 (21 by maintainers)

Most upvoted comments

+1 that this test shouldn’t fail for a single probe flake. Sent https://github.com/kubernetes/kubernetes/pull/74538 to hopefully deflake that case.
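For illustration only (this is not taken from PR 74538), one way to make a tail-latency check tolerant of an occasional outlier is to compare a high percentile against the limit instead of the absolute worst sample. A minimal Go sketch, with made-up values:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// tolerantTailCheck returns an error only if the p-th percentile of the
// sorted latencies exceeds the limit, so a handful of outliers cannot fail
// the run on their own. (Illustrative helper, not real e2e test code.)
func tolerantTailCheck(sorted []time.Duration, p float64, limit time.Duration) error {
	if len(sorted) == 0 {
		return fmt.Errorf("no samples")
	}
	idx := int(float64(len(sorted)-1) * p / 100.0) // nearest-rank percentile
	if sorted[idx] > limit {
		return fmt.Errorf("p%.0f latency %v exceeds limit %v", p, sorted[idx], limit)
	}
	return nil
}

func main() {
	// Same scenario as the earlier sketch: 199 fast probes, one slow outlier.
	latencies := make([]time.Duration, 0, 200)
	for i := 0; i < 199; i++ {
		latencies = append(latencies, 200*time.Millisecond)
	}
	latencies = append(latencies, 70*time.Second)
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })

	if err := tolerantTailCheck(latencies, 99, 50*time.Second); err != nil {
		fmt.Println("FAIL:", err)
	} else {
		fmt.Println("PASS: single outlier ignored by the percentile check")
	}
}
```

With the same 199-fast/1-slow sample set, the 99th-percentile check passes because the single outlier sits above the 99th-percentile index.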

AFAICT this is resolved, thanks @MrHohn!

There are still some failures showing up in triage for this test, but they seem to be on runs with many failures, like https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-new-gci-master-upgrade-cluster-parallel/1117379878698618883

@freehan - I agree with @mm4tt - it’s highly unlikely that those SLI-related changes would introduce such flakiness.

@spiffxp FWIW I think this is a bug in the test, not any other flakiness. It’s unreasonable to expect the network to never flake at all.

This is fair, but I still would prefer we use kind/flake to identify these kinds of issues. Whether the source of the flake is the environment, the feature, or the test, it’s still a flakiness problem.