/home/jenkins/workspace/Ginkgo-CI-Tests-k8s1.11-Pipeline/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
Can not connect to service "http://192.168.37.11:30146" from outside cluster
Expected command: kubectl exec -n kube-system log-gatherer-rlhxl -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.37.11:30146 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000023()', Connect: '0.000000',Transfer '0.000000', total '5.001665'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Ginkgo-CI-Tests-k8s1.11-Pipeline/src/github.com/cilium/cilium/test/k8sT/Services.go:514
https://jenkins.cilium.io/job/Ginkgo-CI-Tests-k8s1.11-Pipeline/727/testReport/junit/Suite-k8s-1/11/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device/
https://jenkins.cilium.io/job/Ginkgo-CI-Tests-k8s1.11-Pipeline/724/testReport/junit/Suite-k8s-1/11/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device/
@bimmlerd I would suggest opening a new flake issue. This new occurrence likely has a different root cause, because (1) Martynas identified and fixed the root cause of the previous flake, and (2) it hadn't been seen in more than a year.
@joestringer Actually, I've been preparing a way to trace down what's going on in the suite. I'm going to troubleshoot it after the v1.11 release.
This started happening more often around Dec 7/8.
Two suspect PRs merged in that window (https://github.com/cilium/cilium/pulls?q=is%3Apr+is%3Aclosed+merged%3A2020-12-06..2020-12-08):