cilium: CI: K8sServicesTest Checks service across nodes Supports IPv4 Fragments: Failed to account for IPv4 fragments
K8sServicesTest Checks service across nodes Supports IPv4 Fragments
Failed with:
18:45:16 • Failure [65.024 seconds]
18:45:16 K8sServicesTest
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:395
18:45:16 Checks service across nodes
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:395
18:45:16 Supports IPv4 fragments [It]
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
18:45:16
18:45:16 Failed to account for IPv4 fragments (in)
[2020-04-10T01:45:16.250Z] Expected
[2020-04-10T01:45:16.250Z] <[]int | len:2, cap:2>: [21, 20]
[2020-04-10T01:45:16.250Z] To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{[16 24]}) %!s(*matchers.EqualMatcher=&{[20 20]})]
Seen here: https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/18664/ -> https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/18664/execution/node/198/log/?consoleFull (PR #10910)
Dump: 3db90426_K8sServicesTest_Checks_service_across_nodes_Supports_IPv4_fragments.zip
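The `%!s(*matchers.EqualMatcher=...)` noise in the failure message is just Go's `fmt` applying `%s` to Gomega matcher structs; the assertion at test/k8sT/Services.go:548 compares a pair of packet counters against two acceptable outcomes. Below is a minimal sketch of what that check presumably looks like (the function and variable names are hypothetical stand-ins; only the expected values and the failure string come from the log):

```go
// Hedged reconstruction of the failing check, not the actual code at
// Services.go:548. It accepts either [16, 24] or [20, 20]; the run above
// observed [21, 20], one extra packet, which satisfies neither matcher.
package k8sTest

import (
	. "github.com/onsi/gomega"
)

// ctPktsIn stands for the two ingress packet counters the test reads back
// after sending the fragmented datagram (exact semantics are in Services.go).
func checkFragmentAccounting(ctPktsIn []int) {
	Expect(ctPktsIn).To(
		SatisfyAny(Equal([]int{16, 24}), Equal([]int{20, 20})),
		"Failed to account for IPv4 fragments (in)")
}
```

The "To satisfy at least one of these matchers" wording in the output is what Gomega's Or/SatisfyAny matcher prints when neither alternative matches.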
18:44:10 ------------------------------
18:44:10 K8sServicesTest Checks service across nodes
18:44:10 Supports IPv4 fragments
18:44:10 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
18:44:12 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 10.109.193.120:10069
18:44:17 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 192.168.36.11:31165
18:44:21 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint ::ffff:192.168.36.11:31165
18:44:26 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 192.168.36.12:31165
18:44:31 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint ::ffff:192.168.36.12:31165
18:44:35 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 127.0.0.1:31165
18:44:40 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint ::ffff:127.0.0.1:31165
18:44:45 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 10.10.0.137:31165
18:44:49 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint ::ffff:10.10.0.137:31165
18:44:54 STEP: Sending a fragmented packet from testclient-jtsw2 to endpoint 10.10.1.184:31165
18:44:56 === Test Finished at 2020-04-10T01:44:56Z====
18:44:56 ===================== TEST FAILED =====================
18:44:57 cmd: kubectl get pods -o wide --all-namespaces
18:44:57 Exitcode: 0
18:44:57 Stdout:
18:44:57 NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
18:44:57 default test-k8s2-848b6f7864-wtdsq 2/2 Running 0 19m 10.10.1.121 k8s2 <none>
18:44:57 default testclient-jtsw2 1/1 Running 0 19m 10.10.0.59 k8s1 <none>
18:44:57 default testclient-kbt2w 1/1 Running 0 19m 10.10.1.38 k8s2 <none>
18:44:57 default testds-fh7kp 2/2 Running 0 19m 10.10.1.11 k8s2 <none>
18:44:57 default testds-l22kb 2/2 Running 0 19m 10.10.0.34 k8s1 <none>
18:44:57 kube-system cilium-nws8f 1/1 Running 0 1m 192.168.36.11 k8s1 <none>
18:44:57 kube-system cilium-operator-645465fc56-cbqfc 1/1 Running 0 1m 192.168.36.12 k8s2 <none>
18:44:57 kube-system cilium-tmz5l 1/1 Running 0 1m 192.168.36.12 k8s2 <none>
18:44:57 kube-system coredns-687db6485c-scn4s 1/1 Running 0 1m 10.10.1.60 k8s2 <none>
18:44:57 kube-system etcd-k8s1 1/1 Running 0 27m 192.168.36.11 k8s1 <none>
18:44:57 kube-system kube-apiserver-k8s1 1/1 Running 0 27m 192.168.36.11 k8s1 <none>
18:44:57 kube-system kube-controller-manager-k8s1 1/1 Running 0 27m 192.168.36.11 k8s1 <none>
18:44:57 kube-system kube-scheduler-k8s1 1/1 Running 0 27m 192.168.36.11 k8s1 <none>
18:44:57 kube-system log-gatherer-cbhxj 1/1 Running 0 24m 192.168.36.12 k8s2 <none>
18:44:57 kube-system log-gatherer-txst6 1/1 Running 0 24m 192.168.36.13 k8s3 <none>
18:44:57 kube-system log-gatherer-vjdqj 1/1 Running 0 24m 192.168.36.11 k8s1 <none>
18:44:57 kube-system registry-adder-4qf7h 1/1 Running 0 25m 192.168.36.11 k8s1 <none>
18:44:57 kube-system registry-adder-gd5tg 1/1 Running 0 25m 192.168.36.12 k8s2 <none>
18:44:57 kube-system registry-adder-zbmv8 1/1 Running 0 25m 192.168.36.13 k8s3 <none>
18:44:57
18:44:57 Stderr:
18:44:57
18:44:57
18:44:57 Fetching command output from pods [cilium-nws8f cilium-tmz5l]
18:45:16 cmd: kubectl exec -n kube-system cilium-nws8f -- cilium service list
18:45:16 Exitcode: 0
18:45:16 Stdout:
18:45:16 ID Frontend Service Type Backend
18:45:16 1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
18:45:16 2 10.96.0.10:53 ClusterIP 1 => 10.10.1.60:53
18:45:16 3 192.168.36.240:80 LoadBalancer 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 4 192.168.36.241:80 LoadBalancer
18:45:16 19 10.107.95.129:80 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 20 10.107.95.129:69 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 21 10.109.193.120:10080 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 22 10.109.193.120:10069 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 23 0.0.0.0:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 24 192.168.36.11:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 25 10.10.0.137:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 26 10.10.0.137:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 27 0.0.0.0:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 28 192.168.36.11:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 29 10.104.253.7:10080 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 30 10.104.253.7:10069 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 31 0.0.0.0:31440 NodePort 1 => 10.10.0.34:80
18:45:16 32 192.168.36.11:31440 NodePort 1 => 10.10.0.34:80
18:45:16 33 10.10.0.137:31440 NodePort 1 => 10.10.0.34:80
18:45:16 34 0.0.0.0:31699 NodePort 1 => 10.10.0.34:69
18:45:16 35 192.168.36.11:31699 NodePort 1 => 10.10.0.34:69
18:45:16 36 10.10.0.137:31699 NodePort 1 => 10.10.0.34:69
18:45:16 37 10.98.104.99:10080 ClusterIP 1 => 10.10.1.121:80
18:45:16 38 10.98.104.99:10069 ClusterIP 1 => 10.10.1.121:69
18:45:16 39 0.0.0.0:32096 NodePort
18:45:16 40 192.168.36.11:32096 NodePort
18:45:16 41 10.10.0.137:32096 NodePort
18:45:16 42 0.0.0.0:30340 NodePort
18:45:16 43 192.168.36.11:30340 NodePort
18:45:16 44 10.10.0.137:30340 NodePort
18:45:16 45 10.108.142.75:10080 ClusterIP 1 => 10.10.1.121:80
18:45:16 46 10.108.142.75:10069 ClusterIP 1 => 10.10.1.121:69
18:45:16 47 0.0.0.0:32658 NodePort 1 => 10.10.1.121:80
18:45:16 48 192.168.36.11:32658 NodePort 1 => 10.10.1.121:80
18:45:16 49 10.10.0.137:32658 NodePort 1 => 10.10.1.121:80
18:45:16 50 0.0.0.0:30795 NodePort 1 => 10.10.1.121:69
18:45:16 51 192.168.36.11:30795 NodePort 1 => 10.10.1.121:69
18:45:16 52 10.10.0.137:30795 NodePort 1 => 10.10.1.121:69
18:45:16 53 10.109.125.148:80 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 54 0.0.0.0:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 55 192.168.36.11:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 56 10.10.0.137:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 57 10.109.100.175:80 ClusterIP 1 => 10.10.1.121:80
18:45:16 58 0.0.0.0:30574 NodePort
18:45:16 59 192.168.36.11:30574 NodePort
18:45:16 60 10.10.0.137:30574 NodePort
18:45:16 61 192.168.36.12:8080 HostPort 1 => 10.10.1.121:80
18:45:16 62 192.168.36.12:6969 HostPort 1 => 10.10.1.121:69
18:45:16
18:45:16 Stderr:
18:45:16
18:45:16
18:45:16 cmd: kubectl exec -n kube-system cilium-nws8f -- cilium endpoint list
18:45:16 Exitcode: 0
18:45:16 Stdout:
18:45:16 ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
18:45:16 ENFORCEMENT ENFORCEMENT
18:45:16 2043 Disabled Disabled 4 reserved:health f00d::a0b:0:0:789 10.10.0.71 ready
18:45:16 3197 Disabled Disabled 16933 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:615b 10.10.0.34 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=default
18:45:16 k8s:io.kubernetes.pod.namespace=default
18:45:16 k8s:zgroup=testDS
18:45:16 3646 Disabled Disabled 2250 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:abb8 10.10.0.59 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=default
18:45:16 k8s:io.kubernetes.pod.namespace=default
18:45:16 k8s:zgroup=testDSClient
18:45:16
18:45:16 Stderr:
18:45:16
18:45:16
18:45:16 cmd: kubectl exec -n kube-system cilium-tmz5l -- cilium service list
18:45:16 Exitcode: 0
18:45:16 Stdout:
18:45:16 ID Frontend Service Type Backend
18:45:16 1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
18:45:16 2 10.96.0.10:53 ClusterIP 1 => 10.10.1.60:53
18:45:16 3 192.168.36.240:80 LoadBalancer 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 4 192.168.36.241:80 LoadBalancer 1 => 10.10.1.121:80
18:45:16 19 10.107.95.129:80 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 20 10.107.95.129:69 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 21 10.109.193.120:10080 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 22 10.109.193.120:10069 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 23 0.0.0.0:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 24 192.168.36.12:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 25 10.10.1.184:31129 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 26 0.0.0.0:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 27 192.168.36.12:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 28 10.10.1.184:31165 NodePort 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 29 10.104.253.7:10069 ClusterIP 1 => 10.10.0.34:69
18:45:16 2 => 10.10.1.11:69
18:45:16 30 10.104.253.7:10080 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 31 0.0.0.0:31699 NodePort 1 => 10.10.1.11:69
18:45:16 32 192.168.36.12:31699 NodePort 1 => 10.10.1.11:69
18:45:16 33 10.10.1.184:31699 NodePort 1 => 10.10.1.11:69
18:45:16 34 192.168.36.12:31440 NodePort 1 => 10.10.1.11:80
18:45:16 35 10.10.1.184:31440 NodePort 1 => 10.10.1.11:80
18:45:16 36 0.0.0.0:31440 NodePort 1 => 10.10.1.11:80
18:45:16 37 10.98.104.99:10080 ClusterIP 1 => 10.10.1.121:80
18:45:16 38 10.98.104.99:10069 ClusterIP 1 => 10.10.1.121:69
18:45:16 39 0.0.0.0:32096 NodePort 1 => 10.10.1.121:80
18:45:16 40 192.168.36.12:32096 NodePort 1 => 10.10.1.121:80
18:45:16 41 10.10.1.184:32096 NodePort 1 => 10.10.1.121:80
18:45:16 42 0.0.0.0:30340 NodePort 1 => 10.10.1.121:69
18:45:16 43 192.168.36.12:30340 NodePort 1 => 10.10.1.121:69
18:45:16 44 10.10.1.184:30340 NodePort 1 => 10.10.1.121:69
18:45:16 45 10.108.142.75:10080 ClusterIP 1 => 10.10.1.121:80
18:45:16 46 10.108.142.75:10069 ClusterIP 1 => 10.10.1.121:69
18:45:16 47 192.168.36.12:32658 NodePort 1 => 10.10.1.121:80
18:45:16 48 10.10.1.184:32658 NodePort 1 => 10.10.1.121:80
18:45:16 49 0.0.0.0:32658 NodePort 1 => 10.10.1.121:80
18:45:16 50 0.0.0.0:30795 NodePort 1 => 10.10.1.121:69
18:45:16 51 192.168.36.12:30795 NodePort 1 => 10.10.1.121:69
18:45:16 52 10.10.1.184:30795 NodePort 1 => 10.10.1.121:69
18:45:16 53 10.109.125.148:80 ClusterIP 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 54 0.0.0.0:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 55 192.168.36.12:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 56 10.10.1.184:31767 NodePort 1 => 10.10.0.34:80
18:45:16 2 => 10.10.1.11:80
18:45:16 57 10.109.100.175:80 ClusterIP 1 => 10.10.1.121:80
18:45:16 58 0.0.0.0:30574 NodePort 1 => 10.10.1.121:80
18:45:16 59 192.168.36.12:30574 NodePort 1 => 10.10.1.121:80
18:45:16 60 10.10.1.184:30574 NodePort 1 => 10.10.1.121:80
18:45:16 61 192.168.36.12:8080 HostPort 1 => 10.10.1.121:80
18:45:16 62 192.168.36.12:6969 HostPort 1 => 10.10.1.121:69
18:45:16
18:45:16 Stderr:
18:45:16
18:45:16
18:45:16 cmd: kubectl exec -n kube-system cilium-tmz5l -- cilium endpoint list
18:45:16 Exitcode: 0
18:45:16 Stdout:
18:45:16 ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
18:45:16 ENFORCEMENT ENFORCEMENT
18:45:16 453 Disabled Disabled 6937 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:e6e9 10.10.1.60 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=coredns
18:45:16 k8s:io.kubernetes.pod.namespace=kube-system
18:45:16 k8s:k8s-app=kube-dns
18:45:16 571 Disabled Disabled 16933 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:aec6 10.10.1.11 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=default
18:45:16 k8s:io.kubernetes.pod.namespace=default
18:45:16 k8s:zgroup=testDS
18:45:16 864 Disabled Disabled 4 reserved:health f00d::a0c:0:0:c3eb 10.10.1.170 ready
18:45:16 2544 Disabled Disabled 18702 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:d20b 10.10.1.121 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=default
18:45:16 k8s:io.kubernetes.pod.namespace=default
18:45:16 k8s:zgroup=test-k8s2
18:45:16 3376 Disabled Disabled 2250 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:7d8a 10.10.1.38 ready
18:45:16 k8s:io.cilium.k8s.policy.serviceaccount=default
18:45:16 k8s:io.kubernetes.pod.namespace=default
18:45:16 k8s:zgroup=testDSClient
18:45:16
18:45:16 Stderr:
18:45:16
18:45:16
18:45:16 ===================== Exiting AfterFailed =====================
18:45:16 <Checks>
18:45:16 Number of "context deadline exceeded" in logs: 0
18:45:16 Number of "level=error" in logs: 0
18:45:16 Number of "level=warning" in logs: 0
18:45:16 Number of "Cilium API handler panicked" in logs: 0
18:45:16 Number of "Goroutine took lock for more than" in logs: 0
18:45:16 No errors/warnings found in logs
18:45:16 Cilium pods: [cilium-nws8f cilium-tmz5l]
18:45:16 Netpols loaded:
18:45:16 CiliumNetworkPolicies loaded:
18:45:16 Endpoint Policy Enforcement:
18:45:16 Pod Ingress Egress
18:45:16 test-k8s2-848b6f7864-wtdsq
18:45:16 testclient-jtsw2
18:45:16 testclient-kbt2w
18:45:16 testds-fh7kp
18:45:16 testds-l22kb
18:45:16 coredns-687db6485c-scn4s
18:45:16 Cilium agent 'cilium-nws8f': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 19 Failed 0
18:45:16 Cilium agent 'cilium-tmz5l': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
18:45:16
18:45:16 </Checks>
18:45:16
18:45:16 [[ATTACHMENT|3db90426_K8sServicesTest_Checks_service_across_nodes_Supports_IPv4_fragments.zip]]
18:45:16
18:45:16 • Failure [65.024 seconds]
18:45:16 K8sServicesTest
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:395
18:45:16 Checks service across nodes
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:395
18:45:16 Supports IPv4 fragments [It]
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
18:45:16
18:45:16 Failed to account for IPv4 fragments (in)
[2020-04-10T01:45:16.250Z] Expected
[2020-04-10T01:45:16.250Z] <[]int | len:2, cap:2>: [21, 20]
[2020-04-10T01:45:16.250Z] To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{[16 24]}) %!s(*matchers.EqualMatcher=&{[20 20]})]
18:45:16
18:45:16 /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/k8sT/Services.go:548
18:45:16 ------------------------------
About this issue
- State: closed
- Created 4 years ago
- Comments: 16 (16 by maintainers)
Commits related to this issue
- test/K8sServices: send datagrams in one block for fragment support tests Tests for IPv4 fragments support introduced a flake in the CI. The test consists in sending a fragmented datagram and counting... — committed to cilium/cilium by qmonnet 4 years ago
- WIP: test: Fix fragment tracking test under KUBEPROXY=1 The fragment tracking test currently fails when kube-proxy is enabled because the source IP addresses are sometimes SNATed and the destination ... — committed to cilium/cilium by pchaigno 4 years ago
- test: Fix fragment tracking test under KUBEPROXY=1 The fragment tracking test currently fails when kube-proxy is enabled because the destination IP address and port are sometimes not DNATed. The awk ... — committed to cilium/cilium by pchaigno 4 years ago
- test/K8sServices: disable fragment tracking test for kernel 4.19 We are currently seeing flakes on the tests for fragment tracking on 4.19 (tracked in #10929), let's temporarily disable the test for ... — committed to cilium/cilium by qmonnet 4 years ago
- test/K8sServices: disable fragment tracking test for kernel 4.19 [ upstream commit 1120aed4144c6c558aaa9d7189e1864deef70513 ] We are currently seeing flakes on the tests for fragment tracking on 4.1... — committed to cilium/cilium by qmonnet 4 years ago
Is this the info you’re after? https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Kernel/2067/execution/node/130/log/
(From failure in #11977)
Given that we suddenly started seeing this again today, I think it'll be really interesting to look at the changes that went in during the last ~24h to see what could be causing it.
Oh wait a second. That first packet has 512 bytes of data, and I send my packets with:
It could be that, most of the time, the blocks are copied fast enough that netcat sends them as a single datagram. But at times there may be enough latency between them that netcat fires the first block on its own, resulting in an additional packet.
I’ll change it to:
or equivalent and submit a PR; hopefully that should fix the flake.
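For illustration, here is the kind of before/after change being described, expressed the way the Go tests typically build their shell commands. The block sizes, block counts, and nc flags are assumptions (the exact commands are not reproduced above); the point is simply to feed netcat the whole payload in one block instead of several small ones.

```go
// Illustrative only: values below are assumptions, not the real test values.
package k8sTest

import "fmt"

func fragmentCommands(dstIP string, dstPort int) (before, after string) {
	// Several small blocks piped to nc: usually coalesced into one UDP
	// datagram, but occasionally the first block is sent on its own,
	// producing the extra packet seen in the flake.
	before = fmt.Sprintf(
		"dd if=/dev/zero bs=512 count=4 status=none | nc -u -w 1 %s %d",
		dstIP, dstPort)

	// Same total payload handed over as a single block, so nc gets it in
	// one read and emits one (IP-fragmented) datagram.
	after = fmt.Sprintf(
		"dd if=/dev/zero bs=2048 count=1 status=none | nc -u -w 1 %s %d",
		dstIP, dstPort)
	return before, after
}
```

This single-block approach is what the "send datagrams in one block" commits listed above refer to.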
@joestringer That was my first idea as well, but @qmonnet did filter on the 5-tuple. It might be helpful to add some additional output to the test to deduce whether a specific path is failing.
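As a sketch of that suggestion, and assuming the test accounts for fragments via the BPF conntrack table, the extra output could be a per-node dump of the CT entries matching the datagram's 5-tuple, issued through the same `kubectl exec` path seen in the log above. The helper name and the grep pattern are hypothetical; `cilium bpf ct list global` is the existing command for listing the global CT table.

```go
// Hypothetical debug helper: build a command that dumps only the conntrack
// entries for the test flow from one Cilium pod. Run it for each Cilium pod
// on failure to see which path lost or gained packets.
package k8sTest

import "fmt"

func ctDumpCmd(ciliumPod, srcIP string, srcPort int, dstIP string, dstPort int) string {
	return fmt.Sprintf(
		`kubectl exec -n kube-system %s -- sh -c "cilium bpf ct list global | grep -E '%s:%d.*%s:%d'"`,
		ciliumPod, srcIP, srcPort, dstIP, dstPort)
}
```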