cilium: CI: K8sServicesTest Checks graceful termination of service endpoints Checks client terminates gracefully on service endpoint deletion

Test Name

K8sServicesTest Checks graceful termination of service endpoints Checks client terminates gracefully on service endpoint deletion

Failure Output

FAIL: panic: resolve tcp address failed [graceful-term-svc.default.svc.cluster.local.:8081]:: lookup graceful-term-svc.default.svc.cluster.local.: no such host

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
panic: resolve tcp address failed [graceful-term-svc.default.svc.cluster.local.:8081]:: lookup graceful-term-svc.default.svc.cluster.local.: no such host

goroutine 1 [running]:
main.panicOnErr(...)
	/app/main.go:69
main.main()
	/app/main.go:86 +0x26f
 is not in the output after timeout
Expected
    <*errors.errorString | 0xc002006b40>: {
        s: "client received is not in the output after timeout: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/Services.go:1636
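For context, the panic is consistent with a client that resolves the service hostname before dialing and aborts on any error. The sketch below is a hypothetical reconstruction based only on the stack trace above; the actual graceful-term client in the Cilium tree may differ:

```go
// Hypothetical reconstruction of the failing client's startup path, inferred
// only from the stack trace (main calls panicOnErr after resolving the service
// address). Not the actual test client source.
package main

import (
	"fmt"
	"net"
)

func panicOnErr(err error, msg string) {
	if err != nil {
		panic(fmt.Sprintf("%s:: %v", msg, err))
	}
}

func main() {
	const svc = "graceful-term-svc.default.svc.cluster.local.:8081"

	// This is the step that fails in the CI run above: the DNS lookup of the
	// service FQDN returns "no such host", so the client panics before it
	// ever connects to the server.
	addr, err := net.ResolveTCPAddr("tcp", svc)
	panicOnErr(err, fmt.Sprintf("resolve tcp address failed [%s]", svc))

	conn, err := net.DialTCP("tcp", nil, addr)
	panicOnErr(err, "dial failed")
	defer conn.Close()

	// A graceful-termination client would then read from the connection until
	// the server closes it on endpoint deletion.
	buf := make([]byte, 256)
	for {
		n, err := conn.Read(buf)
		if n > 0 {
			fmt.Printf("client received: %s\n", buf[:n])
		}
		if err != nil {
			return
		}
	}
}
```

Because the pod restart policy re-runs the binary, the repeated DNS failure shows up as 5 restarts of graceful-term-client in the pod listing below, and the test times out waiting for "client received" in the client's output.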

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 7
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to restore endpoint, ignoring
BPF masquerade requires NodePort (--enable-node-port=\
Cilium pods: [cilium-l4nxw cilium-s29kv]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
graceful-term-server                    
testclient-6dm6z                        
testclient-fmj5v                        
coredns-755cd654d4-fhrbx                
grafana-5747bcc8f9-cp4zn                
prometheus-655fb888d7-bph79             
graceful-term-client                    
Cilium agent 'cilium-l4nxw': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 46 Failed 0
Cilium agent 'cilium-s29kv': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 23 Failed 0
Cilium pods: [cilium-l4nxw cilium-s29kv]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-655fb888d7-bph79             
graceful-term-client                    
graceful-term-server                    
testclient-6dm6z                        
testclient-fmj5v                        
coredns-755cd654d4-fhrbx                
grafana-5747bcc8f9-cp4zn                
Cilium agent 'cilium-l4nxw': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 46 Failed 0
Cilium agent 'cilium-s29kv': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 23 Failed 0


Standard Error

17:27:17 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
17:27:17 STEP: Installing Cilium
17:27:20 STEP: Waiting for Cilium to become ready
17:27:41 STEP: Validating if Kubernetes DNS is deployed
17:27:41 STEP: Checking if deployment is ready
17:27:41 STEP: Checking if kube-dns service is plumbed correctly
17:27:41 STEP: Checking if DNS can resolve
17:27:41 STEP: Checking if pods have identity
17:27:42 STEP: Kubernetes DNS is up and operational
17:27:42 STEP: Validating Cilium Installation
17:27:42 STEP: Performing Cilium controllers preflight check
17:27:42 STEP: Performing Cilium health check
17:27:42 STEP: Performing Cilium status preflight check
17:27:43 STEP: Performing Cilium service preflight check
17:27:43 STEP: Performing K8s service preflight check
17:27:43 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-l4nxw': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

17:27:43 STEP: Performing Cilium controllers preflight check
17:27:43 STEP: Performing Cilium health check
17:27:43 STEP: Performing Cilium status preflight check
17:27:45 STEP: Performing Cilium service preflight check
17:27:45 STEP: Performing K8s service preflight check
17:27:45 STEP: Performing Cilium status preflight check
17:27:45 STEP: Performing Cilium health check
17:27:45 STEP: Performing Cilium controllers preflight check
17:27:47 STEP: Performing Cilium service preflight check
17:27:47 STEP: Performing K8s service preflight check
17:27:47 STEP: Performing Cilium status preflight check
17:27:47 STEP: Performing Cilium controllers preflight check
17:27:47 STEP: Performing Cilium health check
17:27:48 STEP: Performing Cilium service preflight check
17:27:48 STEP: Performing K8s service preflight check
17:27:48 STEP: Performing Cilium status preflight check
17:27:48 STEP: Performing Cilium controllers preflight check
17:27:48 STEP: Performing Cilium health check
17:27:49 STEP: Performing Cilium service preflight check
17:27:49 STEP: Performing K8s service preflight check
17:27:50 STEP: Waiting for cilium-operator to be ready
17:27:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
17:27:51 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
17:27:51 STEP: Running BeforeEach block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
17:27:51 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-server")
17:27:54 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-server") => <nil>
17:27:55 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-client")
17:27:55 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-client") => <nil>
FAIL: panic: resolve tcp address failed [graceful-term-svc.default.svc.cluster.local.:8081]:: lookup graceful-term-svc.default.svc.cluster.local.: no such host

goroutine 1 [running]:
main.panicOnErr(...)
	/app/main.go:69
main.main()
	/app/main.go:86 +0x26f
 is not in the output after timeout
Expected
    <*errors.errorString | 0xc002006b40>: {
        s: "client received is not in the output after timeout: 4m0s timeout expired",
    }
to be nil
=== Test Finished at 2021-11-12T17:31:55Z====
17:31:55 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
17:31:56 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-cp4zn           1/1     Running   0          36m     10.0.1.177      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-bph79        1/1     Running   0          36m     10.0.1.184      k8s2   <none>           <none>
	 default             graceful-term-client               1/1     Running   5          4m6s    10.0.1.58       k8s2   <none>           <none>
	 default             graceful-term-server               1/1     Running   0          4m6s    10.0.1.106      k8s2   <none>           <none>
	 default             testclient-6dm6z                   1/1     Running   0          4m6s    10.0.1.74       k8s2   <none>           <none>
	 default             testclient-fmj5v                   1/1     Running   0          4m6s    10.0.0.139      k8s1   <none>           <none>
	 kube-system         cilium-l4nxw                       1/1     Running   0          4m37s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-54b586fdc8-j8vsh   1/1     Running   0          4m37s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-54b586fdc8-tbhxc   1/1     Running   0          4m37s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-s29kv                       1/1     Running   0          4m37s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         coredns-755cd654d4-fhrbx           1/1     Running   0          35m     10.0.1.141      k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          38m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          38m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          38m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-jzrl8                   1/1     Running   0          38m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-x65z7                   1/1     Running   0          37m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          38m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-p6j4q                 1/1     Running   0          36m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-sxgmc                 1/1     Running   0          36m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-htpsf               1/1     Running   0          37m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         registry-adder-n9zxd               1/1     Running   0          37m     192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-l4nxw cilium-s29kv]
cmd: kubectl exec -n kube-system cilium-l4nxw -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.1.141:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.1.141:9153      
	 4    10.100.213.67:3000   ClusterIP      1 => 10.0.1.177:3000      
	 5    10.107.167.3:9090    ClusterIP      1 => 10.0.1.184:9090      
	 6    10.98.9.51:8081      ClusterIP      1 => 10.0.1.106:8081      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l4nxw -c cilium-agent -- cilium bpf lb list
Exitcode: 0 
Stdout:
 	 SERVICE ADDRESS      BACKEND ADDRESS
	 10.107.167.3:9090    0.0.0.0:0 (5) [ClusterIP, non-routable]   
	                      10.0.1.184:9090 (5)                       
	 10.100.213.67:3000   10.0.1.177:3000 (4)                       
	                      0.0.0.0:0 (4) [ClusterIP, non-routable]   
	 10.98.9.51:8081      10.0.1.106:8081 (6)                       
	                      0.0.0.0:0 (6) [ClusterIP, non-routable]   
	 10.96.0.1:443        0.0.0.0:0 (1) [ClusterIP, non-routable]   
	                      192.168.36.11:6443 (1)                    
	 10.96.0.10:53        10.0.1.141:53 (2)                         
	                      0.0.0.0:0 (2) [ClusterIP, non-routable]   
	 10.96.0.10:9153      10.0.1.141:9153 (3)                       
	                      0.0.0.0:0 (3) [ClusterIP, non-routable]   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s29kv -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.10:53        ClusterIP      1 => 10.0.1.141:53        
	 2    10.96.0.10:9153      ClusterIP      1 => 10.0.1.141:9153      
	 3    10.100.213.67:3000   ClusterIP      1 => 10.0.1.177:3000      
	 4    10.107.167.3:9090    ClusterIP      1 => 10.0.1.184:9090      
	 5    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 6    10.98.9.51:8081      ClusterIP      1 => 10.0.1.106:8081      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s29kv -c cilium-agent -- cilium bpf lb list
Exitcode: 0 
Stdout:
 	 SERVICE ADDRESS      BACKEND ADDRESS
	 10.96.0.10:53        0.0.0.0:0 (1) [ClusterIP, non-routable]   
	                      10.0.1.141:53 (1)                         
	 10.96.0.1:443        0.0.0.0:0 (5) [ClusterIP, non-routable]   
	                      192.168.36.11:6443 (5)                    
	 10.100.213.67:3000   0.0.0.0:0 (3) [ClusterIP, non-routable]   
	                      10.0.1.177:3000 (3)                       
	 10.98.9.51:8081      0.0.0.0:0 (6) [ClusterIP, non-routable]   
	                      10.0.1.106:8081 (6)                       
	 10.107.167.3:9090    10.0.1.184:9090 (4)                       
	                      0.0.0.0:0 (4) [ClusterIP, non-routable]   
	 10.96.0.10:9153      10.0.1.141:9153 (2)                       
	                      0.0.0.0:0 (2) [ClusterIP, non-routable]   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
===================== TEST FAILED =====================
17:32:11 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-cp4zn           1/1     Running   0          36m     10.0.1.177      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-bph79        1/1     Running   0          36m     10.0.1.184      k8s2   <none>           <none>
	 default             graceful-term-client               0/1     Error     5          4m21s   10.0.1.58       k8s2   <none>           <none>
	 default             graceful-term-server               1/1     Running   0          4m21s   10.0.1.106      k8s2   <none>           <none>
	 default             testclient-6dm6z                   1/1     Running   0          4m21s   10.0.1.74       k8s2   <none>           <none>
	 default             testclient-fmj5v                   1/1     Running   0          4m21s   10.0.0.139      k8s1   <none>           <none>
	 kube-system         cilium-l4nxw                       1/1     Running   0          4m52s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-54b586fdc8-j8vsh   1/1     Running   0          4m52s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-54b586fdc8-tbhxc   1/1     Running   0          4m52s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-s29kv                       1/1     Running   0          4m52s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         coredns-755cd654d4-fhrbx           1/1     Running   0          35m     10.0.1.141      k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-jzrl8                   1/1     Running   0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-x65z7                   1/1     Running   0          37m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-p6j4q                 1/1     Running   0          36m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-sxgmc                 1/1     Running   0          36m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-htpsf               1/1     Running   0          37m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         registry-adder-n9zxd               1/1     Running   0          37m     192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-l4nxw cilium-s29kv]
cmd: kubectl exec -n kube-system cilium-l4nxw -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.1.141:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.1.141:9153      
	 4    10.100.213.67:3000   ClusterIP      1 => 10.0.1.177:3000      
	 5    10.107.167.3:9090    ClusterIP      1 => 10.0.1.184:9090      
	 6    10.98.9.51:8081      ClusterIP      1 => 10.0.1.106:8081      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l4nxw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                              
	 342        Disabled           Disabled          9833       k8s:app=grafana                                                                    fd02::19b   10.0.1.177   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 584        Disabled           Disabled          11869      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::13c   10.0.1.141   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                         
	                                                            k8s:k8s-app=kube-dns                                                                                                
	 709        Disabled           Disabled          50156      k8s:app=graceful-term-server                                                       fd02::1d6   10.0.1.106   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                              
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	 1055       Disabled           Disabled          1151       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::17a   10.0.1.74    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	                                                            k8s:zgroup=testDSClient                                                                                             
	 1293       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                  ready   
	                                                            reserved:host                                                                                                       
	 2089       Disabled           Disabled          4          reserved:health                                                                    fd02::170   10.0.1.91    ready   
	 3004       Disabled           Disabled          45859      k8s:app=prometheus                                                                 fd02::1ba   10.0.1.184   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                              
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 3341       Disabled           Disabled          35337      k8s:app=graceful-term-client                                                       fd02::1a5   10.0.1.58    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                              
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s29kv -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.10:53        ClusterIP      1 => 10.0.1.141:53        
	 2    10.96.0.10:9153      ClusterIP      1 => 10.0.1.141:9153      
	 3    10.100.213.67:3000   ClusterIP      1 => 10.0.1.177:3000      
	 4    10.107.167.3:9090    ClusterIP      1 => 10.0.1.184:9090      
	 5    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 6    10.98.9.51:8081      ClusterIP      1 => 10.0.1.106:8081      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s29kv -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                   
	 937        Disabled           Disabled          1151       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::77   10.0.0.139   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                  
	 1656       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                       ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                
	                                                            k8s:node-role.kubernetes.io/master                                                                       
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                              
	                                                            reserved:host                                                                                            
	 2591       Disabled           Disabled          4          reserved:health                                                          fd02::3d   10.0.0.53    ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
17:32:26 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
17:32:26 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|c2e271e6_K8sServicesTest_Checks_graceful_termination_of_service_endpoints_Checks_client_terminates_gracefully_on_service_endpoint_deletion.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//144/artifact/c2e271e6_K8sServicesTest_Checks_graceful_termination_of_service_endpoints_Checks_client_terminates_gracefully_on_service_endpoint_deletion.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//144/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_144_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/144/

If this is a duplicate of an existing flake, comment ‘Duplicate of #<issue-number>’ and close this issue.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 24 (11 by maintainers)

Most upvoted comments

The graceful-term-svc is the only service that selects this backend. When the backend shows up in the BPF map, it would be for the service's cluster IP.

@aditighag Imagine that someone reuses this method, without knowing its internals, for a backend that belongs to multiple services.

The test is failing because of an unknown issue in CoreDNS.

I’m wondering why this is happening more often for this test case than for others.

How would a revert help? It would just mask the issue, as I’m seeing the NXDOMAIN (no such domain) error for many other test services (e.g., #17919).

We quarantine flaky tests to save developers the time spent investigating why their PR failed.
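
Regarding the NXDOMAIN errors mentioned above, a small in-cluster probe along the lines of the sketch below (hypothetical, not part of the test suite) can help distinguish a genuine NXDOMAIN for the fully qualified service name from a transient resolver failure:

```go
// Hypothetical standalone probe; run it from a pod inside the cluster so that
// /etc/resolv.conf points at the cluster DNS (CoreDNS).
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
	"time"
)

func main() {
	// Trailing dot: fully qualified name, no search-path expansion.
	const fqdn = "graceful-term-svc.default.svc.cluster.local."
	r := &net.Resolver{}
	for i := 0; i < 5; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		addrs, err := r.LookupHost(ctx, fqdn)
		cancel()
		switch {
		case err == nil:
			fmt.Println("resolved:", addrs)
		default:
			var dnsErr *net.DNSError
			if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
				// The record is really absent in CoreDNS.
				fmt.Println("NXDOMAIN:", err)
			} else {
				// Timeout, SERVFAIL, or similar transient failure.
				fmt.Println("transient resolver error:", err)
			}
		}
		time.Sleep(time.Second)
	}
}
```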