kubernetes: kubernetes-soak-continuous-e2e-gke: broken test run
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Aug 5 16:19:48.006: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-b71d6a87-nxbv:
container "runtime": expected RSS memory (MB) < 314572800; got 322433024
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153
Issues about this test specifically: #26982
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:724
Expected error:
<*errors.errorString | 0xc821d7e000>: {
s: "Error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gke/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.146.98 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gke/workspace/.kube/config create -f test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml --namespace=e2e-tests-sched-pred-42gfr] [] <nil> the path \"test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml\" does not exist\n [] <nil> 0xc821d78b80 exit status 1 <nil> true [0xc820539198 0xc8205391b0 0xc8205391c8] [0xc820539198 0xc8205391b0 0xc8205391c8] [0xc8205391a8 0xc8205391c0] [0xaba950 0xaba950] 0xc821d6e960}:\nCommand stdout:\n\nstderr:\nthe path \"test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml\" does not exist\n\nerror:\nexit status 1\n",
}
Error running &{/jenkins-master-data/jobs/kubernetes-soak-weekly-deploy-gke/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.146.98 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-soak-weekly-deploy-gke/workspace/.kube/config create -f test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml --namespace=e2e-tests-sched-pred-42gfr] [] <nil> the path "test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml" does not exist
[] <nil> 0xc821d78b80 exit status 1 <nil> true [0xc820539198 0xc8205391b0 0xc8205391c8] [0xc820539198 0xc8205391b0 0xc8205391c8] [0xc8205391a8 0xc8205391c0] [0xaba950 0xaba950] 0xc821d6e960}:
Command stdout:
stderr:
the path "test/e2e/testing-manifests/node-selection/pod-with-node-affinity.yaml" does not exist
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2006
Issues about this test specifically: #29816 #30018
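The failure above is not a scheduling bug: `kubectl create -f` is pointed at a manifest path that does not exist in the Jenkins workspace. For context, the test exercises the legacy pattern of embedding NodeAffinity as a JSON string in an annotation. The actual file contents are not in this log, so the sketch below is illustrative (annotation key per the alpha API of that era; the label key, values, and image are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
  annotations:
    # NodeAffinity embedded as a JSON string in the annotation value --
    # the pattern this SchedulerPredicates test validates.
    scheduler.alpha.kubernetes.io/affinity: >
      {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [{"matchExpressions": [
          {"key": "kubernetes.io/e2e-az-name", "operator": "In",
           "values": ["e2e-az1", "e2e-az2"]}]}]}}}
spec:
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0
```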
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Aug 5 13:25:55.168: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-b71d6a87-nxbv:
container "runtime": expected RSS memory (MB) < 89128960; got 91500544
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153
Issues about this test specifically: #26784 #28384
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Aug 5 14:53:30.486: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-b71d6a87-qrai:
container "runtime": expected RSS memory (MB) < 157286400; got 157679616
node gke-jenkins-e2e-default-pool-b71d6a87-rqt8:
container "runtime": expected RSS memory (MB) < 157286400; got 159928320
node gke-jenkins-e2e-default-pool-b71d6a87-nxbv:
container "runtime": expected RSS memory (MB) < 157286400; got 170569728
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:153
Issues about this test specifically: #28220
About this issue
- State: closed
- Created 8 years ago
- Comments: 727 (717 by maintainers)
@girishkalele, it's a known issue that Docker uses more memory on GCI, and that is being investigated. In this case, however, the GKE build was using ContainerVM (not GCI). The problem is that Docker v1.11 has a memory leak, and in the soak cluster the leak eventually grows large enough to fail the test. See #28124 for more details.