kubernetes: Build failure for make test-e2e-node
What happened:
The build fails when running make test-e2e-node:
I0526 21:26:18.166413 56943 run_local.go:76] Running command: /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/ginkgo -nodes=8 -skip="\[Flaky\]|\[Slow\]|\[Serial\]" -untilItFails=false /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/e2e_node.test -- --container-runtime=docker --alsologtostderr --v 4 --report-dir=/tmp/_artifacts/200526T212556 --node-name owlthebird --kubelet-flags="--container-runtime=docker" --kubelet-flags="--network-plugin= --cni-bin-dir="
Running Suite: E2eNode Suite
============================
Random Seed: 1590553578 - Will randomize all specs
Will run 321 specs
Running in parallel across 8 nodes
Failure [120.884 seconds]
[BeforeSuite] BeforeSuite
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:174
Node 1 disappeared before completing BeforeSuite
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:174
------------------------------
Failure [120.856 seconds]
[BeforeSuite] BeforeSuite
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:174
Node 1 disappeared before completing BeforeSuite
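As a side note, the -skip regex passed to ginkgo in the command above can be sanity-checked in isolation. This is just a sketch: grep -E stands in for ginkgo's regex matching, and the spec names below are made up for illustration.

```shell
# The e2e_node run skips any spec whose name matches this regex
# (taken from the ginkgo command line in the log above).
SKIP='\[Flaky\]|\[Slow\]|\[Serial\]'

# Hypothetical spec names; only the untagged one should survive the filter.
printf '%s\n' \
  '[Flaky] kubelet restart test' \
  '[Serial] node resize test' \
  'plain pod lifecycle test' \
  | grep -Ev "$SKIP"
# -> plain pod lifecycle test
```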
When I use a smaller parallelism, e.g. make test-e2e-node PARALLELISM=1, the "Node disappeared" error goes away, but the build still fails:
I0527 13:23:59.250210 24454 server.go:176] Waiting for server "kubelet" start command to complete after initial health check failed
F0527 13:23:59.250267 24454 server.go:180] Restart loop readinessCheck failed for server "kubelet" start-command: `/usr/bin/systemd-run -p Delegate=true --unit=kubelet-20200527T132159.service --slice=runtime.slice --remain-after-exit /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/kubelet --kubeconfig /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --dynamic-config-dir /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/cni/bin --cni-conf-dir /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/cni/net.d --cni-cache-dir /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/cni/cache --hostname-override owlthebird --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /home/zhi/os_k8s/go/src/k8s.io/kubernetes/_output/local/go/bin/kubelet-config --container-runtime=docker --network-plugin= --cni-bin-dir=`, kill-command: `/bin/systemctl kill kubelet-20200527T132159.service`, restart-command: `/bin/systemctl restart kubelet-20200527T132159.service`, health-check: [http://127.0.0.1:10255/healthz], output-file: "kubelet.log"
What you expected to happen:
The build succeeds.
How to reproduce it (as minimally and precisely as possible):
Go to the repository root and run make test-e2e-node.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v0.0.0-master+$Format:%h$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.0-beta.0.192+a79c711191d5c0-dirty", GitCommit:"a79c711191d5c0a9dca4fbaba26ef35e476f5871", GitTreeState:"dirty", BuildDate:"2020-05-27T02:31:20Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: N/A
- OS (e.g. cat /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
- Kernel (e.g. uname -a):
Linux XXXXX 5.3.0-53-generic #47~18.04.1-Ubuntu SMP
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (17 by maintainers)
I looked closely at the kubelet log and it looks like a swap issue. I turned off swap and, although the test still failed, the original errors went away!
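For anyone hitting the same thing, a minimal pre-flight swap check (a sketch assuming a Linux host; the swapoff/fstab commands are shown as comments because they need root):

```shell
# The kubelet fails its health check when swap is active (its default is
# --fail-swap-on=true), which matches the restart loop in the log above.
cat /proc/swaps   # only the header line => no active swap devices

# If any swap devices are listed, disable them for this boot (needs root):
#   sudo swapoff -a
# and comment out the swap entries in /etc/fstab so swap stays off after reboot.
```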
/assign
@ZhiFeng1993 the first thing to check off: do you have swap enabled? What do you see if you run ... ?
This looks like a run on a local system or local VM and not remotely. Is that right?
The kubelet.log can be found with journalctl -u kubelet-20200527T132159.service. I had an issue with swap preventing me from running locally.