kubernetes: kubelet: panic runtime error: invalid memory address or nil pointer dereference in net.IsIPv4String

What happened:

In https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1401222770964041728 one of the kubelets panicked with an error like:

I0605 19:32:06.921560    2102 kubelet.go:1976] "SyncLoop (PLEG): event for pod" pod="test-b7x1na-8/small-deployment-245-84cdf6f8f5-znc2g" event=&{ID:3e8a37c1-dbb0-434f-b51a-4af2d5f7b25e Type:ContainerDied Data:4085c130ddda52ae1d0655ecb410ff32f452f7f53cceb368e73d42dbef6fbbb5}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x55db470207b9]
goroutine 812 [running]:
net.ParseIP(0x0, 0xa, 0x55db4b52cca0, 0xc0011e0d01, 0xc003e658b0)
	/usr/local/go/src/net/ip.go:680 +0x39
k8s.io/kubernetes/vendor/k8s.io/utils/net.IsIPv4String(0x0, 0xa, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/net/net.go:147 +0x3b
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).convertStatusToAPIStatus(0xc000691500, 0xc001765c00, 0xc001059a80, 0xc00027eba0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1591 +0x187
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).generateAPIPodStatus(0xc000691500, 0xc001765c00, 0xc001059a80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1516 +0x1d1
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanUpContainersInPod(0xc000691500, 0xc0027e9650, 0x24, 0xc001260f00, 0x40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2289 +0x150
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoopIteration(0xc000691500, 0xc000bf3f20, 0x55db4bdbc770, 0xc000691500, 0xc0018476e0, 0xc001847740, 0xc000f485a0, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1986 +0x6ae
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoop(0xc000691500, 0xc000bf3f20, 0x55db4bdbc770, 0xc000691500)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1886 +0x35b
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run(0xc000691500, 0xc000bf3f20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1466 +0x28c
created by k8s.io/kubernetes/cmd/kubelet/app.startKubelet
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:1200 +0x57
kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
kubelet.service: Failed with result 'exit-code'.
kubelet.service: Consumed 11min 36.305s CPU time
kubelet.service: Service RestartSec=10s expired, scheduling restart.
kubelet.service: Scheduled restart job, restart counter is at 1.


https://storage.googleapis.com/sig-scalability-logs/ci-kubernetes-e2e-gce-scale-performance/1401222770964041728/gce-scale-cluster-minion-group-1-v2lp/kubelet.log
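One detail worth noting in the trace: the `net.ParseIP` frame shows a first argument word of `0x0` (a nil string data pointer) paired with a length word of `0xa`. A string header with a nil pointer but nonzero length is not something ordinary Go code can produce; it is consistent with a torn string header, e.g. from a racy write. This is a hypothetical reconstruction, not a confirmed cause; the `tornString` and `parsePanics` helpers below are made up for illustration. The sketch uses `unsafe` to build such a string and shows that `net.ParseIP` then panics inside the standard library, matching the crash frame:

```go
package main

import (
	"fmt"
	"net"
	"unsafe"
)

// tornString builds a string whose header has a nil data pointer but a
// nonzero length, mimicking what a racy write to a string field might
// briefly expose. Normal Go code cannot create such a value.
func tornString(length int) string {
	var s string
	hdr := (*struct {
		Data unsafe.Pointer
		Len  int
	})(unsafe.Pointer(&s))
	hdr.Data = nil
	hdr.Len = length
	return s
}

// parsePanics reports whether net.ParseIP panics on the given input.
func parsePanics(s string) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	net.ParseIP(s)
	return false
}

func main() {
	// 0xa matches the length word seen in the net.ParseIP crash frame:
	// reading the first byte dereferences the nil data pointer.
	fmt.Println(parsePanics(tornString(0xa)))
	fmt.Println(parsePanics("10.0.0.1")) // a well-formed string is fine
}
```

In the kubelet the panic was unrecovered, so the same nil dereference took down the whole process.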

What you expected to happen:

kubelet doesn’t panic

How to reproduce it (as minimally and precisely as possible):

Unfortunately, I don't know, but this is the second time I'm seeing this kind of issue in our tests. I hope the stack trace will be helpful in debugging this.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 17 (17 by maintainers)

Most upvoted comments

@cndoit18 That also crashes when dereferencing *point rather than ever making it to net.ParseIP. I don't think there's an easy way to break the string length like this, because SetString under the hood dereferences the value pointer and assigns the new value.
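To illustrate the distinction being made here: if a nil *string were dereferenced at the call site, the panic frame would point at the caller, because the crash happens while copying the string header, before net.ParseIP is ever entered. The observed trace instead panics inside ip.go, which is why a plain nil pointer in the caller doesn't fit. A minimal sketch (the passDeref helper is hypothetical):

```go
package main

import (
	"fmt"
	"net"
)

// passDeref dereferences p to pass its value to net.ParseIP. With a nil
// pointer, the panic occurs at the *p dereference in this function, so the
// crash frame would show passDeref, not net.ParseIP.
func passDeref(p *string) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	net.ParseIP(*p) // nil *string panics here, before the call is made
	return false
}

func main() {
	fmt.Println(passDeref(nil)) // nil pointer: panics at the deref site
	s := "10.0.0.1"
	fmt.Println(passDeref(&s)) // valid pointer: no panic
}
```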