kube-proxy: segmentation fault attempting dual stack IPv4/IPv6
This is running version 1.16.2 of Kubernetes.
Using instructions from https://kubernetes.io/docs/concepts/services-networking/dual-stack/
I notice a similar bug report https://github.com/kubernetes/website/issues/16801
- translated to English: https://translate.google.com/translate?hl=en&sl=zh-CN&u=https://github.com/kubernetes/website/issues/16801&prev=search
- I notice that the IPv4 values of `--cluster-cidr` and `--service-cluster-ip-range` overlap. I'm wondering if this is an error, as my understanding is that they should be distinct.
- I notice that the IPv6 values don’t overlap.
- These instructions don’t mention anything about configuring Docker to use IPv6, which was done in the above bug report - is this step required? (It doesn’t seem to make any difference…)
However, the stack trace I am getting (from kube-proxy) looks rather different:
I1104 07:53:26.700832 1 node.go:135] Successfully retrieved node IP: 192.168.3.32
I1104 07:53:26.701526 1 server_others.go:176] Using ipvs Proxier.
I1104 07:53:26.701693 1 server_others.go:178] creating dualStackProxier for ipvs.
W1104 07:53:26.711223 1 proxier.go:420] IPVS scheduler not specified, use rr by default
W1104 07:53:26.712152 1 proxier.go:420] IPVS scheduler not specified, use rr by default
W1104 07:53:26.712217 1 ipset.go:116] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W1104 07:53:26.712258 1 ipset.go:116] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I1104 07:53:26.713356 1 server.go:529] Version: v1.16.2
I1104 07:53:26.715168 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1104 07:53:26.717310 1 config.go:131] Starting endpoints config controller
I1104 07:53:26.721953 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1104 07:53:26.719378 1 config.go:313] Starting service config controller
I1104 07:53:26.722334 1 shared_informer.go:197] Waiting for caches to sync for service config
W1104 07:53:26.726310 1 meta_proxier.go:102] failed to add endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
W1104 07:53:26.726493 1 meta_proxier.go:102] failed to add endpoints kube-system/kube-dns with error failed to identify ipfamily for endpoints (no addresses)
W1104 07:53:26.726552 1 meta_proxier.go:102] failed to add endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
E1104 07:53:26.726835 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 56 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x169fd00, 0x2ab03f0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x169fd00, 0x2ab03f0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/pkg/proxy/ipvs.(*metaProxier).OnServiceAdd(0xc0003c4020, 0xc000708400)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/ipvs/meta_proxier.go:61 +0x2b
k8s.io/kubernetes/pkg/proxy/config.(*ServiceConfig).handleAddService(0xc0003c5e00, 0x1871740, 0xc000708400)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/config/config.go:333 +0x90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:198
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xc00037e600, 0xc00049c6e0, 0xc000044000)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:658 +0x26d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc0006b2e38, 0x42c62f, 0xc00049c710)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:292 +0x51
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:652 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00052f768)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b2f68, 0xdf8475800, 0x0, 0x42ce01, 0xc0007046c0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc0003e2580)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:650 +0x9c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000602920, 0xc0005403d0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x14dde1b]
goroutine 56 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x169fd00, 0x2ab03f0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/pkg/proxy/ipvs.(*metaProxier).OnServiceAdd(0xc0003c4020, 0xc000708400)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/ipvs/meta_proxier.go:61 +0x2b
k8s.io/kubernetes/pkg/proxy/config.(*ServiceConfig).handleAddService(0xc0003c5e00, 0x1871740, 0xc000708400)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/config/config.go:333 +0x90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:198
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xc00037e600, 0xc00049c6e0, 0xc000044000)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:658 +0x26d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc0006b2e38, 0x42c62f, 0xc00049c710)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:292 +0x51
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:652 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00052f768)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b2f68, 0xdf8475800, 0x0, 0x42ce01, 0xc0007046c0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc0003e2580)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:650 +0x9c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000602920, 0xc0005403d0)
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
It is very possible I have done something wrong, but in any case, it probably shouldn’t segfault on error.
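Looking at the top of the trace, the panic comes from metaProxier.OnServiceAdd at meta_proxier.go:61. Below is a minimal, self-contained sketch of the failure mode that line suggests to me - not the verbatim Kubernetes source - assuming (as the 1.16 API appears to) that Spec.IPFamily is a pointer field that can be left nil, e.g. when the apiserver isn’t running with the dual-stack feature gate:

```go
// Sketch only - not the Kubernetes source. It reproduces the kind of
// nil-pointer dereference the trace points at: OnServiceAdd dereferencing a
// Service whose Spec.IPFamily pointer was never populated.
package main

import "fmt"

type IPFamily string

const IPv4Protocol IPFamily = "IPv4"

// ServiceSpec mimics the relevant corner of the 1.16 API, where the
// dual-stack alpha added IPFamily as a *pointer* field.
type ServiceSpec struct {
	IPFamily *IPFamily
}

type Service struct {
	Spec ServiceSpec
}

type metaProxier struct{}

// OnServiceAdd picks the per-family proxier by dereferencing Spec.IPFamily.
// If the field is nil (e.g. the apiserver never defaulted it), the dereference
// panics with "invalid memory address or nil pointer dereference".
func (p *metaProxier) OnServiceAdd(svc *Service) {
	if *svc.Spec.IPFamily == IPv4Protocol { // panics when IPFamily is unset
		fmt.Println("dispatch to the IPv4 proxier")
		return
	}
	fmt.Println("dispatch to the IPv6 proxier")
}

func main() {
	svc := &Service{} // Spec.IPFamily deliberately left nil
	(&metaProxier{}).OnServiceAdd(svc)
}
```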
I have also tried exploring the “failed to add endpoints” message; however, as far as I can see this actually happens after the segfault (despite the order in the log file), and my configuration of CIDRs looks correct to me. I cannot see any reason why it would be struggling to get endpoints, particularly as IPv4-only operation works fine (see below). My understanding from looking at the code is that it should work even if it only gets IPv4 addresses and no IPv6 addresses - although I could be wrong.
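For completeness, this is roughly how I read the meta proxier’s Endpoints classification - again a sketch under my assumptions, not the actual source - which would make those warnings harmless: the leader-election Endpoints and a not-yet-ready kube-dns simply have nothing to classify.

```go
// Rough sketch of the IP-family classification I believe the meta proxier
// performs on Endpoints: look at the first address of the first subset.
// Empty objects (leader election, DNS before any pods exist) cannot be
// classified, which matches the "no subsets" / "no addresses" warnings.
package main

import (
	"fmt"
	"net"
)

type EndpointAddress struct{ IP string }
type EndpointSubset struct{ Addresses []EndpointAddress }
type Endpoints struct{ Subsets []EndpointSubset }

func endpointsIPFamily(ep *Endpoints) (string, error) {
	if len(ep.Subsets) == 0 {
		return "", fmt.Errorf("no subsets")
	}
	if len(ep.Subsets[0].Addresses) == 0 {
		return "", fmt.Errorf("no addresses")
	}
	ip := net.ParseIP(ep.Subsets[0].Addresses[0].IP)
	if ip == nil {
		return "", fmt.Errorf("could not parse address")
	}
	if ip.To4() != nil {
		return "IPv4", nil
	}
	return "IPv6", nil
}

func main() {
	// An empty Endpoints object, like kube-system/kube-scheduler at startup.
	if _, err := endpointsIPFamily(&Endpoints{}); err != nil {
		fmt.Println("warning only, not fatal:", err)
	}
}
```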
If I set `IPv6DualStack: false` under `featureGates` and remove my IPv6 subnet from the `clusterCIDR` value in my kube-proxy ConfigMap, everything works perfectly.
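For reference, the relevant fragment of my kube-proxy ConfigMap has roughly this shape (the CIDRs below are placeholders, not my actual subnets):

```yaml
# Illustrative only - placeholder CIDRs, not my real subnets.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
featureGates:
  IPv6DualStack: true                          # false => the working IPv4-only setup
clusterCIDR: 10.244.0.0/16,fd00:10:244::/64    # drop the IPv6 subnet for IPv4-only
```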
About this issue
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 15 (1 by maintainers)
Same error on 1.17.3. Here are my steps: