metallb: speaker observed a panic
- Version of MetalLB: v0.11.0
- Version of Kubernetes: v1.22
- Name and version of network addon: calico
- Whether you’ve configured kube-proxy for iptables or ipvs mode: iptables
- I created the service with `kubectl apply -f hello-svc1.yaml`; its definition is shown below:
```
[root@master deploy]# cat hello-svc1.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service2
  labels:
    name: hello-service2
spec:
  type: ClusterIP
  ports:
  - port: 18081
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello
```
- When I created the service, the speaker pod went into CrashLoopBackOff. Its log shows the following panic:
```
E0120 08:30:10.190897 1 runtime.go:78] Observed a panic: &errors.errorString{s:"unable to calculate an index entry for key \"submariner-k8s-broker/hello-service2-cluster-a\" on index \"ServiceName\": endpointSlice missing kubernetes.io/service-name label"} (unable to calculate an index entry for key "submariner-k8s-broker/hello-service2-cluster-a" on index "ServiceName": endpointSlice missing kubernetes.io/service-name label)
goroutine 31 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x15a5c00, 0xc000160cf0)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/runtime/runtime.go:48 +0x86
panic(0x15a5c00, 0xc000160cf0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/client-go/tools/cache.(*threadSafeMap).updateIndices(0xc0001a7b60, 0x0, 0x0, 0x176bf20, 0xc0002cdab0, 0xc00073c2d0, 0x2e)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/thread_safe_store.go:264 +0x4bd
k8s.io/client-go/tools/cache.(*threadSafeMap).Add(0xc0001a7b60, 0xc00073c2d0, 0x2e, 0x176bf20, 0xc0002cdab0)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/thread_safe_store.go:78 +0x145
k8s.io/client-go/tools/cache.(*cache).Add(0xc0000c7830, 0x176bf20, 0xc0002cdab0, 0x0, 0x0)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/store.go:150 +0x105
k8s.io/client-go/tools/cache.newInformer.func1(0x15debc0, 0xc00000cf48, 0x1, 0xc00000cf48)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/controller.go:404 +0x15b
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000546280, 0xc0001a7bf0, 0x0, 0x0, 0x0, 0x0)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/delta_fifo.go:507 +0x322
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0002ae3f0)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/controller.go:183 +0x42
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00057ff90)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00057ff90, 0x1946be0, 0xc0001a6000, 0xc000222001, 0xc0002b6000)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00057ff90, 0x3b9aca00, 0x0, 0xc00020c201, 0xc0002b6000)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/home/circleci/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc0002ae3f0, 0xc0002b6000)
/home/circleci/go/pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/controller.go:154 +0x2e5
created by go.universe.tf/metallb/internal/k8s.(*Client).Run
/home/circleci/project/internal/k8s/k8s.go:397 +0x375
```
Does anyone know what’s going on here? Thanks!
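For anyone else reading the trace: the crash originates in client-go's indexed cache. When an index function returns an error, `threadSafeMap.updateIndices` panics instead of propagating it, which takes down the informer goroutine. Here is a minimal, self-contained sketch that reproduces the same panic; the `ServiceName` index name comes from the error text, but the body of the index function is my assumption of what such an indexer looks like, not MetalLB's actual code:

```go
package main

import (
	"fmt"

	discovery "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// An indexed store with a "ServiceName" index, mirroring what the
	// stack trace shows: threadSafeMap.updateIndices panics when the
	// index function returns an error.
	store := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		"ServiceName": func(obj interface{}) ([]string, error) {
			slice, ok := obj.(*discovery.EndpointSlice)
			if !ok {
				return nil, fmt.Errorf("expected *EndpointSlice, got %T", obj)
			}
			svc, ok := slice.Labels[discovery.LabelServiceName]
			if !ok {
				// client-go escalates this error into the panic seen above.
				return nil, fmt.Errorf("endpointSlice missing %s label", discovery.LabelServiceName)
			}
			return []string{slice.Namespace + "/" + svc}, nil
		},
	})

	// A slice without kubernetes.io/service-name, like the Submariner
	// broker export submariner-k8s-broker/hello-service2-cluster-a.
	slice := &discovery.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "submariner-k8s-broker",
			Name:      "hello-service2-cluster-a",
		},
	}

	// Panics: unable to calculate an index entry for key
	// "submariner-k8s-broker/hello-service2-cluster-a" on index "ServiceName".
	_ = store.Add(slice)
}
```

Running this produces the same "unable to calculate an index entry …" panic as the speaker log, with the Submariner-exported slice as the offending key.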
Yep. The problem is that the error comes from a callback into the informer code, where we compute an index key for the EndpointSlices. A slice that doesn't carry the service-name label is, at the very least, malformed and may cause other issues in MetalLB. I'm not sure we can handle it gracefully, but I haven't dug deeply into the issue, so I may be wrong here.
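If graceful handling is on the table, one option (a sketch of a possible approach, not necessarily what MetalLB should or did ship) is to make the index function tolerant: return no keys for slices that lack the label, so they are left out of the index instead of panicking the informer:

```go
package k8s

import (
	"fmt"

	discovery "k8s.io/api/discovery/v1"
)

// serviceNameIndex is a hypothetical replacement index function: instead of
// returning an error (which client-go escalates to a panic), it returns no
// keys for slices that lack the service-name label, leaving them unindexed.
func serviceNameIndex(obj interface{}) ([]string, error) {
	slice, ok := obj.(*discovery.EndpointSlice)
	if !ok {
		return nil, fmt.Errorf("expected *discovery.EndpointSlice, got %T", obj)
	}
	svc, ok := slice.Labels[discovery.LabelServiceName]
	if !ok {
		// e.g. Submariner broker exports: not attributable to a Service,
		// so skip indexing rather than crashing the informer.
		return nil, nil
	}
	return []string{slice.Namespace + "/" + svc}, nil
}
```

The trade-off is that unlabeled slices become silently invisible to MetalLB, so logging a warning when one is skipped would keep the behavior observable.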