kube-state-metrics: panic after fixing jobs rbac
Hi
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
After upgrading from 0.4.1 to v1.0.1 I found I needed some further RBAC permissions, no problem.
Once I added the jobs rule as below:
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
I started receiving:
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] panic: runtime error: invalid memory address or nil pointer dereference
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x11e0172]
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4]
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] goroutine 136 [running]:
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] time.time.Time.Unix(...)
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] /usr/local/google/home/kawych/go-tools/src/k8s.io/kube-state-metrics/collectors/job.go:173
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] k8s.io/kube-state-metrics/collectors.(*jobCollector).collectJob(0xc420394bd0, 0xc4205cdaa0, 0x0, 0x0, 0x0, 0x0, 0xc4215c0c00, 0x1e, 0x0, 0x0, ...)
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] /usr/local/google/home/kawych/go-tools/src/k8s.io/kube-state-metrics/collectors/job.go:173 +0x1f2
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] k8s.io/kube-state-metrics/collectors.(*jobCollector).Collect(0xc420394bd0, 0xc4205cdaa0)
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] /usr/local/google/home/kawych/go-tools/src/k8s.io/kube-state-metrics/collectors/job.go:141 +0x12f
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] /usr/local/google/home/kawych/go-tools/src/k8s.io/kube-state-metrics/vendor/github.com/prometheus/client_golang/prometheus/registry.go:433 +0x61
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] created by k8s.io/kube-state-metrics/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
[prometheus-prometheus-kube-state-metrics-1085824839-jvcz4] /usr/local/google/home/kawych/go-tools/src/k8s.io/kube-state-metrics/vendor/github.com/prometheus/client_golang/prometheus/registry.go:431 +0x271
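For what it's worth, the trace points at collectJob in collectors/job.go:173. My guess (an assumption, I have not checked the v1.0.1 source) is that the collector calls .Unix() on a *metav1.Time field such as Status.StartTime, which is nil for jobs that have never started. A minimal Go sketch of that failure mode and the guard that avoids it (names, the metric label, and the k8s.io/api import paths are illustrative, not necessarily what v1.0.1 vendors):

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// startTimeUnix returns the job's start time as a Unix timestamp, guarding
// against the nil *metav1.Time carried by jobs that never started.
// Calling j.Status.StartTime.Unix() directly, without the nil check,
// is the kind of dereference I suspect is panicking here.
func startTimeUnix(j *batchv1.Job) (int64, bool) {
	if j.Status.StartTime == nil {
		return 0, false
	}
	return j.Status.StartTime.Unix(), true
}

func main() {
	j := &batchv1.Job{} // StartTime is nil, as for a job that never ran
	if ts, ok := startTimeUnix(j); ok {
		fmt.Println("job start time:", ts)
	} else {
		fmt.Println("start time not set, skipping metric")
	}

	// Once the field is populated, the guarded path reads it normally.
	now := metav1.Now()
	j.Status.StartTime = &now
	if ts, ok := startTimeUnix(j); ok {
		fmt.Println("job start time:", ts)
	}
}

If that is indeed the cause, it would also explain why dropping the list permission merely hides the crash: the collector never receives any Job objects to iterate over.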
If I remove the RBAC permissions and just leave this
[prometheus-prometheus-kube-state-metrics-1085824839-npk9s] E0918 01:29:10.670107 1 reflector.go:201] k8s.io/kube-state-metrics/collectors/job.go:106: Failed to list *v1.Job: User "system:serviceaccount:devops:prometheus-prometheus-kube-state-metrics" cannot list jobs.batch at the cluster scope. (get jobs.batch)
in the logs, kube-state-metrics at least stops crashing.
Thanks
About this issue
- State: closed
- Created 7 years ago
- Comments: 15 (10 by maintainers)
Yeah, this new release looks to have resolved it.