kubernetes: Panic in garbage collector related to CRD types

/kind bug

What happened:

kube-controller-manager has started crashing due to a panic in the garbage collection code.

What you expected to happen:

kube-controller-manager should not panic
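
Arguably, even with an invalid metric descriptor, registration failure should surface as a logged error rather than take down the whole controller manager. A hedged sketch of what graceful handling could look like with client_golang (hypothetical handling on my part; the code path in the trace below uses MustRegister, which panics):

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	g := prometheus.NewGauge(prometheus.GaugeOpts{
		// The hyphen makes this name invalid; the constructor does not
		// validate it, but registration will reject it.
		Name: "bad-metric-name",
		Help: "example of an invalid metric name",
	})
	// Register returns the descriptor error instead of panicking the
	// way MustRegister does.
	if err := prometheus.Register(g); err != nil {
		log.Printf("skipping rate limiter metric: %v", err)
	}
}
```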

How to reproduce it (as minimally and precisely as possible):

TBC - I'm not yet sure what caused this state.

Anything else we need to know?:
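
Judging by the panic below, the garbage collector derives a Prometheus rate-limiter metric name from the owner's API group/version, replacing dots with colons. The group here, example.controller.code-generator.k8s.io, also contains a hyphen, which is not a legal metric-name character, so MustRegister panics. A minimal standalone sketch (my reconstruction of the name construction, not the actual kube-controller-manager code) that reproduces the bad name and checks it against the Prometheus metric-name pattern:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Prometheus metric names must match this pattern
// (see the Prometheus data model documentation).
var metricNameRE = regexp.MustCompile(`^[a-zA-Z_:][a-zA-Z0-9_:]*$`)

func main() {
	// Mirror what the GC appears to do: build a metric name from the
	// owner's API group and version, with dots replaced by colons.
	group := "example.controller.code-generator.k8s.io"
	version := "v1alpha1"
	name := fmt.Sprintf("garbage_collector_operation_%s_%s_rate_limiter_use",
		strings.Replace(group, ".", ":", -1), version)

	fmt.Println(name)
	// The hyphen in "code-generator" survives the dot replacement and
	// is not a legal metric-name character.
	fmt.Println("valid:", metricNameRE.MatchString(name)) // valid: false
}
```

If that reading is right, any CRD whose group contains a hyphen should be able to trigger this once the GC processes an object with an ownerReference into that group.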

Log messages from kube-controller-manager:

2017-09-20 16:48:48.040595 I | proto: duplicate proto type registered: google.protobuf.Any
2017-09-20 16:48:48.042075 I | proto: duplicate proto type registered: google.protobuf.Duration
2017-09-20 16:48:48.042469 I | proto: duplicate proto type registered: google.protobuf.Timestamp
I0920 16:48:48.216529       1 controllermanager.go:107] Version: v1.7.2+coreos.0
I0920 16:48:48.218775       1 leaderelection.go:179] attempting to acquire leader lease...
I0920 16:48:48.239680       1 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager
I0920 16:48:48.241256       1 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"bf15ea5f-73f7-11e7-9001-fe3b1ff65647", APIVersion:"v1", ResourceVersion:"7206916", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' node1.marley.xyz became leader
I0920 16:48:48.290978       1 plugins.go:106] External cloud provider specified
I0920 16:48:48.296924       1 controllermanager.go:429] Starting "deployment"
I0920 16:48:48.304181       1 controller_utils.go:994] Waiting for caches to sync for tokens controller
I0920 16:48:48.314990       1 controllermanager.go:439] Started "deployment"
I0920 16:48:48.315244       1 controllermanager.go:429] Starting "replicaset"
I0920 16:48:48.315077       1 deployment_controller.go:152] Starting deployment controller
I0920 16:48:48.316009       1 controller_utils.go:994] Waiting for caches to sync for deployment controller
I0920 16:48:48.322062       1 controllermanager.go:439] Started "replicaset"
I0920 16:48:48.322104       1 controllermanager.go:429] Starting "horizontalpodautoscaling"
I0920 16:48:48.322525       1 replica_set.go:156] Starting replica set controller
I0920 16:48:48.322664       1 controller_utils.go:994] Waiting for caches to sync for replica set controller
I0920 16:48:48.325979       1 controllermanager.go:439] Started "horizontalpodautoscaling"
I0920 16:48:48.326016       1 controllermanager.go:429] Starting "csrapproving"
I0920 16:48:48.327282       1 horizontal.go:145] Starting HPA controller
I0920 16:48:48.327407       1 controller_utils.go:994] Waiting for caches to sync for HPA controller
I0920 16:48:48.329104       1 controllermanager.go:439] Started "csrapproving"
I0920 16:48:48.329200       1 controllermanager.go:429] Starting "ttl"
I0920 16:48:48.329449       1 certificate_controller.go:110] Starting certificate controller
I0920 16:48:48.330481       1 controller_utils.go:994] Waiting for caches to sync for certificate controller
I0920 16:48:48.345619       1 controllermanager.go:439] Started "ttl"
I0920 16:48:48.345673       1 controllermanager.go:429] Starting "replicationcontroller"
I0920 16:48:48.347096       1 ttlcontroller.go:117] Starting TTL controller
I0920 16:48:48.347260       1 controller_utils.go:994] Waiting for caches to sync for TTL controller
I0920 16:48:48.348066       1 controllermanager.go:439] Started "replicationcontroller"
I0920 16:48:48.348101       1 controllermanager.go:429] Starting "daemonset"
I0920 16:48:48.348184       1 replication_controller.go:151] Starting RC controller
I0920 16:48:48.352333       1 controller_utils.go:994] Waiting for caches to sync for RC controller
I0920 16:48:48.350494       1 controllermanager.go:439] Started "daemonset"
I0920 16:48:48.352428       1 controllermanager.go:429] Starting "service"
I0920 16:48:48.350508       1 daemoncontroller.go:221] Starting daemon sets controller
I0920 16:48:48.352627       1 controller_utils.go:994] Waiting for caches to sync for daemon sets controller
E0920 16:48:48.354074       1 core.go:66] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
W0920 16:48:48.354106       1 controllermanager.go:436] Skipping "service"
I0920 16:48:48.354117       1 controllermanager.go:429] Starting "persistentvolume-binder"
I0920 16:48:48.356353       1 plugins.go:370] Loaded volume plugin "kubernetes.io/host-path"
I0920 16:48:48.356389       1 plugins.go:370] Loaded volume plugin "kubernetes.io/nfs"
I0920 16:48:48.356406       1 plugins.go:370] Loaded volume plugin "kubernetes.io/glusterfs"
I0920 16:48:48.356428       1 plugins.go:370] Loaded volume plugin "kubernetes.io/rbd"
I0920 16:48:48.356448       1 plugins.go:370] Loaded volume plugin "kubernetes.io/quobyte"
I0920 16:48:48.356470       1 plugins.go:370] Loaded volume plugin "kubernetes.io/azure-file"
I0920 16:48:48.356497       1 plugins.go:370] Loaded volume plugin "kubernetes.io/flocker"
I0920 16:48:48.356518       1 plugins.go:370] Loaded volume plugin "kubernetes.io/portworx-volume"
I0920 16:48:48.356609       1 plugins.go:370] Loaded volume plugin "kubernetes.io/scaleio"
I0920 16:48:48.356632       1 plugins.go:370] Loaded volume plugin "kubernetes.io/local-volume"
I0920 16:48:48.356654       1 plugins.go:370] Loaded volume plugin "kubernetes.io/storageos"
I0920 16:48:48.356721       1 controllermanager.go:439] Started "persistentvolume-binder"
I0920 16:48:48.356738       1 controllermanager.go:429] Starting "attachdetach"
I0920 16:48:48.356851       1 pv_controller_base.go:271] Starting persistent volume controller
I0920 16:48:48.356963       1 controller_utils.go:994] Waiting for caches to sync for persistent volume controller
I0920 16:48:48.357662       1 plugins.go:370] Loaded volume plugin "kubernetes.io/aws-ebs"
I0920 16:48:48.357689       1 plugins.go:370] Loaded volume plugin "kubernetes.io/gce-pd"
I0920 16:48:48.357713       1 plugins.go:370] Loaded volume plugin "kubernetes.io/cinder"
I0920 16:48:48.357733       1 plugins.go:370] Loaded volume plugin "kubernetes.io/portworx-volume"
I0920 16:48:48.357758       1 plugins.go:370] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0920 16:48:48.357786       1 plugins.go:370] Loaded volume plugin "kubernetes.io/azure-disk"
I0920 16:48:48.357809       1 plugins.go:370] Loaded volume plugin "kubernetes.io/photon-pd"
I0920 16:48:48.357825       1 plugins.go:370] Loaded volume plugin "kubernetes.io/scaleio"
I0920 16:48:48.357842       1 plugins.go:370] Loaded volume plugin "kubernetes.io/storageos"
I0920 16:48:48.358076       1 controllermanager.go:439] Started "attachdetach"
I0920 16:48:48.358089       1 controllermanager.go:429] Starting "garbagecollector"
I0920 16:48:48.358322       1 attach_detach_controller.go:242] Starting attach detach controller
I0920 16:48:48.358349       1 controller_utils.go:994] Waiting for caches to sync for attach detach controller
E0920 16:48:48.388064       1 graph_builder.go:204] no matches for {monitoring.coreos.com v1alpha1 servicemonitors}. If {monitoring.coreos.com v1alpha1 servicemonitors} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.388132       1 graph_builder.go:204] no matches for {rook.io v1alpha1 clusters}. If {rook.io v1alpha1 clusters} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.390357       1 graph_builder.go:204] no matches for {example.controller.code-generator.k8s.io v1alpha1 foos}. If {example.controller.code-generator.k8s.io v1alpha1 foos} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.391634       1 graph_builder.go:204] no matches for {certmanager.k8s.io v1alpha1 certificates}. If {certmanager.k8s.io v1alpha1 certificates} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.392292       1 graph_builder.go:204] no matches for {monitoring.coreos.com v1alpha1 alertmanagers}. If {monitoring.coreos.com v1alpha1 alertmanagers} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.393254       1 graph_builder.go:204] no matches for {rook.io v1alpha1 pools}. If {rook.io v1alpha1 pools} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.396471       1 graph_builder.go:204] no matches for {monitoring.coreos.com v1alpha1 prometheuses}. If {monitoring.coreos.com v1alpha1 prometheuses} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
E0920 16:48:48.397037       1 graph_builder.go:204] no matches for {certmanager.k8s.io v1alpha1 issuers}. If {certmanager.k8s.io v1alpha1 issuers} is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector.
I0920 16:48:48.397825       1 controllermanager.go:439] Started "garbagecollector"
I0920 16:48:48.397870       1 controllermanager.go:429] Starting "resourcequota"
I0920 16:48:48.397990       1 garbagecollector.go:123] Starting garbage collector controller
I0920 16:48:48.398102       1 controller_utils.go:994] Waiting for caches to sync for garbage collector controller
I0920 16:48:48.399014       1 controllermanager.go:439] Started "resourcequota"
I0920 16:48:48.399035       1 controllermanager.go:429] Starting "namespace"
I0920 16:48:48.399412       1 resource_quota_controller.go:241] Starting resource quota controller
I0920 16:48:48.399444       1 controller_utils.go:994] Waiting for caches to sync for resource quota controller
I0920 16:48:48.404795       1 controller_utils.go:1001] Caches are synced for tokens controller
I0920 16:48:48.441084       1 controllermanager.go:439] Started "namespace"
I0920 16:48:48.441135       1 controllermanager.go:429] Starting "serviceaccount"
E0920 16:48:48.443144       1 util.go:45] Metric for serviceaccount_controller already registered
I0920 16:48:48.443204       1 controllermanager.go:439] Started "serviceaccount"
I0920 16:48:48.443228       1 controllermanager.go:429] Starting "disruption"
I0920 16:48:48.445173       1 controllermanager.go:439] Started "disruption"
I0920 16:48:48.445204       1 controllermanager.go:429] Starting "cronjob"
I0920 16:48:48.446044       1 controllermanager.go:439] Started "cronjob"
I0920 16:48:48.446077       1 controllermanager.go:429] Starting "node"
W0920 16:48:48.446088       1 core.go:76] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
W0920 16:48:48.446101       1 core.go:80] Unsuccessful parsing of service CIDR : invalid CIDR address:
I0920 16:48:48.446771       1 serviceaccounts_controller.go:113] Starting service account controller
I0920 16:48:48.446942       1 controller_utils.go:994] Waiting for caches to sync for namespace controller
I0920 16:48:48.447058       1 nodecontroller.go:224] Sending events to api server.
I0920 16:48:48.447076       1 cronjob_controller.go:99] Starting CronJob Manager
I0920 16:48:48.447377       1 taint_controller.go:159] Sending events to api server.
I0920 16:48:48.447692       1 controllermanager.go:439] Started "node"
I0920 16:48:48.447724       1 controllermanager.go:429] Starting "endpoint"
I0920 16:48:48.448610       1 controllermanager.go:439] Started "endpoint"
I0920 16:48:48.448671       1 controllermanager.go:429] Starting "job"
I0920 16:48:48.448939       1 nodecontroller.go:481] Starting node controller
I0920 16:48:48.449136       1 controller_utils.go:994] Waiting for caches to sync for node controller
I0920 16:48:48.449879       1 endpoints_controller.go:136] Starting endpoint controller
I0920 16:48:48.449920       1 controller_utils.go:994] Waiting for caches to sync for endpoint controller
I0920 16:48:48.450057       1 controller_utils.go:994] Waiting for caches to sync for service account controller
I0920 16:48:48.450069       1 disruption.go:297] Starting disruption controller
I0920 16:48:48.451025       1 controller_utils.go:994] Waiting for caches to sync for disruption controller
I0920 16:48:48.451450       1 controllermanager.go:439] Started "job"
I0920 16:48:48.451483       1 controllermanager.go:429] Starting "statefulset"
I0920 16:48:48.452319       1 controllermanager.go:439] Started "statefulset"
I0920 16:48:48.452377       1 controllermanager.go:429] Starting "csrsigning"
I0920 16:48:48.453097       1 jobcontroller.go:133] Starting job controller
I0920 16:48:48.453133       1 controller_utils.go:994] Waiting for caches to sync for job controller
I0920 16:48:48.453536       1 stateful_set.go:147] Starting stateful set controller
I0920 16:48:48.453904       1 controller_utils.go:994] Waiting for caches to sync for stateful set controller
I0920 16:48:48.455909       1 controllermanager.go:439] Started "csrsigning"
W0920 16:48:48.455951       1 controllermanager.go:423] "bootstrapsigner" is disabled
W0920 16:48:48.455963       1 controllermanager.go:423] "tokencleaner" is disabled
I0920 16:48:48.455973       1 controllermanager.go:429] Starting "route"
W0920 16:48:48.455983       1 core.go:114] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
I0920 16:48:48.455992       1 core.go:130] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0920 16:48:48.456017       1 controllermanager.go:436] Skipping "route"
I0920 16:48:48.456027       1 controllermanager.go:429] Starting "podgc"
I0920 16:48:48.456403       1 certificate_controller.go:110] Starting certificate controller
I0920 16:48:48.456585       1 controller_utils.go:994] Waiting for caches to sync for certificate controller
I0920 16:48:48.457042       1 controllermanager.go:439] Started "podgc"
I0920 16:48:48.457209       1 gc_controller.go:76] Starting GC controller
I0920 16:48:48.457245       1 controller_utils.go:994] Waiting for caches to sync for GC controller
E0920 16:48:48.514539       1 actual_state_of_world.go:500] Failed to set statusUpdateNeeded to needed true because nodeName="node1.marley.xyz"  does not exist
E0920 16:48:48.514638       1 actual_state_of_world.go:514] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="node1.marley.xyz"  does not exist
I0920 16:48:48.566143       1 controller_utils.go:1001] Caches are synced for HPA controller
I0920 16:48:48.566259       1 controller_utils.go:1001] Caches are synced for certificate controller
I0920 16:48:48.568839       1 controller_utils.go:1001] Caches are synced for namespace controller
I0920 16:48:48.577794       1 controller_utils.go:1001] Caches are synced for TTL controller
I0920 16:48:48.578540       1 controller_utils.go:1001] Caches are synced for service account controller
I0920 16:48:48.580513       1 controller_utils.go:1001] Caches are synced for certificate controller
I0920 16:48:48.604307       1 controller_utils.go:1001] Caches are synced for resource quota controller
I0920 16:48:48.616295       1 controller_utils.go:1001] Caches are synced for deployment controller
I0920 16:48:48.627056       1 controller_utils.go:1001] Caches are synced for replica set controller
I0920 16:48:48.652007       1 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"example-foo", UID:"490c725e-9e23-11e7-8313-fe3b1ff65647", APIVersion:"extensions", ResourceVersion:"7206775", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set example-foo-3626207525 to 1
I0920 16:48:48.660616       1 controller_utils.go:1001] Caches are synced for attach detach controller
I0920 16:48:48.665948       1 controller_utils.go:1001] Caches are synced for node controller
I0920 16:48:48.666030       1 controller_utils.go:1001] Caches are synced for endpoint controller
I0920 16:48:48.666431       1 controller_utils.go:1001] Caches are synced for disruption controller
I0920 16:48:48.666451       1 disruption.go:305] Sending events to api server.
I0920 16:48:48.666533       1 controller_utils.go:1001] Caches are synced for RC controller
I0920 16:48:48.666775       1 controller_utils.go:1001] Caches are synced for daemon sets controller
I0920 16:48:48.681173       1 controller_utils.go:1001] Caches are synced for job controller
I0920 16:48:48.681326       1 controller_utils.go:1001] Caches are synced for stateful set controller
I0920 16:48:48.687872       1 controller_utils.go:1001] Caches are synced for persistent volume controller
I0920 16:48:48.698464       1 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"example-nginx", UID:"2e09ca2e-9d8f-11e7-8313-fe3b1ff65647", APIVersion:"extensions", ResourceVersion:"7206848", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set example-nginx-1402660473 to 0
I0920 16:48:48.707192       1 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"example-foo-3626207525", UID:"930fb38f-9e23-11e7-8313-fe3b1ff65647", APIVersion:"extensions", ResourceVersion:"7206921", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: example-foo-3626207525-4c6j6
I0920 16:48:48.708148       1 controller_utils.go:1001] Caches are synced for garbage collector controller
I0920 16:48:48.708200       1 garbagecollector.go:132] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0920 16:48:48.732586       1 nodecontroller.go:525] NodeController observed a new Node: "node1.marley.xyz"
I0920 16:48:48.732975       1 nodecontroller.go:542] Initializing eviction metric for zone:
W0920 16:48:48.733739       1 nodecontroller.go:877] Missing timestamp for Node node1.marley.xyz. Assuming now as a timestamp.
I0920 16:48:48.733818       1 nodecontroller.go:793] NodeController detected that zone  is now in state Normal.
I0920 16:48:48.751005       1 controller_utils.go:1001] Caches are synced for GC controller
I0920 16:48:48.753478       1 taint_controller.go:182] Starting NoExecuteTaintManager
E0920 16:48:48.768231       1 garbagecollector.go:169] Ignore syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"extensions/v1beta1", Kind:"Deployment", Name:"example-nginx", UID:"2e09ca2e-9d8f-11e7-8313-fe3b1ff65647", Controller:(*bool)(0xc4216f5278), BlockOwnerDeletion:(*bool)(0xc4216f5279)}, Namespace:"default"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{(*garbagecollector.node)(0xc420d948f0):struct {}{}}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"example.controller.code-generator.k8s.io/v1alpha1", Kind:"NGINX", Name:"example-nginx", UID:"b3048820-9d8b-11e7-8313-fe3b1ff65647", Controller:(*bool)(0xc42189e762), BlockOwnerDeletion:(*bool)(0xc42189e763)}}}: unable to get REST mapping for example.controller.code-generator.k8s.io/v1alpha1/NGINX. If example.controller.code-generator.k8s.io/v1alpha1/NGINX is a non-core resource (e.g. thirdparty resource, custom resource from aggregated apiserver), please note that the garbage collector doesn't support non-core resources yet. Once they are supported, object with ownerReferences referring non-existing non-core objects will be deleted by the garbage collector. If example.controller.code-generator.k8s.io/v1alpha1/NGINX is an invalid resource, then you should manually remove ownerReferences that refer example.controller.code-generator.k8s.io/v1alpha1/NGINX objects.
E0920 16:48:48.792422       1 runtime.go:66] Observed a panic: &errors.errorString{s:"descriptor Desc{fqName: \"garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use\", help: \"A metric measuring the saturation of the rate limiter for garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1\", constLabels: {}, variableLabels: []} is invalid: \"garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use\" is not a valid metric name"} (descriptor Desc{fqName: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use", help: "A metric measuring the saturation of the rate limiter for garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1", constLabels: {}, variableLabels: []} is invalid: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use" is not a valid metric name)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:514
/usr/local/go/src/runtime/panic.go:489
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus/registry.go:353
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/metrics/util.go:54
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/metrics/util.go:61
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/rate_limiter_helper.go:57
/usr/local/go/src/sync/once.go:44
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/rate_limiter_helper.go:59
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/operations.go:67
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:297
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:165
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:150
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:136
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52
/usr/local/go/src/runtime/asm_amd64.s:2197
panic: descriptor Desc{fqName: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use", help: "A metric measuring the saturation of the rate limiter for garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1", constLabels: {}, variableLabels: []} is invalid: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use" is not a valid metric name [recovered]
	panic: descriptor Desc{fqName: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use", help: "A metric measuring the saturation of the rate limiter for garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1", constLabels: {}, variableLabels: []} is invalid: "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use" is not a valid metric name

goroutine 1043 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x126
panic(0x4225d00, 0xc421df6cc0)
	/usr/local/go/src/runtime/panic.go:489 +0x2cf
k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc42039a3c0, 0xc421df6c60, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus/registry.go:353 +0x92
k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(0xc421df6c60, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152 +0x53
k8s.io/kubernetes/pkg/util/metrics.registerRateLimiterMetric(0xc421de4e10, 0x4d, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/metrics/util.go:54 +0x319
k8s.io/kubernetes/pkg/util/metrics.RegisterMetricAndTrackRateLimiterUsage(0xc421de4e10, 0x4d, 0x8facfe0, 0xc421d0a2c0, 0x3, 0xc421de4e10)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/metrics/util.go:61 +0x39
k8s.io/kubernetes/pkg/controller/garbagecollector.(*RegisteredRateLimiter).registerIfNotPresent.func1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/rate_limiter_helper.go:57 +0x223
sync.(*Once).Do(0xc4202b69d0, 0xc421dbb820)
	/usr/local/go/src/sync/once.go:44 +0xbe
k8s.io/kubernetes/pkg/controller/garbagecollector.(*RegisteredRateLimiter).registerIfNotPresent(0xc420332370, 0xc42167d440, 0x28, 0xc42167d469, 0x8, 0xc421cdd020, 0x4b30b50, 0x1b)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/rate_limiter_helper.go:59 +0x13a
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).getObject(0xc4200a00e0, 0xc42167d440, 0x31, 0xc4215f9268, 0x3, 0xc4215f9290, 0xb, 0xc421601ef0, 0x24, 0xc42189e4d8, ...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/operations.go:67 +0x14b
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).attemptToDeleteItem(0xc4200a00e0, 0xc42108ac30, 0xc420e444c0, 0x47acd60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:297 +0x136
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).attemptToDeleteWorker(0xc4200a00e0, 0x50d800)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:165 +0xeb
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).runAttemptToDeleteWorker(0xc4200a00e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:150 +0x2b
k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).(k8s.io/kubernetes/pkg/controller/garbagecollector.runAttemptToDeleteWorker)-fm()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:136 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc421b9db00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc421b9db00, 0x3b9aca00, 0x0, 0x1, 0xc4200a5f80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc421b9db00, 0x3b9aca00, 0xc4200a5f80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52 +0x4d
created by k8s.io/kubernetes/pkg/controller/garbagecollector.(*GarbageCollector).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/garbagecollector/garbagecollector.go:136 +0x2e4
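
The trace bottoms out in registerIfNotPresent (rate_limiter_helper.go:57), which calls RegisterMetricAndTrackRateLimiterUsage (util.go:61) and then prometheus.MustRegister (registry.go:353) with the name derived above. One possible mitigation (a sketch under my own assumptions, not necessarily the right upstream fix) would be to sanitize the derived name before registering it, mapping every character that is illegal in a metric name to an underscore:

```go
package main

import (
	"fmt"
	"regexp"
)

// Replace anything outside the legal metric-name alphabet. (A fully
// correct sanitizer would also guard the first character, which must
// not be a digit.)
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

func sanitizeMetricName(name string) string {
	return invalidChars.ReplaceAllString(name, "_")
}

func main() {
	raw := "garbage_collector_operation_example:controller:code-generator:k8s:io_v1alpha1_rate_limiter_use"
	fmt.Println(sanitizeMetricName(raw))
	// garbage_collector_operation_example:controller:code_generator:k8s:io_v1alpha1_rate_limiter_use
}
```

With something like that, the hyphenated group above would register as code_generator instead of panicking the controller manager.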

Environment:

  • Kubernetes version (use kubectl version): v1.7.2+coreos.0
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): CoreOS 1465.7.0
  • Kernel (e.g. uname -a): 4.12.10-coreos
  • Install tools:
  • Others:

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 22 (22 by maintainers)

Most upvoted comments

updated the linked PR