kubernetes: Unable to remove old endpoints from kubernetes service: StorageError

OS: Ubuntu 16.04.2 LTS
uname: Linux node01 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
docker: 1.13.1
kubeadm: v1.14.0 on katacoda.com

The kube-apiserver’s log prints this:

controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.16.1.6, ResourceVersion: 0, AdditionalErrorMsg:

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 23
  • Comments: 69 (24 by maintainers)

Most upvoted comments

Your cluster was not clean before you installed Kubernetes; you had most likely deployed Kubernetes on this environment before. Cleaning your environment and re-deploying should resolve this issue.
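
A minimal cleanup sketch, assuming the cluster was set up with kubeadm (paths and steps may differ in your environment):

sudo kubeadm reset -f                      # tears down the control plane created by kubeadm init/join (assumes kubeadm was used)
sudo rm -rf /etc/cni/net.d /var/lib/etcd   # removes leftover CNI config and etcd data from the previous deployment
sudo kubeadm init                          # re-deploy on the clean environment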

In a small number of cases this is caused by IPv6; try this:

cat > /etc/sysctl.d/k8s-ipv6.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
EOF
sysctl --system

If that does not work, try the opposite:

cat > /etc/sysctl.d/k8s-ipv6.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl --system
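
Either way, it is worth verifying which address family the apiserver actually binds and uses. A quick check, assuming the default secure port 6443:

ss -lntp | grep 6443              # shows whether the apiserver listens on an IPv4 or an IPv6 address
kubectl get endpoints kubernetes  # the kubernetes service endpoint should be the expected IPv4 address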

It is incredible that this problem has existed for more than 2 years and has not been solved. I encountered this error when deploying k8s from binaries; I tried v1.18 through v1.21 and checked all the search results on Google. Unfortunately, after 3 days I still could not solve it.

Hmm, the error is just informative. On a new installation the lease does not exist yet, so the error is thrown, and the lease is added later:

E1006 21:00:24.824254       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.18.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I1006 21:00:24.842311       1 controller.go:611] quota admission added evaluator for: namespaces
I1006 21:00:24.859171       1 shared_informer.go:247] Caches are synced for node_authorizer 
I1006 21:00:24.922297       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1006 21:00:24.922415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1006 21:00:24.922495       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1006 21:00:24.922664       1 apf_controller.go:299] Running API Priority and Fairness config worker
I1006 21:00:24.922750       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1006 21:00:24.922761       1 cache.go:39] Caches are synced for autoregister controller
I1006 21:00:25.821256       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1006 21:00:25.821316       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1006 21:00:25.836440       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1006 21:00:25.845692       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1006 21:00:25.845747       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1006 21:00:26.037974       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1006 21:00:26.052262       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1006 21:00:26.185425       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.18.0.2]

This error in the log is a red herring; most of the cases reported are related to deployment issues, wrong IP family, KIND cluster restarts, …

I’m going to close this to avoid turning this issue into a magnet. TL;DR: “Unable to remove old endpoints from kubernetes service” is not a problem by itself; if the apiserver doesn’t start, check further in the logs, the real reason has to be there. /close
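
For reference, a sketch of how to dig further into the apiserver logs on a control plane like the one above (the Docker runtime and kubeadm-style pod naming are assumptions; adjust for your setup):

docker ps -a | grep kube-apiserver                       # find the apiserver container, even if it has exited
docker logs <container-id>                               # read its full log for the real startup error
kubectl -n kube-system logs kube-apiserver-<node-name>   # alternative if the apiserver is up enough to serve requests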

Not sure what is causing this. I had 3 VirtualBox VMs on Windows 10 amd64, and this issue happened after restarts. I spent days trying to figure things out, including trying to set Kubernetes up on different Linux distros.

Then I switched to VMware Player, and there has been no issue so far. Again, my scenario is arbitrary, but if anyone has the same setup as me, it is probably worth giving VMware a try.

Same problem here.

The issue in my case was that the k8s api-server was using an IPv6 address ([::1]) which was not working. I disabled IPv6 and it fixed the issue for me:

## Disable ipv6
cat >> /etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl -p
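
If the apiserver was picking up ::1, it can also help to confirm that it advertises an explicit IPv4 address. A quick check for kubeadm-based installs (the manifest path below assumes the standard static pod location):

grep advertise-address /etc/kubernetes/manifests/kube-apiserver.yaml
# should show something like --advertise-address=172.16.1.6 rather than an IPv6 address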