kubernetes: Service is not within the service CIDR please recreate
What happened: All the services are being asked to be recreated because the service CIDR changed after upgrading from 1.16.3 to 1.17.0
{
  "_index": "logstash-2019.12.20",
  "_type": "_doc",
  "_id": "kG2RJW8BRYpt1XnHuKKz",
  "_version": 1,
  "_score": null,
  "_source": {
    "log": "E1220 23:08:20.390973 1 repair.go:247] the cluster IP 10.101.249.43 for service filebeat-metrics/logging is not within the service CIDR 10.96.0.0/12; please recreate\n",
    "stream": "stderr",
    "docker": {
      "container_id": "b4f07a8b25dfcc6288e844b7c8c862f91597e2eefd128c5f92e30b75f7e74fe4"
    },
    "kubernetes": {
      "container_name": "kube-apiserver",
      "namespace_name": "kube-system",
      "pod_name": "kube-apiserver-MY-HOST",
      "container_image": "k8s.gcr.io/kube-apiserver:v1.17.0",
      "container_image_id": "docker-pullable://k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662",
      "pod_id": "4b59729f-b76f-4d6d-aa12-b42ad485a78b",
      "host": "MY-HOST",
      "labels": {
        "component": "kube-apiserver",
        "tier": "control-plane"
      },
      "master_url": "https://10.96.0.1:443/api",
      "namespace_id": "be0de548-c2bd-4279-9c94-190e52600373",
      "namespace_labels": {
        "field_cattle_io/projectId": "p-dlv86"
      }
    },
    "@timestamp": "2019-12-20T23:08:20.391038461+00:00",
    "tag": "kubernetes.var.log.containers.kube-apiserver-MY-HOST_kube-system_kube-apiserver-b4f07a8b25dfcc6288e844b7c8c862f91597e2eefd128c5f92e30b75f7e74fe4.log"
  },
  "fields": {
    "@timestamp": [
      "2019-12-20T23:08:20.391Z"
    ]
  },
  "highlight": {
    "log": [
      "E1220 23:08:20.390973 1 repair.go:247] the cluster IP 10.101.249.43 for service filebeat-metrics/logging is not within the service CIDR 10.96.0.0/12; @kibana-highlighted-field@please@/kibana-highlighted-field@ @kibana-highlighted-field@recreate@/kibana-highlighted-field@"
    ],
    "stream": [
      "@kibana-highlighted-field@stderr@/kibana-highlighted-field@"
    ],
    "kubernetes.pod_name": [
      "@kibana-highlighted-field@kube@/kibana-highlighted-field@-@kibana-highlighted-field@apiserver@/kibana-highlighted-field@-@kibana-highlighted-field@MY-HOST@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1576883300391
  ]
}
What you expected to happen:
Smooth migration from 1.16.3 to 1.17.0, which would not break the service CIDR.
How to reproduce it (as minimally and precisely as possible): Do an upgrade from version 1.16.3 to 1.17.0
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
baremetal
- OS (e.g. cat /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
- Kernel (e.g. uname -a):
Linux MY-HOST 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 34 (25 by maintainers)
We should note that 10.101.249.43 belongs to the service subnet range:
10.96.0.0/12 = 10.96.0.1 - 10.111.255.254
so at minimum the error message is wrong.
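As a sanity check (not part of the original report), a minimal Go snippet using only the standard library confirms that the rejected IP is inside the configured CIDR:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The service CIDR from the apiserver log and the cluster IP it rejected.
	_, serviceCIDR, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	clusterIP := net.ParseIP("10.101.249.43")

	// Contains reports whether the CIDR includes the IP; this prints "true",
	// so the "not within the service CIDR" message is incorrect for this IP.
	fmt.Println(serviceCIDR.Contains(clusterIP))
}
```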
/kind bug
this is a bug in core: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/ipallocator/controller/repair.go#L244-L247
this call in the repair controller returns ErrNotInRange for an IP that is actually in the CIDR range: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/ipallocator/controller/repair.go#L228
this also indicates a missing unit test case.
latest commits: https://github.com/kubernetes/kubernetes/commits/master/pkg/registry/core/service/ipallocator
this might have regressed with the dual-stack changes? https://github.com/kubernetes/kubernetes/pull/79386 cc @khenidak
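To make the suspected failure mode concrete, here is a minimal, self-contained Go sketch; it is not the actual repair controller or allocator code, and the per-family lookup it shows is purely hypothetical, but it illustrates how selecting the allocator for the wrong IP family would report an in-range IPv4 cluster IP as out of range:

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// ErrNotInRange stands in for the allocator error seen in the log; everything
// else in this file is a hypothetical illustration, not the real repair code.
var ErrNotInRange = errors.New("the provided IP is not in the valid range")

// allocator is a toy stand-in that only knows its CIDR.
type allocator struct{ cidr *net.IPNet }

func (a allocator) check(ip net.IP) error {
	if !a.cidr.Contains(ip) {
		return ErrNotInRange
	}
	return nil
}

func main() {
	_, v4CIDR, _ := net.ParseCIDR("10.96.0.0/12")
	_, v6CIDR, _ := net.ParseCIDR("fd00::/108")

	byFamily := map[string]allocator{
		"IPv4": {v4CIDR},
		"IPv6": {v6CIDR},
	}

	clusterIP := net.ParseIP("10.101.249.43")

	// Correct family selection: the IPv4 allocator accepts the in-range IP.
	fmt.Println(byFamily["IPv4"].check(clusterIP)) // <nil>

	// Wrong family selection: the same in-range IP is reported as not in range,
	// which matches the symptom even though the service CIDR never changed.
	fmt.Println(byFamily["IPv6"].check(clusterIP)) // the provided IP is not in the valid range
}
```

If that is the regression, a unit test feeding an in-range IP of each family through the repair path would have caught it, which lines up with the missing test case noted above.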
@kubernetes/sig-network-bugs
/remove-triage support needs-information
/sig network
/remove-sig cluster-lifecycle
ah, ok. If they are using kubeadm, that default also did not change between 1.16.0 and 1.17.0: