kubernetes: Apply service changes fails when load balancer is removed
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): "remove load balancer"
BUG REPORT
Kubernetes version (use `kubectl version`):

```
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:15:30Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:08:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
```
Environment:
- Cloud provider or hardware configuration: Google Cloud (GKE)
- OS (e.g. from `/etc/os-release`):

```
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.4.0
PRETTY_NAME="Alpine Linux v3.4"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
```
- Kernel (e.g. `uname -a`):

```
Linux ga-redis-cache 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 Linux
```
- Install tools: Google Cloud SDK
- Others:
What happened:
I have a service that I configured with an external load balancer with specific firewall rules.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
  labels:
    service: redis
    purpose: cache
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    service: redis
    purpose: cache
  type: LoadBalancer
  loadBalancerSourceRanges:
  # IP address changed for security; real IP address is routeable
  - 192.168.1.1/32
```
We were ready to remove the external access, so I changed the configuration to remove the load balancer:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
  labels:
    service: redis
    purpose: cache
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    service: redis
    purpose: cache
```
I then ran `kubectl apply -f` and got an error about an invalid port:
```
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
The Service "ga-redis-cache" is invalid.
spec.ports[0].nodePort: Invalid value: 30179: may not be used when `type` is 'ClusterIP'
```
What you expected to happen: I expected Kubernetes to remove the load balancer from the service and convert back to internal balancing for access from the cluster only.
How to reproduce it (as minimally and precisely as possible): See configuration files above.
Anything else we need to know: I tried this with another service and got the same result, just with a different port:
```
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
The Service "keylime-redis-cache" is invalid.
spec.ports[0].nodePort: Invalid value: 32537: may not be used when `type` is 'ClusterIP'
```
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 12
- Comments: 62 (31 by maintainers)
Commits related to this issue
- [STABLE/RABBITMQ-HA] spec.clusterIP: may not be set to 'None' for LoadBalancer `Service is invalid: spec.clusterIP: Invalid value: "None": may not be set to 'None' for LoadBalancer services` Relates... — committed to grebois/charts by grebois 6 years ago
- Bugfix: use workaround for Helm issue #33766 Downgrading a service from `LoadBalancer` to `ClusterIP` type fails. A workaround is to set an empty `nodePort` field under `spec.ports`. See: https://gi... — committed to RobertJovanov/mob-nsapp by De117 5 years ago
As we use this config in an automated Helm build for plenty of services, deleting each one of the releases wasn't an option. I found out that simply setting `nodePort` to an empty value does the job, so changing the config from `LoadBalancer` to `ClusterIP` required the following configuration.
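The configuration itself did not survive the page extraction. As a rough sketch of the workaround described above (service name, port, and selector borrowed from earlier in the thread), the key is an explicitly empty `nodePort` field so that `apply` clears the server-defaulted value:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
spec:
  type: ClusterIP
  ports:
  - port: 6379
    protocol: TCP
    nodePort: null   # explicitly empty: tells apply to remove the defaulted nodePort
  selector:
    service: redis
    purpose: cache
```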
In my case `kubectl apply -f <file.yml> --force` worked.
FTR: This is fixed in v1.20.x
I’m running into this with Helm. Same use case: downgrading an existing `LoadBalancer` Service to a `ClusterIP`.
Our solution was also to delete the old service before doing the `helm upgrade`.
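A minimal sketch of that workaround (the release and chart names here are illustrative, not from the thread):

```shell
# Remove the stale Service so the upgrade recreates it with the new type
kubectl delete service ga-redis-cache

# Re-run the upgrade; the chart now renders the Service as ClusterIP
helm upgrade my-release ./my-chart
```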
@AdoHe I just ran into this again. Any further thoughts?
I’m now running:
To resolve this I ended up having to `delete` the service and then `apply` the new configuration.

@GamiiisKth Following your suggestions, I can get it cleared up, but it still seems like a hack that I have to take two steps to get there. Unless I’m mistaken, that’s the same resolution @piotr-napadlek mentioned above.
First I created the service by applying this manifest. Then I ran `kubectl apply -f` with an updated manifest to empty the load balancer settings. Now I can `apply` my final desired manifest (without any load balancer details).

The issue is caused by defaulting: the API server defaults the field `spec.ports[0].nodePort`, and this field is not managed by `apply`. So when you remove `type: LoadBalancer`, `apply` doesn’t remove `spec.ports[0].nodePort`, and the validation then complains about that. The following is an example after defaulting.
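The example itself is missing from the archived page. After defaulting, the live object would look roughly like this (the `nodePort` value is taken from the error above; `clusterIP` is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
spec:
  clusterIP: 10.3.240.17      # defaulted by the API server (illustrative value)
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 192.168.1.1/32
  ports:
  - port: 6379
    protocol: TCP
    nodePort: 30179           # defaulted by the API server; never present in the applied manifest
  selector:
    service: redis
    purpose: cache
```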
You can use `kubectl edit` to remove `type: LoadBalancer`, `loadBalancerSourceRanges`, and `spec.ports[0].nodePort`. It should work. Or you can do `kubectl get`, modify, `kubectl replace`.
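A sketch of the `kubectl get`-modify-`kubectl replace` route (the file name is illustrative):

```shell
# Export the live object, including the server-defaulted fields
kubectl get service ga-redis-cache -o yaml > service.yml

# Edit service.yml by hand: delete type: LoadBalancer,
# loadBalancerSourceRanges, and spec.ports[0].nodePort
# (and drop the status section if replace complains about it)

# Push the cleaned-up definition back
kubectl replace -f service.yml
```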