kubernetes: Applying service changes fails when the load balancer is removed

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): 'remove load balancer'


BUG REPORT

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:15:30Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.7", GitCommit:"a2cba278cba1f6881bb0a7704d9cac6fca6ed435", GitTreeState:"clean", BuildDate:"2016-09-12T23:08:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Google Cloud (GKE)
  • OS (e.g. from /etc/os-release):
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.4.0
PRETTY_NAME="Alpine Linux v3.4"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
  • Kernel (e.g. uname -a):
Linux ga-redis-cache 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 Linux
  • Install tools: Google Cloud SDK
  • Others:

What happened:

I have a service that I configured with an external load balancer with specific firewall rules.

apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
  labels:
    service: redis
    purpose: cache
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    service: redis
    purpose: cache
  type: LoadBalancer
  loadBalancerSourceRanges:
  # IP address changed for security; real IP address is routeable
  - 192.168.1.1/32

We were ready to remove external access, so I changed the configuration to remove the load balancer:

apiVersion: v1
kind: Service
metadata:
  name: ga-redis-cache
  labels:
    service: redis
    purpose: cache
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    service: redis
    purpose: cache

I then ran kubectl apply -f and got an error about an invalid nodePort:

proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
The Service "ga-redis-cache" is invalid.
spec.ports[0].nodePort: Invalid value: 30179: may not be used when `type` is 'ClusterIP'
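
For context, the rejected value can be seen on the live object, where a nodePort was defaulted when the Service became a LoadBalancer. A quick check, assuming access to the cluster:

# The live Service carries a defaulted nodePort that the manifest never set.
kubectl get service ga-redis-cache -o yaml | grep nodePort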

What you expected to happen: I expected Kubernetes to remove the load balancer and convert the service back to a ClusterIP service, accessible only from within the cluster.

How to reproduce it (as minimally and precisely as possible): See configuration files above.

Anything else do we need to know: I tried this with another service and got the same result, just with a different nodePort:

proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
The Service "keylime-redis-cache" is invalid.
spec.ports[0].nodePort: Invalid value: 32537: may not be used when `type` is 'ClusterIP'

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 12
  • Comments: 62 (31 by maintainers)

Most upvoted comments

As we use this config in an automated Helm build for plenty of services, deleting each of the releases wasn't an option. I found out that simply setting nodePort to an empty value does the job, so changing from LoadBalancer to ClusterIP required the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: boarding-service
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    nodePort:
  selector:
    app: boarding-service
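
If it helps, a sketch of applying and verifying the change (the file name here is hypothetical):

# Apply the updated manifest, then check that the type actually flipped.
kubectl apply -f boarding-service.yaml
kubectl get service boarding-service -o jsonpath='{.spec.type}'   # should print ClusterIP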

In my case “kubectl apply -f <file.yml> --force” worked.

FTR: This is fixed in v1.20.x

I'm running into this with Helm. Same use case: downgrading an existing LoadBalancer Service to a ClusterIP.

❯ helm upgrade --install --wait sonar-cicd charts/sonarqube --values charts/sonarqube/custom.values.yaml
Error: UPGRADE FAILED: Service "sonar-cicd-sonarqube" is invalid: spec.ports[0].nodePort: Invalid value: 32057: may not be used when `type` is 'ClusterIP'

Our solution was also to delete the old service before doing the helm upgrade.
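
For reference, a sketch of that workaround using the names from the error above (note the service is briefly unavailable while it is recreated):

# Delete the stale Service, then let Helm recreate it as ClusterIP.
kubectl delete service sonar-cicd-sonarqube
helm upgrade --install --wait sonar-cicd charts/sonarqube --values charts/sonarqube/custom.values.yaml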

@AdoHe I just ran into this again. Any further thoughts?

I’m now running:

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

To resolve this I ended up having to delete the service and then apply the new configuration.
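
Concretely, that looked something like this (the manifest file name is hypothetical):

# Delete the old Service, then apply the LoadBalancer-free manifest.
kubectl delete service ga-redis-cache
kubectl apply -f ga-redis-cache.yaml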

@GamiiisKth Following your suggestions, I can get it cleared up, but it still seems like a hack that I have to take two steps to get there. Unless I'm mistaken, that's the same resolution @piotr-napadlek mentioned above.

First I created the service by applying this manifest:

apiVersion: v1
kind: Service
metadata:
  name: test-lb-change
spec:
  ports:
  - port: 6379
    protocol: TCP
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 8.8.8.8/32

Then I ran kubectl apply -f with an updated manifest that empties the load balancer settings:

apiVersion: v1
kind: Service
metadata:
  name: test-lb-change
spec:
  ports:
  - port: 6379
    protocol: TCP
    nodePort:
  loadBalancerSourceRanges:

Now I can apply my final desired manifest (without any load balancer details):

apiVersion: v1
kind: Service
metadata:
  name: test-lb-change
spec:
  ports:
  - port: 6379
    protocol: TCP
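
To confirm the defaulted field is really gone after that final apply, something like this should print an empty value (a sketch; jsonpath output may vary slightly by kubectl version):

# An empty result means the stray nodePort has been cleared.
kubectl get service test-lb-change -o jsonpath='{.spec.ports[0].nodePort}'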

The issue is caused by defaulting: the API server defaults the field spec.ports[0].nodePort, and this field is not managed by apply. So when you remove type: LoadBalancer, apply doesn't remove spec.ports[0].nodePort, and validation then complains because a nodePort may not be set when type is ClusterIP.

The following is an example of the live object after defaulting:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"test-app","service":"redis"},"name":"test-redis-cache","namespace":"default"},"spec":{"loadBalancerSourceRanges":["8.8.8.8/32"],"ports":[{"port":6379,"protocol":"TCP"}],"selector":{"app":"test-app","service":"redis"},"type":"LoadBalancer"}}
  creationTimestamp: 2018-01-29T18:20:07Z
  labels:
    app: test-app
    service: redis
  name: test-redis-cache
  namespace: default
  resourceVersion: "5623881"
  selfLink: /api/v1/namespaces/default/services/test-redis-cache
  uid: 08a77b02-0521-11e8-8149-42010a800002
spec:
  clusterIP: 10.0.190.43
  externalTrafficPolicy: Cluster
  loadBalancerSourceRanges:
  - 8.8.8.8/32
  ports:
  - nodePort: 31854 # this got defaulted.
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: test-app
    service: redis
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.188.39.12

You can use kubectl edit to remove type: LoadBalancer, loadBalancerSourceRanges and spec.ports[0].nodePort; that should work. Or you can do a kubectl get, modify the output, and kubectl replace.
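
For anyone who prefers a one-shot command, a strategic merge patch can clear the defaulted field explicitly, since entries in ports merge on the port key and a null value deletes a field. A sketch against the test-lb-change service from above (not verified on every version):

# Switch the type and null out the fields that apply won't remove.
kubectl patch service test-lb-change -p '{"spec":{"type":"ClusterIP","loadBalancerSourceRanges":null,"ports":[{"port":6379,"nodePort":null}]}}'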