kubernetes: apiserver allows duplicate service port

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: I was able to add a conflicting service port to an existing service. Attempting to remove the port by modifying the yml and re-applying it fails with: The Service "atest" is invalid: spec.ports: Required value

Note: Kubernetes 1.8 will not accept a service.yml with conflicting ports when the service is created for the first time

What you expected to happen: The server should reject the request and (in this case) kubectl should error.

How to reproduce it (as minimally and precisely as possible):

  • Created service.yml with a single port definition and successfully applied it
apiVersion: v1
kind: Service
metadata:
  name: atest
spec:
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  • Updated service.yml with an additional identical port definition and successfully applied it
apiVersion: v1
kind: Service
metadata:
  name: atest
spec:
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: dummy
  • Updated service.yml by removing the additional port but applying it failed

Anything else we need to know?: After adding the additional port, the service definition (on the server) still contains only the original port, but kubectl.kubernetes.io/last-applied-configuration now records both. This inconsistent state makes kubectl attempt to remove the port when it calculates the diff for its patch request.

Environment:

  • Kubernetes version (use kubectl version): 1.8
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): 4.4
  • Install tools:
  • Others:

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 42 (16 by maintainers)

Most upvoted comments

I’ve been investigating this issue for the past several days. The root cause appears to be a combination of the definition of service ports (kubernetes/api/openapi-spec/swagger.json:80248) and how patches are applied (strategicpatch.StrategicMergeMapPatch, called from kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/patch.go, applyPatchToObject).

A service may expose multiple ports. The merge process uses the port number as the key; it is not legal to define multiple ports within a service with the same port number. This can be verified by applying (or creating) the following service-bad.yaml:

apiVersion: v1
kind: Service
metadata:
  name: atest
spec:
  ports:
  - port: 8089
    targetPort: 89
    protocol: TCP
    name: http1
  - port: 8089
    targetPort: 89
    protocol: TCP
    name: dummy2

Attempting to apply it correctly fails:

$ kubectl apply -f service-bad.yaml
The Service "atest" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", Port:8089, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
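The duplicate check that produces this error can be sketched in Go. This is a simplified model, not the real ValidateService code: the ServicePort type and findDuplicatePorts function are illustrative stand-ins, and the real validator compares more fields (e.g. targetPort and nodePort) with the name zeroed out.

```go
package main

import "fmt"

// ServicePort mirrors only the fields relevant to this sketch
// (a simplified stand-in for core.ServicePort, not the real type).
type ServicePort struct {
	Name     string
	Protocol string
	Port     int32
}

// findDuplicatePorts returns the index of the first port whose
// identity (here, protocol + port) repeats an earlier entry,
// mimicking the "Duplicate value" rejection seen above.
// It returns -1 if all ports are distinct.
func findDuplicatePorts(ports []ServicePort) int {
	type key struct {
		Protocol string
		Port     int32
	}
	seen := map[key]bool{}
	for i, p := range ports {
		k := key{p.Protocol, p.Port}
		if seen[k] {
			return i
		}
		seen[k] = true
	}
	return -1
}

func main() {
	ports := []ServicePort{
		{Name: "http1", Protocol: "TCP", Port: 8089},
		{Name: "dummy2", Protocol: "TCP", Port: 8089},
	}
	fmt.Println(findDuplicatePorts(ports)) // the second entry collides
}
```

On create, this check runs against the full object, so the bad configuration is rejected; the sections below show why the patch path slips past it.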

However, if a good service (omitting the last four lines of the above .yaml) is first applied, and then patched using the bad service, the patch succeeds:

$ kubectl apply -f ~/service-good.yaml
service "atest" created
$ kubectl apply -f ~/service-bad.yaml
service "atest" configured

In this case, when ValidateServiceUpdate (in kubernetes/pkg/apis/core/validation/validation.go) is called, it calls ValidateService as it should, and validation succeeds! Why?

Applying a patch (which is what kubectl apply does) is different from creating a new object. A patch is a delta; it may not be a valid object in its own right, so we don't attempt to validate it. However, the merge code (mergeMap in the strategic merge package) doesn't know anything about "old" vs. "new"; it just applies the items in the patch according to the rules in the schema, which in this case specifies the port number as the key. Therefore, it leaves only one of the two port definitions intact; the result amounts to this:

apiVersion: v1
kind: Service
metadata:
  name: atest
spec:
  ports:
  - port: 8089
    targetPort: 89
    protocol: TCP
    name: dummy2

If we look at the data structure passed to the validator (removing uninteresting information), it looks like this:

core.Service{
	ObjectMeta:v1.ObjectMeta{Name:"atestaaaaaaaaaaaa",
		Annotations:map[string]string{
			"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"atestaaaaaaaaaaaa\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"http1111111111111\",\"port\":8089,\"protocol\":\"TCP\",\"targetPort\":89},{\"name\":\"dummy22222222222222\",\"port\":8089,\"protocol\":\"TCP\",\"targetPort\":89}]}}\n"
		}
	}
	Spec:core.ServiceSpec{
		Type:"ClusterIP",
		Ports:[]core.ServicePort{
			core.ServicePort{
				Name:"http1111111111111",
				Protocol:"TCP",
				Port:8089,
				TargetPort:intstr.IntOrString{
					Type:0,
					IntVal:89,
					StrVal:""
				},
				NodePort:0
			}
		}
	}
}

Notice that the actual configuration (with one port definition) is not consistent with the JSON body in the annotation. This has impact later on; we’ll get to that.

The next problem is that if we try to update the service by applying a good configuration, it fails:

$ kubectl apply -f ~/service-good.yaml
The Service "atestaaaaaaaaaaaa" is invalid: spec.ports: Required value

What does this mean? It means that the attempt to update the service has left it with no port definition. Why is that, since our configuration does specify a port? It's because of how kubectl apply works: it generates a patch against the current configuration. But note above that the annotation (the JSON representation of the configuration) is inconsistent with the working configuration. If we trace the request through kubectl apply --v=10, we see this exchange:

I0213 15:07:18.269965    2023 round_trippers.go:405] GET http://localhost:8080/api/v1/namespaces/default/services/atestaaaaaaaaaaaa 200 OK in 2 milliseconds
I0213 15:07:18.270091    2023 request.go:991] Response Body: {"kind":"Service","apiVersion":"v1","metadata":{"name":"atestaaaaaaaaaaaa","namespace":"default","selfLink":"/api/v1/namespaces/default/services/atestaaaaaaaaaaaa","uid":"6e3ade40-10f9-11e8-b9ba-54e1ad305916","resourceVersion":"270","creationTimestamp":"2018-02-13T20:06:51Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"atestaaaaaaaaaaaa\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"http1111111111111\",\"port\":8089,\"protocol\":\"TCP\",\"targetPort\":89},{\"name\":\"dummy22222222222222\",\"port\":8089,\"protocol\":\"TCP\",\"targetPort\":89}]}}\n"}},"spec":{"ports":[{"name":"http1111111111111","protocol":"TCP","port":8089,"targetPort":89}],"clusterIP":"10.0.0.64","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}}
I0213 15:07:18.270453    2023 request.go:991] Request Body: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"atestaaaaaaaaaaaa\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"http1111111111111\",\"port\":8089,\"protocol\":\"TCP\",\"targetPort\":89}]}}\n"}},"spec":{"$setElementOrder/ports":[{"port":8089}],"ports":[{"$patch":"delete","port":8089}]}}
I0213 15:07:18.270503    2023 round_trippers.go:386] curl -k -v -XPATCH  -H "User-Agent: kubectl/v1.7.3 (linux/amd64) kubernetes/2c2fe6e" -H "Content-Type: application/strategic-merge-patch+json" -H "Accept: application/json" http://localhost:8080/api/v1/namespaces/default/services/atestaaaaaaaaaaaa
I0213 15:07:18.272148    2023 round_trippers.go:405] PATCH http://localhost:8080/api/v1/namespaces/default/services/atestaaaaaaaaaaaa 422 Unprocessable Entity in 1 milliseconds

Notice here that the response includes both the annotation and the representation of the actual structure. According to kubernetes/pkg/kubectl/apply.go, GetOriginalConfiguration uses the annotation as the base for generating a patch. It presumably (I haven’t looked into exactly this part yet) sees two ports, notes that the desired configuration is for just one port, so it wants to delete one of the ports. Thus, the patch it sends back requests that port 8089 be deleted. The patch code in the API server proceeds to delete both ports, finds that the configuration is illegal, and replies with an error.
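One plausible way the client ends up emitting that $patch: delete entry can be sketched like this. This is a guess at the mechanism, consistent with the "presumably" above and with the trace, not a transcription of the real kubectl diff code: if a merge key occurs more often in the last-applied list than in the desired list, a delete directive is emitted for the surplus.

```go
package main

import "fmt"

// deleteDirectives sketches a hypothetical surplus-counting diff:
// count each merge key (the port number) in the last-applied list,
// subtract its occurrences in the desired list, and emit one
// {"$patch": "delete"} directive per leftover occurrence.
func deleteDirectives(lastApplied, desired []int32) []map[string]interface{} {
	count := map[int32]int{}
	for _, p := range lastApplied {
		count[p]++
	}
	for _, p := range desired {
		count[p]--
	}
	var dirs []map[string]interface{}
	for p, c := range count {
		for ; c > 0; c-- {
			dirs = append(dirs, map[string]interface{}{
				"$patch": "delete",
				"port":   p,
			})
		}
	}
	return dirs
}

func main() {
	lastApplied := []int32{8089, 8089} // annotation records two ports
	desired := []int32{8089}           // desired config has one
	fmt.Println(deleteDirectives(lastApplied, desired))
}
```

The directive keys only on the port number, so when the server applies it against a list where both entries share that key, both are removed, leaving spec.ports empty.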

So, what’s going on here? There are a few things, in no particular order:

  1. The patch is not properly validated. What “properly” means in the context of a patch is not clear, since a patch is not a complete object in its own right, but the patch does contain data that is affirmatively illegal (multiple ports with the same port number).

  2. The state of the underlying object (this doesn’t appear to be specific to services) gets out of sync with the service annotation. It wouldn’t get here if the patch were correctly validated, but if that is problematic, a final acceptance check might be to compare the annotation against the actual state and reject the patch if they don’t match up.

  3. kubectl (client) trusts the annotation without checking it against the object. Validation really needs to happen in the server, but in this case the client can check (after the fact) for an inconsistency and print a more useful error message.
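The "final acceptance check" suggested in point 2 could be sketched like this. The function name and the trimmed JSON shape are illustrative assumptions, not existing Kubernetes API; the idea is simply to compare the ports recorded in the last-applied-configuration annotation against the live spec and flag a mismatch.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// checkAnnotationConsistency (hypothetical) parses the
// last-applied-configuration annotation and checks that it records
// the same number of ports as the live object. A mismatch signals
// the inconsistent state described above; real code would compare
// the full port identities, not just the count.
func checkAnnotationConsistency(annotation string, livePorts []int32) (bool, error) {
	var lastApplied struct {
		Spec struct {
			Ports []struct {
				Port int32 `json:"port"`
			} `json:"ports"`
		} `json:"spec"`
	}
	if err := json.Unmarshal([]byte(annotation), &lastApplied); err != nil {
		return false, err
	}
	if len(lastApplied.Spec.Ports) != len(livePorts) {
		return false, nil // e.g. annotation has 2 ports, live object has 1
	}
	return true, nil
}

func main() {
	// Annotation as stored after the bad apply: two port entries.
	annotation := `{"spec":{"ports":[{"port":8089},{"port":8089}]}}`
	ok, err := checkAnnotationConsistency(annotation, []int32{8089})
	fmt.Println(ok, err)
}
```

Run server-side, such a check could reject the bad patch outright; run client-side, it could at least produce a clearer error message than "spec.ports: Required value".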

The API people should decide how this would best be fixed.

Is this still a problem if SSA is used (kubectl apply --server-side)? If so, what is the failure point now?

I confirm this still exists and the analysis by @RobertKrawitz is excellent and very plausible.

WORSE – the merge will treat two ports as identical even if they differ in protocol. Looking at types.go:

    // The list of ports that are exposed by this service.
    // More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    // +patchMergeKey=port
    // +patchStrategy=merge
    // +listType=map
    // +listMapKey=port
    // +listMapKey=protocol
    Ports []ServicePort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"port" protobuf:"bytes,1,rep,name=ports"`

So dear api-machinery people – how should we be approaching this? The correct key is really the protocol + port (at minimum), but even then the original problem exists. This should really be an error…