kubernetes: Umbrella Issue: (Known Issue) Service Duplicate Port Numbers + Patch Merge
The Service object has a known issue with respect to ports and patch merge. The issue can be reproduced as follows:
- Create a Service object like the following:
apiVersion: v1
kind: Service
metadata:
  name: udp-debug
spec:
  type: NodePort
  selector:
    app: udp-debug
  ports:
  - name: udp-debug
    protocol: UDP
    port: 31337
    nodePort: 31337
- Update the Service by adding another port that uses the same port number `31337` but a different protocol (the example below adds `tcp/31337`), using `kubectl apply` or go-client's patch merge strategies (a go-client sketch follows the repro):
apiVersion: v1
kind: Service
metadata:
  name: udp-debug
spec:
  type: NodePort
  selector:
    app: udp-debug
  ports:
  - name: tcp-debug
    protocol: TCP
    port: 31337
    nodePort: 31337
  - name: udp-debug
    protocol: UDP
    port: 31337
    nodePort: 31337
The resulting Service object will contain only one port (instead of the expected two Service ports):
apiVersion: v1
kind: Service
metadata:
  name: udp-debug
spec:
  type: NodePort
  selector:
    app: udp-debug
  ports:
  - name: tcp-debug
    protocol: TCP
    port: 31337
    nodePort: 31337
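A minimal go-client sketch of the patch path mentioned in the repro step above, assuming an already-constructed clientset (`cs`, `namespace`, and the function name are illustrative, not from the issue):

```go
package serviceports

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addTCPPortViaPatch tries to add tcp/31337 next to the existing udp/31337
// using a strategic merge patch. Because spec.ports is merged by port number
// alone, the server merges this entry into the existing UDP entry instead of
// appending it, reproducing the single-port result shown above.
func addTCPPortViaPatch(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	patch := []byte(`{"spec":{"ports":[{"name":"tcp-debug","protocol":"TCP","port":31337,"nodePort":31337}]}}`)
	_, err := cs.CoreV1().Services(namespace).Patch(
		ctx, "udp-debug", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```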
The issue is well known and is due to the following problem:
`Service.Spec.Ports` is annotated with `patchMergeKey=port` and `patchStrategy=merge` (ref: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L4096-L4100). When the API server performs the merge, it fails to identify the two ports (the TCP and UDP ports sharing the same port number) as two different ports. Note that server-side apply uses the correct keys (port and protocol) to perform merges.
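The same merge can be demonstrated locally with the strategic-merge-patch library; this is a small sketch (not from the issue) using `k8s.io/apimachinery`, with `corev1.Service` supplying the struct tags:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Live object: a single UDP port on 31337.
	original := []byte(`{"spec":{"ports":[{"name":"udp-debug","protocol":"UDP","port":31337,"nodePort":31337}]}}`)

	// Patch intended to add a TCP port on the same port number.
	patch := []byte(`{"spec":{"ports":[{"name":"tcp-debug","protocol":"TCP","port":31337,"nodePort":31337}]}}`)

	// spec.ports is merged using only patchMergeKey=port, so the TCP entry is
	// matched against the existing UDP entry and merged into it rather than
	// being appended as a second port.
	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Service{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
}
```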
The issue is reported here https://github.com/kubernetes/kubernetes/issues/105279 with lengthy commentary here https://github.com/kubernetes/kubernetes/issues/101393 . The api annotation cannot be updated because it will break existing clients as demonstrated by https://github.com/kubernetes/kubernetes/pull/104486
Solution and Workaround
If and only if the required patch on a Service introduces duplicate port numbers, the user must avoid `kubectl apply` and go-client merge patches. Instead, they should use one of the following approaches:
- The `kubectl apply --server-side` or `kubectl replace` commands.
- go-client update calls that replace the Service (this requires an HTTP `PUT` with the entire object specification; see the sketch below).
The same applies if and only if the user is trying to remove a port that shares its port number with another port on the same Service.
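A minimal sketch of the update-based workaround (HTTP PUT of the full object) using client-go; the clientset is assumed to exist already and the function name is illustrative:

```go
package serviceports

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addTCPPortViaUpdate replaces the entire Service via Update (HTTP PUT),
// which avoids the strategic-merge-patch key collision on spec.ports.
func addTCPPortViaUpdate(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	svc, err := cs.CoreV1().Services(namespace).Get(ctx, "udp-debug", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Add the TCP port alongside the existing UDP port with the same number.
	svc.Spec.Ports = append(svc.Spec.Ports, corev1.ServicePort{
		Name:     "tcp-debug",
		Protocol: corev1.ProtocolTCP,
		Port:     31337,
		NodePort: 31337,
	})

	// The full object is sent, so no client- or server-side list merge is involved.
	_, err = cs.CoreV1().Services(namespace).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
```

In practice the Get/modify/Update sequence is usually wrapped in client-go's retry.RetryOnConflict to handle resourceVersion conflicts.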
/sig network
Kubernetes version
All versions.
About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 5
- Comments: 21 (18 by maintainers)
Anywhere a request context is plumbed, warnings can be added to the response. See calls to `warning.AddWarning(` for examples: https://github.com/kubernetes/kubernetes/blob/194a3dea5de35dc819a49ccac4c0c34d35e39c39/staging/src/k8s.io/apiserver/pkg/registry/rest/create.go#L139-L141
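For illustration only (not something from the issue), a server-side warning emitted through that package could look roughly like this; the helper name and message text are made up:

```go
package servicewarnings

import (
	"context"

	"k8s.io/apiserver/pkg/warning"
)

// warnDuplicatePortNumber is a hypothetical helper: anywhere the request
// context is available, a warning can be attached to the API response.
func warnDuplicatePortNumber(ctx context.Context) {
	// An empty agent string uses the default warning agent.
	warning.AddWarning(ctx, "", "spec.ports has multiple entries sharing a port number; strategic merge patch may silently drop one of them")
}
```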
That said, I’m pretty sure the data loss is happening client-side (which is why this is ~impossible to fix in strategic merge patch), so the warning might have to be produced client-side as well. Applying the examples from the description, this is what kubectl is sending to the server:
Request 1:
Request 2 (patch request):
Here’s an option, maybe.
What if we added a new tag to this field, like `avoidMerge`, which can be used by enlightened clients to say “oh, I shouldn’t use this”. What they do with that information is situational.
e.g. `kubectl edit` could see that and fall back on `PUT` instead of `PATCH`.
e.g. `kubectl apply` could either fall back on `PUT` or just issue a warning that “this might not do what you want”.
Since it is on the `ports` field, any operation that doesn’t touch that field would be unchanged.
Existing clients who do not know this new tag just keep doing what they were doing. If it worked for them, awesome, if not, well…it’s not any worse.
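Purely as a hypothetical sketch of where such a tag could sit (avoidMerge is not a real Kubernetes tag, and the types below are trimmed-down stand-ins for the real ones):

```go
package hypothetical

// ServicePort is a trimmed-down stand-in for the real type.
type ServicePort struct {
	Name     string `json:"name,omitempty"`
	Protocol string `json:"protocol,omitempty"`
	Port     int32  `json:"port"`
	NodePort int32  `json:"nodePort,omitempty"`
}

// ServiceSpec shows the existing merge tags plus the proposed avoidMerge
// marker that enlightened clients (e.g. kubectl edit/apply) could detect
// and react to by falling back to PUT or warning.
type ServiceSpec struct {
	Ports []ServicePort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"port" avoidMerge:"true"`
}
```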
I dug into the code a bit. It looks like `apply` calls `strategicpatch.CreateThreeWayMergePatch()` and `edit` calls `strategicpatch.CreateTwoWayMergePatch()`. If those functions took an optional `respectMergeAvoidance` flag, the callers could fall back on more primitive methods.
There’s a variety of data loss scenarios when using strategic-merge-patch with key-conflicting items, and we want to warn for them all. At least these spring to mind: