kubernetes: `kubectl apply` (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object
Is this a BUG REPORT or FEATURE REQUEST?: /kind bug
Lists in API objects can define a named property that should act as a “merge key”. The value of that property is expected to be unique for each item in the list. However, gaps in API validation allow some types to be persisted with multiple items in the list sharing the same value for a mergeKey property.
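For example, in the published OpenAPI schema the `env` list of a container declares `name` as its merge key via vendor extensions (a paraphrased, abbreviated excerpt):

```yaml
# io.k8s.api.core.v1.Container (abbreviated OpenAPI excerpt)
env:
  type: array
  items:
    $ref: '#/definitions/io.k8s.api.core.v1.EnvVar'
  x-kubernetes-patch-merge-key: name    # items are identified by their "name" field
  x-kubernetes-patch-strategy: merge
```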
The algorithm used by `kubectl apply` detects removals from a list based on the specified key, and communicates each removal to the server using a delete directive that specifies only the key. When duplicate items exist, that delete directive is ambiguous, and the server implementation deletes all items with that key.
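To illustrate, when an env var named `x` is removed from a manifest, the strategic merge patch kubectl computes contains a delete directive roughly like the following (a sketch; the container name is illustrative, and the exact patch kubectl generates may include other fields):

```yaml
spec:
  template:
    spec:
      containers:
      - name: app            # containers are themselves merged by name
        env:
        - $patch: delete     # delete directive, keyed only on the merge key
          name: x            # matches *every* env entry named x, so duplicates are all removed
```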
Known API types/fields which define a mergeKey but allow duplicate items to be persisted:

PodSpec (affects all workload objects containing a pod template):
- hostAliases (#91670)
- imagePullSecrets (https://github.com/kubernetes/kubernetes/issues/91629)
- containers[*].env (this issue, https://github.com/kubernetes/kubernetes/issues/86163, https://github.com/kubernetes/kubernetes/issues/93266, https://github.com/kubernetes/kubernetes/issues/106809, https://github.com/kubernetes/kubernetes/issues/121541, https://github.com/kubernetes/kubernetes/issues/122121)
- containers[*].ports (#86273, https://github.com/kubernetes/kubernetes/issues/93952, https://github.com/kubernetes/kubernetes/issues/113246)
- volumes (https://github.com/kubernetes/kubernetes/issues/78266)
- containers[*].volumeMounts (https://github.com/kubernetes/kubernetes/pull/35071 changed the merge key from name to mountPath, which was a breaking change, but mountPath is at least required to be unique)

Service:
- ports (name+protocol required to be unique on create in https://github.com/kubernetes/kubernetes/pull/47336, but still has issues on update in https://github.com/kubernetes/kubernetes/issues/59119, #97883, and mergeKey is still only name, xref https://github.com/kubernetes/kubernetes/issues/47249)
Original report
===
What happened:
For a `deployment` resource: a container has a duplicated environment variable named `x` (there are two env vars with the same name; the value is also the same). When you fix the `deployment` resource descriptor so that the environment variable named `x` appears only once and push it with `kubectl apply`, a deployment with no environment variable named `x` is created, and therefore no environment variable named `x` is passed to the replica set and pods.
What you expected to happen:
After fixing the `deployment`, the environment variable named `x` is defined in the deployment once.
How to reproduce it (as minimally and precisely as possible):
- create a deployment with a container that has a duplicated environment variable, then `kubectl apply` it
- fix the deployment by removing one of the duplicated environment variable definitions, then `kubectl apply` it
- `kubectl get deployment/your-deployment -o yaml` prints the deployment without the environment variable
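A minimal manifest for the first step might look like this (the names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dup-env-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dup-env-demo
  template:
    metadata:
      labels:
        app: dup-env-demo
    spec:
      containers:
      - name: app
        image: nginx
        env:
        - name: x
          value: "1"
        - name: x     # duplicate entry; API validation accepts this
          value: "1"
```

After applying this, remove one of the two `x` entries and `kubectl apply` again; the live object ends up with no `x` entry at all.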
Anything else we need to know?: nope
Environment:
- Kubernetes version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: private Kubernetes cluster
- OS (e.g. from /etc/os-release): N/A
- Kernel (e.g. `uname -a`): N/A
- Install tools: N/A
- Others: N/A
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 33
- Comments: 31 (8 by maintainers)
@kiyutink for me there was.
Thanks @stevelacy. Everyone is experiencing this in production.
same issue with 1.20.6
I found the problem; after I removed the duplicate env vars, it works.
Use edit, patch, or replace, instead of apply.
Change the targetPort: https to http
For those interested, I “fixed” this in one of my files by removing all of the instances of the env var that was duplicated, applying that to the cluster, then adding it back (only once this time!) and applying that. Not the end solution for sure, but it works for those who want to clean their files up.