kubernetes: Inconsistent use of "configured" vs. "unchanged" when applying configuration files
Is this a BUG REPORT or FEATURE REQUEST?: bug report
/kind bug
What happened: When deploying an unchanged resource to Kubernetes using kubectl apply, output sometimes states [resource kind] "example" configured.
What you expected to happen: For the output to state [resource kind] "example" unchanged instead.
How to reproduce it (as minimally and precisely as possible): I attempted to search the codebase to determine how Kubernetes or kubectl decides whether to output “configured” vs. “unchanged” when applying a configuration file, but was unsuccessful. With a better understanding of how that determination is made, I could most likely provide more examples. There is one example now in the next paragraph.
Anything else we need to know?: There appear to be a number of inconsistencies in how Kubernetes determines whether to output that a resource has been “configured” or is “unchanged.” I’ll do my best to explain everything I could find.
When applying Deployments locally to Minikube, if the configuration file is created right before applying it with kubectl, the Deployment is always labeled “configured,” even if the contents are identical to the previously applied file. This does not happen with Services or Ingresses: their configuration files can also be created just before applying, and the output correctly states “configured” or “unchanged” depending on whether the configuration has actually changed. We thought that for Deployments the determination might be based on the file’s last modified time, so we forced a static last modified time before running kubectl apply, but that did not fix the issue. Therefore, we don’t believe the determination is based on the last modified time.
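For reference, a rough sketch of the reproduction described above (the file name and generator step are placeholders, not the exact commands we ran):

    # regenerate the manifest right before applying; contents identical to the previous run
    ./generate-manifests.sh > deployment.yml
    # force a static last modified time to rule out mtime-based detection
    touch -t 201801010000 deployment.yml
    # still reports the Deployment as "configured" rather than "unchanged"
    kubectl apply -f deployment.yml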
However, when applying Ingresses to GKE, they become affected by this issue, even though they weren’t affected when applying to Minikube. Services, on the other hand, still aren’t affected by this issue on GKE.
In addition, some resource kinds seem to always be considered “configured,” even when you apply the same file back-to-back. For example, if test.yml contains:
    apiVersion: v1
    kind: Namespace
    metadata:
      name: "test"
…and you run kubectl apply -f test.yml multiple times, it will always state namespace "test" configured even though the configuration file isn’t changing in any way. This seems to be true of CustomResourceDefinitions and StorageClasses too (probably others as well).
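Concretely, back-to-back applies look like this (output wording as described above; exact formatting varies by kubectl version):

    $ kubectl apply -f test.yml
    namespace "test" configured
    $ kubectl apply -f test.yml
    namespace "test" configured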
In general, what I’d like to see is consistent use of “configured” vs. “unchanged” when applying a resource via kubectl apply. This functionality appears to work now for some kinds of resources, but not all. You might be wondering why this is even important. When applying a large group of resource files, it’s very helpful to know which ones have actually changed and which haven’t. It would be immediately obvious when a resource changes that shouldn’t have, and you could investigate right away rather than waiting for the problem to manifest itself in other ways. As it stands now, so many resources are labeled “configured” that the statement isn’t useful or accurate to rely on.
Environment:
- Kubernetes version (use kubectl version): v1.10.0 (Minikube), v1.10.4-gke.2 (GKE)
- Cloud provider or hardware configuration: GKE
- OS (e.g. from /etc/os-release): Locally, macOS 10.13.5
- Kernel (e.g. uname -a):
- Install tools:
- Others:
About this issue
- State: closed
- Created 6 years ago
- Reactions: 31
- Comments: 53 (17 by maintainers)
This is also the case for StatefulSet resources that define volumeClaimTemplates. Even though the resource is unchanged, kubectl will still try to patch the claim template. It can be reproduced with a minimal yaml: the first apply reports the StatefulSet as created, and any subsequent kubectl apply reports it as “configured” rather than “unchanged.” Increasing verbosity shows that kubectl attempts to submit the following patch:

    {"spec":{"volumeClaimTemplates":[{"metadata":{"name":"foo"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}]}}

StatefulSets without volumeClaimTemplates obviously do not have this issue. I tried every combination of fields and formats for the volumeClaimTemplates I can imagine, but the result is always the same. Tested on minikube running version v1.14.0 and on GKE version 1.12.5-gke.5. The kubectl version used is v1.14.1.
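For reference, a minimal StatefulSet consistent with the patch quoted above; only the volumeClaimTemplates block is taken from the patch, while the metadata, selector, and container are assumed placeholders:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: example            # placeholder name
    spec:
      serviceName: example     # placeholder
      replicas: 1
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          containers:
          - name: example
            image: nginx       # placeholder image
      volumeClaimTemplates:
      - metadata:
          name: foo
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi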
This issue makes GitOps tools like Flux ( https://github.com/fluxcd ) and ArgoCD ( https://github.com/argoproj ) never finish their sync of resources. In other words, GitOps tools try to synchronize the desired state of the repository with the actual state of the cluster resources until kubectl apply reports “unchanged.” However, for some resources, especially custom resource definitions, kubectl will always report “configured.” Please, this needs to be fixed.
😄 if diff returns nothing, then nothing happened 😉
I don’t think this is resolved
Thanks
/remove-lifecycle stale
My own research, applying cert-manager directly:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml
That will cause a few resources to re-configure even though there are no changes.
I used @kautsig ’s technique and found the culprits. The manifest above has an annotations: map with nothing in it. It also has apiGroup: "" in other resources. These, I imagine, get dropped when applied to the server; kubectl get <resource> -o yaml does not show them. But if I run kubectl apply --server-dry-run or actually apply, it shows “configured.” Oddly enough, if I run kubectl diff it does not show any diffs, which means that apply and diff behave differently, directly contradicting this: https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ (last section). I’m not sure who is at fault here: cert-manager for using these no-op values, the Kubernetes server for dropping them, or the diff/apply logic for not accounting for these false positives?
FWIW: one of the empty fields was inside a deployment template, but the other was the RoleBinding.subjects.apiGroup field, so not inside a template at all.
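To illustrate the kind of no-op values described above (fragments only; the surrounding resources and names are assumptions, not copied from the cert-manager manifest):

    # empty annotations map inside a Deployment's pod template
    template:
      metadata:
        annotations: {}      # dropped by the API server, so the live object never matches

    # empty apiGroup on a RoleBinding subject
    subjects:
    - kind: ServiceAccount
      name: example          # placeholder
      namespace: default     # placeholder
      apiGroup: ""           # also dropped on the server side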
For those suffering from this issue, here is how to debug it:
- kubectl diff -f deployment.yaml. If this gives you an empty result, I would expect an “unchanged” message on apply.
- kubectl apply -f deployment.yaml -v=8 and look for the PATCH request. The line above it should contain the Request Body.
The issues we saw boil down to:
- an azureFile volume with readOnly: false
We could change our yamls to not report change, except for the volumeClaimTemplates issue.
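The same recipe as commands (the file name is a placeholder):

    # 1. an empty diff should mean "unchanged" on apply
    kubectl diff -f deployment.yaml
    # 2. otherwise, dump the verbose request log and look at the PATCH
    #    request plus the line logged just before it (the Request Body)
    kubectl apply -f deployment.yaml -v=8 2>&1 | grep -B1 PATCH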
I found another case where kubectl apply will always report “configured,” even if the definition is unchanged. Try this file:
Running in verbose mode reveals that every time kubectl tries to apply the following patch:
This can be worked around by explicitly adding type: RollingUpdate to the strategy map, but I would still consider it a bug. (This is with kubectl v1.14.0, server v1.13.4.)
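A sketch of that workaround inside the Deployment spec (the rollingUpdate values are illustrative assumptions):

    strategy:
      type: RollingUpdate    # stating the default explicitly avoids the repeated "configured"
      rollingUpdate:
        maxSurge: 1          # illustrative values
        maxUnavailable: 0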
Unfortunately, my patch was already considered and rejected because it could be susceptible to a race condition (showing you “configured” if another applier had issued a patch at the same time as you). Apologies for getting hopes up.
To fix this without such a race condition would require a patch on the server side.
@apelisse now it sounds like a famous philosophical experiment: https://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest
“If diff decided to send a PATCH but nobody saw that patch change anything, did it happen or not?”
I also notice the unit you use in the resources section matters, e.g. (the variants are sketched below):
- This is fine (applying again will result in “unchanged”).
- This will show “configured” every time you apply (when using a decimal to define cpu).
- This is also problematic (when using “1000m” for cpu instead of “1”).
- And this (when using the integer 1 for cpu instead of the string “1”).
My feeling is that the units for these are always normalised according to certain rules when applying, but the last-applied-configuration annotation always stores the original yaml as is, causing the mismatch when comparing.
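A hedged reconstruction of the four variants described above; these are separate resources fragments, and the exact values are assumptions:

    # fine: applying again reports "unchanged"
    resources:
      limits:
        cpu: "1"

    # reports "configured" on every apply: decimal cpu
    resources:
      limits:
        cpu: 1.0

    # also problematic: "1000m" instead of "1"
    resources:
      limits:
        cpu: "1000m"

    # also problematic: integer 1 instead of string "1"
    resources:
      limits:
        cpu: 1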
I faced the same issue after adding a namespace. Still no update or fix until now?
Using Kubernetes version 1.19.3 here and I’m also experiencing the “configured” issue.
Got some resource requests and limits in my yml, as mentioned by others, but I have them in other Deployments as well, and for those yml files I get “unchanged”.
One obvious difference is the env: config in the pod spec of my Deployment for the particular file giving me the “configured” problem. Don’t know if it could be related?
/remove-lifecycle stale