kubernetes: Inconsistent use of "configured" vs. "unchanged" when applying configuration files

Is this a BUG REPORT or FEATURE REQUEST?: bug report

/kind bug

What happened: When deploying an unchanged resource to Kubernetes using kubectl apply, output sometimes states [resource kind] "example" configured.

What you expected to happen: For the output to state [resource kind] "example" unchanged instead.

How to reproduce it (as minimally and precisely as possible): I attempted to search the codebase to determine how Kubernetes or kubectl decides whether to output “configured” vs. “unchanged” when applying a configuration file, but was unsuccessful. With a better understanding of how that determination is made, I could most likely provide more examples. One example appears in the next paragraph.

Anything else we need to know?: There appear to be a number of inconsistencies in how Kubernetes determines whether to output that a resource has been “configured” or is “unchanged.” I’ll do my best to explain all that I could find.

When applying Deployments locally to Minikube, if the configuration file is created right before running kubectl apply, the Deployment is always labeled “configured,” even when its contents are identical to the previously applied file. Services and Ingresses are not affected: their configuration files can also be created just before applying, and the output correctly states “configured” or “unchanged” depending on whether the configuration actually changed. We suspected that for Deployments the determination was based on the file’s last modified time, so we forced a static last modified time before running kubectl apply, but that did not fix the issue. We therefore don’t believe the determination is based on the last modified time.
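
The mtime experiment described above can be sketched as follows (filenames and the manifest contents are hypothetical stand-ins; the kubectl step requires a cluster, so it is only shown as a comment):

```python
import os
import tempfile

# Write a manifest to a temp file (stand-in for the real deployment.yaml).
manifest = "apiVersion: apps/v1\nkind: Deployment\n"
path = os.path.join(tempfile.mkdtemp(), "deployment.yaml")
with open(path, "w") as f:
    f.write(manifest)

# Pin both the access and modification times to a fixed instant, so
# repeated runs present an identical mtime to any mtime-based comparison.
FIXED_TIME = 1_500_000_000  # arbitrary fixed epoch seconds
os.utime(path, (FIXED_TIME, FIXED_TIME))

assert os.stat(path).st_mtime == FIXED_TIME

# With the mtime pinned, run `kubectl apply -f deployment.yaml` as usual
# (requires a cluster, so not executed here). The Deployment was still
# reported as "configured", ruling out mtime as the deciding factor.
```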

However, when applying Ingresses to GKE, they are affected by this issue, even though they weren’t when applying to Minikube. Services, on the other hand, still aren’t affected by this issue on GKE.

In addition, some resource kinds seem to always be considered “configured,” even when you apply the same file back-to-back. For example, if test.yml contains:

apiVersion: v1
kind: Namespace
metadata:
  name: "test"

…and you run kubectl apply -f test.yml multiple times, it will always state namespace "test" configured even though the configuration file isn’t changing in any way. This seems to be true of CustomResourceDefinitions and StorageClasses too (probably others as well).

In general, what I’d like to see is consistent use of “configured” vs. “unchanged” when applying a resource via kubectl apply. This functionality appears to work now for some kinds of resources, but not all kinds. You might be wondering, why is this even important? When applying a large group of resource files, it’s very helpful to know which ones have actually changed and which ones haven’t. It would become immediately obvious when a resource changes when it shouldn’t have, and you could investigate the issue right away, without waiting for that problem to manifest itself in other ways. As it stands now, so many are labeled as “configured” that it’s not useful or accurate to rely on this statement.
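
For illustration, here is a simplified mental model of the behavior I would expect (this is not kubectl’s actual code, which I could not locate): the reported verb should hinge solely on whether the computed patch is empty.

```python
def apply_verb(patch: bytes) -> str:
    """Simplified model (not kubectl's real implementation): if the
    computed patch is empty, report "unchanged"; otherwise "configured"."""
    return "unchanged" if patch in (b"", b"{}") else "configured"

# An empty patch should read as unchanged...
assert apply_verb(b"{}") == "unchanged"

# ...but a patch that is textually non-empty, even when it changes nothing
# on the server, still reads as configured. That gap between "non-empty
# patch" and "actual change" is the inconsistency described in this issue.
assert apply_verb(b'{"spec":{"strategy":{"$retainKeys":["rollingUpdate"]}}}') == "configured"
```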

Environment:

  • Kubernetes version (use kubectl version): v1.10.0 (Minikube) v1.10.4-gke.2 (GKE)
  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): Locally, macOS 10.13.5
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 31
  • Comments: 53 (17 by maintainers)

Most upvoted comments

This is also the case for StatefulSet resources that define volumeClaimTemplates. Even though the resource is unchanged, kubectl will still try to patch the claim template. Can be reproduced with the following minimal yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  selector:
    matchLabels:
      app: foo
  serviceName: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - image: busybox
        name: foo
  volumeClaimTemplates:
  - metadata:
      name: foo
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

---
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  ports:
  - name: foo
    port: 1234
  selector:
    app: foo

Applying for the first time:

$ kubectl apply -f foo.yaml 
statefulset.apps/foo created
service/foo created

and on any subsequent kubectl apply:

$ kubectl apply -f foo.yaml 
statefulset.apps/foo configured
service/foo unchanged

Increasing verbosity shows that kubectl attempts to submit the following patch: {"spec":{"volumeClaimTemplates":[{"metadata":{"name":"foo"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}]}}

StatefulSets without volumeClaimTemplates do not have this issue. I tried every combination of fields and formats for the volumeClaimTemplates that I could imagine, but the result is always the same. Tested on Minikube running version v1.14.0 and on GKE version 1.12.5-gke.5. The kubectl version used is v1.14.1.

This issue prevents GitOps tools like Flux ( https://github.com/fluxcd ) and ArgoCD ( https://github.com/argoproj ) from ever finishing their sync of resources. In other words, GitOps tools keep trying to reconcile the desired state in the repository with the actual state of the cluster resources until kubectl apply reports unchanged. However, for some resources, especially custom resource definitions, kubectl will always report configured. Example:

$ kubectl apply -f https://raw.githubusercontent.com/traefik/traefik-helm-chart/master/traefik/crds/ingressroute.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us created
$ kubectl apply -f https://raw.githubusercontent.com/traefik/traefik-helm-chart/master/traefik/crds/ingressroute.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us configured
$ kubectl apply -f https://raw.githubusercontent.com/traefik/traefik-helm-chart/master/traefik/crds/ingressroute.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us configured
...

Please, this needs to be fixed.

😄 if diff returns nothing, then nothing happened 😉

I don’t think this is resolved

Thanks

/remove-lifecycle stale

My own research, applying cert-manager directly:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml

That will cause a few resources to re-configure even though there are no changes.

I used @kautsig 's technique and found the culprits. The manifest above has an annotations: map with nothing in it. It also has apiGroup: "" in other resources. I imagine these get dropped when applied to the server; kubectl get <resource> -o yaml does not show them.

But if I run kubectl apply --server-dry-run or actually apply, it shows configured. Oddly enough, kubectl diff does not show any diffs, which means that apply and diff behave differently, directly contradicting this: https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ (last section).

I’m not sure who is at fault here. Cert-manager for using these no-op values? The Kubernetes server for dropping them? Or the diff/apply logic for not accounting for these false positives?

FWIW: one of the empty fields was inside a deployment template, but the other was the RoleBinding.subjects.apiGroup field, so it was not inside a template.
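
The empty-field effect can be sketched like this (the naive_patch helper is hypothetical, not kubectl code; real apply uses a three-way strategic merge, but the principle of comparing local against live is the same):

```python
def naive_patch(local: dict, live: dict) -> dict:
    """Return fields present in the local manifest whose values differ
    from, or are missing in, the live object. Hypothetical helper to
    illustrate why no-op values still produce a non-empty patch."""
    patch = {}
    for key, value in local.items():
        if key not in live:
            patch[key] = value
        elif isinstance(value, dict) and isinstance(live[key], dict):
            sub = naive_patch(value, live[key])
            if sub:
                patch[key] = sub
        elif value != live[key]:
            patch[key] = value
    return patch

# The local manifest carries no-op values (empty annotations map,
# empty apiGroup string)...
local = {"metadata": {"name": "foo", "annotations": {}},
         "apiGroup": ""}

# ...which the API server drops when persisting the object.
live = {"metadata": {"name": "foo"}}

# The comparison is therefore never empty, so apply reports "configured"
# on every run even though nothing meaningful changes.
assert naive_patch(local, live) == {"metadata": {"annotations": {}},
                                    "apiGroup": ""}
```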

For those suffering from this issue, here is how to debug it:

  1. Check whether there is a difference between your resource and the applied one: kubectl diff -f deployment.yaml. If this gives you an empty result, I would expect an “unchanged” message on apply.
  2. Run kubectl apply -f deployment.yaml -v=8
  3. Look for the PATCH request. The line above it should contain the Request Body:
  4. Analyze the patch and try to change your yaml so the patch is not requested. Go to 2.

The issues we saw boil down to:

  • Resource request/limits as mentioned above
  • StatefulSets with volumeClaimTemplates as mentioned above
  • azureFile volume with readOnly: false

We could change our yamls to avoid reporting a change, except for the volumeClaimTemplates issue.

I found another case where kubectl apply will always report configured, even if the definition is unchanged. Try this file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  strategy:
    rollingUpdate: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: busybox
        name: busybox
        args: ['sleep', '99999']

Running in verbose mode reveals that every time kubectl tries to apply the following patch:

{"spec":{"strategy":{"$retainKeys":["rollingUpdate"]}}}

This can be worked around by explicitly adding type: RollingUpdate to the strategy map, but I would still consider it a bug.

(This is with kubectl v1.14.0, server v1.13.4.)
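
The $retainKeys directive’s semantics can be sketched as follows (a simplified interpretation of strategic merge patch, not the real apimachinery code; the field values below are hypothetical server defaults): keys not listed in $retainKeys are cleared from the live object, so the patch is never a no-op from the server’s perspective.

```python
def apply_strategic_patch(live: dict, patch: dict) -> dict:
    """Simplified strategic-merge-patch sketch: honor $retainKeys by
    dropping unlisted keys, then merge remaining patch fields recursively."""
    patch = dict(patch)  # don't mutate the caller's patch
    retain = patch.pop("$retainKeys", None)
    result = dict(live)
    if retain is not None:
        result = {k: v for k, v in result.items() if k in retain}
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = apply_strategic_patch(result[key], value)
        else:
            result[key] = value
    return result

# The live strategy after the server fills in defaults (values assumed):
live = {"type": "RollingUpdate",
        "rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}}

# The strategy portion of the patch kubectl keeps sending for
# `strategy: {rollingUpdate: {}}`:
patch = {"$retainKeys": ["rollingUpdate"]}

# Applying it clears the defaulted `type` field, so the patch is never
# empty and kubectl keeps reporting "configured".
assert apply_strategic_patch(live, patch) == {
    "rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}}
```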

Unfortunately, my patch was already considered and rejected because it could be susceptible to a race condition (showing you “configured” if another applier had issued a patch at the same time as you).

Apologies for getting hopes up.

Fixing this without such a race condition would require a patch on the server side.

@apelisse now it sounds like a famous philosophical experiment: https://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest

“If diff decided to send a PATCH but nobody saw that patch change anything, did it happen or not?”

I also noticed that the unit you use in the resources section matters.

e.g.

This is fine (apply again will result in unchanged)

resources:
  requests:
    cpu: "500m"
    memory: "32Mi"
  limits:
    cpu: "1"
    memory: "128Mi"

This will show configured every time you apply (when using a decimal to define cpu)

resources:
  requests:
    cpu: "0.5"
    memory: "32Mi"
  limits:
    cpu: "1"
    memory: "128Mi"

This is also problematic (when using “1000m” for cpu instead of “1”)

resources:
  requests:
    cpu: "500m"
    memory: "32Mi"
  limits:
    cpu: "1000m"
    memory: "128Mi"

And this (when using integer 1 for cpu instead of string “1”)

resources:
  requests:
    cpu: "500m"
    memory: "32Mi"
  limits:
    cpu: 1
    memory: "128Mi"

My feeling is, the units for these are always normalised according to certain rules when applying, but the last-applied-configuration annotation always store the original yaml as is, causing the mismatch when comparing.
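
That suspicion can be sketched with a toy parser (hypothetical and simplified; the real apimachinery resource.Quantity code handles many more forms): every spelling above denotes the same amount of CPU once normalized, but the strings stored in the last-applied-configuration annotation differ, so a textual comparison sees a change.

```python
def cpu_to_millicores(quantity) -> int:
    """Parse a CPU quantity into millicores. Hypothetical simplified
    parser: handles only the "<n>m" suffix and plain decimal cores."""
    s = str(quantity)
    if s.endswith("m"):
        return int(s[:-1])
    return int(float(s) * 1000)

# All of these spellings denote the same quantity once normalized...
assert cpu_to_millicores("0.5") == cpu_to_millicores("500m") == 500
assert cpu_to_millicores("1000m") == cpu_to_millicores("1") == cpu_to_millicores(1) == 1000

# ...but as raw strings (the form kept in last-applied-configuration)
# they differ, which would make a textual comparison report a change.
assert "0.5" != "500m" and "1000m" != "1"
```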

I faced the same issue after adding a namespace. Is there still no update or fix?

Using Kubernetes version 1.19.3 here and I’m also experiencing the “configured” issue.

My yml has some of the resource requests and limits mentioned by others, but I have them in other Deployments as well, and for those yml files I get “unchanged”.

One obvious difference is the env: config for the pod spec in the Deployment in the particular file giving me the “configured” problem. I don’t know if it could be related.

/remove-lifecycle stale