kubernetes: Server-side apply: migration from client-side apply leaves stuck fields in the object

Migrating from client-side apply to server-side apply (as currently implemented in kubectl and documented here) leaves objects in a somewhat corrupted state and results in the resources being forever inconsistent with their expected state. Namely, fields that were set in the initial client-side-applied version and later removed from the manifest stay in the resource forever and are never dropped.

What happened:

If I apply a manifest using client-side apply and then switch to server-side apply with no changes, the operation succeeds. However, if I then delete a field from the manifest and run server-side apply again, the field stays in the resource instead of being removed.

What you expected to happen:

I would obviously expect the field to be removed from the resource.

How to reproduce it (as minimally and precisely as possible):

  1. Create a configmap in the cluster with client-side apply:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test
    data:
      key: value
      legacy: unused
    EOF
    
  2. Confirm that the configmap is fine:

    $ kubectl get configmap -o yaml test | egrep -A 3 '^data:'
    data:
      key: value
      legacy: unused
    kind: ConfigMap
    
  3. Apply the same manifest with server-side apply:

    cat <<EOF | kubectl apply --server-side -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test
    data:
      key: value
      legacy: unused
    EOF
    
  4. Confirm that it wasn’t changed:

    $ kubectl get configmap -o yaml test | egrep -A 3 '^data:'
    data:
      key: value
      legacy: unused
    kind: ConfigMap
    
  5. Remove one of the values from the configmap and apply again using server-side apply:

    cat <<EOF | kubectl apply --server-side -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test
    data:
      key: value
    EOF
    
  6. Check the configmap, expecting the legacy value to be removed:

    $ kubectl get configmap -o yaml test | egrep -A 3 '^data:'
    data:
      key: value
      legacy: unused
    kind: ConfigMap
    
  7. As you can see, the legacy value is still there.

Anything else we need to know?:

This happens because the client-side apply results are recorded in managedFields with an Update operation by the kubectl-client-side-apply field manager. When the resource is migrated to server-side apply, the results are instead tracked as an Apply operation by the kubectl field manager. If there are no conflicts between the two field managers (and naturally you wouldn’t expect any, since you’re trying to convert the same resource to SSA), both field managers end up owning the fields and the client-side apply manager never gets kicked out. If you then remove a field from the set of fields managed by SSA, the field is still owned by the old Update entry from kubectl-client-side-apply and therefore won’t be removed.
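
For illustration, after step 3 the ConfigMap’s managedFields should look roughly like this (an abbreviated sketch: fieldsV1 contents are trimmed, timestamps are omitted, and the --show-managed-fields flag only exists on newer kubectl versions, which hide managedFields by default):

    $ kubectl get configmap test -o yaml --show-managed-fields
    ...
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:data:
            f:key: {}
            f:legacy: {}
          ...
        manager: kubectl-client-side-apply
        operation: Update
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:data:
            f:key: {}
            f:legacy: {}
        manager: kubectl
        operation: Apply
    ...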

Environment:

  • Kubernetes version (use kubectl version):

    Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T21:51:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
    

About this issue

  • State: open
  • Created 3 years ago
  • Reactions: 11
  • Comments: 37 (18 by maintainers)

Most upvoted comments

Julian and I talked about this and we’ll see how we can address it specifically for kubectl.

Again, I don’t expect this to be a very frequent process, or that it’ll be widely used (especially outside kubectl).

@apelisse @julianvmodesto We are also trying to figure out how to migrate from typical read-modify-update/patch logic to SSA in our controllers. Because of the problem described in this issue, controllers won’t be able to remove optional fields from resources anymore after migrating to SSA.

This applies to both of the following cases:

  • if a resource was created before SSA and does not track ownership in managedFields yet, applying for the first time will add a before-first-apply field manager entry owning all fields currently present on the object. When the controller applies the resource again and removes an optional field from its apply configuration, it will only give up ownership; the field will not be removed, as it’s still owned by the before-first-apply manager (see the kubectl sketch after this list).
  • if a resource was created after SSA and does already track ownership in managedFields, it’s basically the same as above, but now the existing fields are owned by the same field manager under the Update operation. Thus, optional fields will also not be removed in this case. This is pretty much what @aermakov-zalando described in the steps to reproduce above, only via a controller instead of kubectl.
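
For concreteness, here’s a rough kubectl sketch of the first case (the managedFields-stripping patch is the documented way to clear them and simulates an object created before managedFields tracking; my-controller is just a placeholder manager name):

    # Create an object and strip its managedFields to simulate a pre-SSA resource.
    kubectl create configmap test2 --from-literal=key=value --from-literal=legacy=unused
    kubectl patch configmap test2 --type=merge -p '{"metadata":{"managedFields":[{}]}}'

    # First server-side apply: the server attributes all pre-existing fields to a
    # synthetic "before-first-apply" manager with operation Update.
    cat <<EOF | kubectl apply --server-side --field-manager=my-controller -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test2
    data:
      key: value
      legacy: unused
    EOF

    # Dropping "legacy" from the apply configuration only gives up my-controller's
    # ownership; "before-first-apply" still owns the field, so it stays.
    cat <<EOF | kubectl apply --server-side --field-manager=my-controller -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test2
    data:
      key: value
    EOF

    kubectl get configmap test2 -o yaml | egrep -A 3 '^data:'   # legacy: unused is still present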

I think it’s a valid and common use case for controllers to use SSA as described here (always apply resources without thinking about whether certain optional fields were removed), because it makes the code much simpler. However, because of this issue, controllers will not be able to migrate to SSA for any resources they created before SSA. It would be great to see support for this. Let me know if I can help you in any way to get there!

@julianvmodesto @apelisse

I saw this linked PR that got closed; was it not a good solution to this problem? https://github.com/kubernetes/kubernetes/pull/99277

I’m interested because of a previous issue I opened (https://github.com/kubernetes/kubernetes/issues/107828), which seems to be related both to this issue and to https://github.com/kubernetes/kubernetes/issues/107417.

Also, to note about my current use case: I am using a field manager other than kubectl, but I provided kubectl examples because they are a much easier reproducer than my Python code.

I seem to be having issues removing fields due to the before-first-apply field manager taking over all the fields of an object (I’m not sure if there are plans to remove this field manager in the future). I am currently working around these conflict issues by manually taking the result produced for the before-first-apply manager (as returned by a server-side dry-run) and renaming that field manager entry to my own manager, roughly as sketched below. It would be great if this weren’t necessary at all and could be fixed, as it isn’t atomic and is prone to conflicts.
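
For reference, a rough sketch of that manual surgery with kubectl and jq, reusing the test2 ConfigMap from the sketch above (my-manager is a placeholder name; as noted, this is a plain read-modify-write of metadata.managedFields, so it isn’t atomic):

    $ kubectl get configmap test2 -o json --show-managed-fields \
        | jq '.metadata.managedFields |= map(
                if .manager == "before-first-apply" then .manager = "my-manager" else . end
              )' \
        | kubectl replace -f -
    # --show-managed-fields is only needed (and only exists) on kubectl >= 1.21,
    # which hides managedFields in get output by default.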

@julianvmodesto That might be an option for the client (kubectl) too: just remove the “kubectl-client-side-apply” manager when server-side apply is run.
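
For the ConfigMap from the reproduction steps, that cleanup might look roughly like this (a sketch of the suggestion, not an official migration path). Once the kubectl-client-side-apply entry is gone, only the SSA kubectl manager owns data.legacy, so the next kubectl apply --server-side without that key should remove it:

    $ kubectl get configmap test -o json --show-managed-fields \
        | jq 'del(.metadata.managedFields[] | select(.manager == "kubectl-client-side-apply"))' \
        | kubectl replace -f -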