kubernetes: Server-side apply always conflicts when changing values, even when the fieldManager is the same

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

I applied this resource with kubectl apply:
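
(The applied manifest itself is not reproduced in the report; the sketch below reconstructs it from the kubectl.kubernetes.io/last-applied-configuration annotation in the output further down, and assumes the pskubectltest namespace already exists.)

# Apply of the original Deployment, reconstructed from the
# last-applied-configuration annotation shown in the kubectl get output below.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: pskubectltest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      annotations:
        hello: world
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: strm/helloworld-http@sha256:bd44b0ca80c26b5eba984bf498a9c3bab0eb1c59d30d8df3cb2c073937ee4e45
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
EOF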

This is the state reported by kubectl get deploy -o yaml:

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"hello-world","namespace":"pskubectltest"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"hello-world"}},"template":{"metadata":{"annotations":{"hello":"world"},"labels":{"app":"hello-world"}},"spec":{"containers":[{"image":"strm/helloworld-http@sha256:bd44b0ca80c26b5eba984bf498a9c3bab0eb1c59d30d8df3cb2c073937ee4e45","imagePullPolicy":"IfNotPresent","name":"hello-world","ports":[{"containerPort":80,"protocol":"TCP"}]}]}}}}
    creationTimestamp: 2019-08-02T15:09:10Z
    generation: 1
    managedFields:
    - apiVersion: apps/v1
      fields:
        f:metadata:
          f:annotations:
            f:deployment.kubernetes.io/revision: null
        f:status:
          f:conditions:
            .: null
            k:{"type":"Available"}:
              .: null
              f:type: null
            k:{"type":"Progressing"}:
              .: null
              f:lastTransitionTime: null
              f:status: null
              f:type: null
          f:observedGeneration: null
          f:replicas: null
          f:updatedReplicas: null
      manager: kube-controller-manager
      operation: Update
      time: 2019-08-02T15:09:10Z
    - apiVersion: apps/v1
      fields:
        f:metadata:
          f:annotations:
            .: null
            f:kubectl.kubernetes.io/last-applied-configuration: null
        f:spec:
          f:progressDeadlineSeconds: null
          f:replicas: null
          f:revisionHistoryLimit: null
          f:selector:
            f:matchLabels:
              .: null
              f:app: null
          f:strategy:
            f:rollingUpdate:
              .: null
              f:maxSurge: null
              f:maxUnavailable: null
            f:type: null
          f:template:
            f:metadata:
              f:annotations:
                .: null
                f:hello: null
              f:labels:
                .: null
                f:app: null
            f:spec:
              f:containers:
                k:{"name":"hello-world"}:
                  .: null
                  f:image: null
                  f:imagePullPolicy: null
                  f:name: null
                  f:ports:
                    .: null
                    k:{"containerPort":80,"protocol":"TCP"}:
                      .: null
                      f:containerPort: null
                      f:protocol: null
                  f:resources: null
                  f:terminationMessagePath: null
                  f:terminationMessagePolicy: null
              f:dnsPolicy: null
              f:restartPolicy: null
              f:schedulerName: null
              f:securityContext: null
              f:terminationGracePeriodSeconds: null
      manager: kubectl
      operation: Update
      time: 2019-08-02T15:09:10Z
    - apiVersion: apps/v1
      fields:
        f:status:
          f:availableReplicas: null
          f:conditions:
            k:{"type":"Available"}:
              f:lastTransitionTime: null
              f:lastUpdateTime: null
              f:message: null
              f:reason: null
              f:status: null
            k:{"type":"Progressing"}:
              f:lastUpdateTime: null
              f:message: null
              f:reason: null
          f:readyReplicas: null
      manager: kube-controller-manager
      operation: Update
      time: 2019-08-02T15:09:12Z
    name: hello-world
    namespace: pskubectltest
    resourceVersion: "335334"
    selfLink: /apis/extensions/v1beta1/namespaces/pskubectltest/deployments/hello-world
    uid: f0c923b3-ad50-4c8b-9550-d7e558b42425
  spec:
    progressDeadlineSeconds: 600
    replicas: 2
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: hello-world
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        annotations:
          hello: world
        creationTimestamp: null
        labels:
          app: hello-world
      spec:
        containers:
        - image: strm/helloworld-http@sha256:bd44b0ca80c26b5eba984bf498a9c3bab0eb1c59d30d8df3cb2c073937ee4e45
          imagePullPolicy: IfNotPresent
          name: hello-world
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 2
    conditions:
    - lastTransitionTime: 2019-08-02T15:09:12Z
      lastUpdateTime: 2019-08-02T15:09:12Z
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: 2019-08-02T15:09:10Z
      lastUpdateTime: 2019-08-02T15:09:12Z
      message: ReplicaSet "hello-world-bf459c845" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 2
    replicas: 2
    updatedReplicas: 2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

As you can see, kubectl was recorded as the field manager for spec.template.metadata.annotations.hello.
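
To check this yourself, the recorded managers and their operations can be listed directly; a minimal sketch, noting that the --show-managed-fields flag only exists in newer kubectl releases, while the v1.15 client used here prints managedFields by default:

# List the Deployment's managedFields, including which manager recorded which
# operation (Update vs. Apply). Newer kubectl clients hide managedFields from
# -o yaml output unless this flag is passed.
kubectl get deploy hello-world -n pskubectltest -o yaml --show-managed-fields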

Now, I try to update the hello annotation with a server-side apply request with fieldManager=kubectl:

PATCH https://192.168.99.100:8443/apis/apps/v1/namespaces/pskubectltest/deployments/hello-world?fieldManager=kubectl
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: pskubectltest
spec:
  selector:
    matchLabels:
      app: hello-world
  replicas: 2
  template:
    metadata:
      annotations:
        hello: changed
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: strm/helloworld-http@sha256:bd44b0ca80c26b5eba984bf498a9c3bab0eb1c59d30d8df3cb2c073937ee4e45
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
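
(For reference, the same request can be issued with curl; a sketch assuming the PATCH body above is saved to a hypothetical file hello-world-apply.yaml and that kubectl proxy is forwarding the API on localhost:8001. The application/apply-patch+yaml content type is what marks the request as a server-side apply.)

kubectl proxy --port=8001 &

# Server-side apply: PATCH with the apply-patch+yaml content type and an
# explicit fieldManager query parameter.
curl -X PATCH \
  'http://localhost:8001/apis/apps/v1/namespaces/pskubectltest/deployments/hello-world?fieldManager=kubectl' \
  -H 'Content-Type: application/apply-patch+yaml' \
  --data-binary @hello-world-apply.yaml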

This is expected to succeed because kubectl is the field manager of that annotation.

Instead, it fails with Conflict: Apply failed with 1 conflict: conflict with "kubectl" using apps/v1 at 2019-08-02T15:09:10Z: .spec.template.metadata.annotations.hello

A client-side kubectl apply updates it just fine, so it is unfortunate that server-side apply cannot.
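
As the comments below point out, the earlier client-side apply was recorded as an imperative Update operation, while server-side apply registers an Apply operation, which is why the matching manager name still conflicts. If overriding the recorded ownership is acceptable, the apply can be forced; a sketch using the force query parameter of PatchOptions, with the same hypothetical proxy setup and file as above:

# Same server-side apply request, but with force=true so the "kubectl" Apply
# operation takes ownership of the conflicting fields.
curl -X PATCH \
  'http://localhost:8001/apis/apps/v1/namespaces/pskubectltest/deployments/hello-world?fieldManager=kubectl&force=true' \
  -H 'Content-Type: application/apply-patch+yaml' \
  --data-binary @hello-world-apply.yaml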

The same thing seems to happen with any property that is changed.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.9-gke.7", GitCommit:"b6001a5d99c235723fc19342d347eee4394f2005", GitTreeState:"clean", BuildDate:"2019-06-24T19:37:31Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: minikube
  • OS (e.g: cat /etc/os-release): macOS
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others: --feature-gates=ServerSideApply=true

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 40 (30 by maintainers)

Most upvoted comments

We’re missing one last approval on the PR and then we’re good, thanks for the reminder!

Yes, absolutely: we’ve made a lot of progress on that and we’re almost ready, thanks!

A lot has been done for this already, and we’re continuing to work on it. We’re really hoping to get it into 1.19, yes. Thanks!

@julianvmodesto I was talking with @seans3 yesterday about the default manager in kubectl.

We realized that:

  1. We don’t specify a manager for most update operations in kubectl, and rely on the user-agent to be detected as “kubectl”
  2. We don’t have a way to configure this besides kubectl apply --server-side
  3. The default for server-side apply is also “kubectl”

Wouldn’t it improve the experience here if we built better manager names for update operations coming from kubectl, e.g. client-side apply (kubectl-client-side-apply), scale (kubectl-scale), annotate (kubectl-annotate), etc.? I think it would mostly fix this issue.
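
For illustration, this is roughly what that separation would look like from the command line; a sketch assuming a kubectl release that has the --field-manager flag (it is not in the v1.15 client used above) and a hypothetical manifest file deploy.yaml:

# Client-side apply: recorded as an Update operation (manager "kubectl" here,
# "kubectl-client-side-apply" in releases that implemented the rename).
kubectl apply -f deploy.yaml

# Server-side apply under an explicit, distinct manager name.
kubectl apply --server-side --field-manager=kubectl-server-side-apply -f deploy.yaml

# Under the proposal, other imperative commands would get their own manager
# names, e.g. kubectl-scale and kubectl-annotate.
kubectl scale deploy/hello-world -n pskubectltest --replicas=3
kubectl annotate deploy/hello-world -n pskubectltest hello=changed --overwrite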

It’s partially improved; we might send a PR to improve the error.

@apelisse @jennybuckley do we have a plan to resolve this? It makes sense in retrospect, but it’s definitely a sharp edge: just now I reread this and thought there must be some major bug, until I kept reading and reached past!me’s explanation.

Doc fixes might not be enough; how hard would it be to add the explanation to the conflict error? Here are some ideas:

  • Conflict: Apply failed with 1 conflict: conflict with "kubectl" using apps/v1 at 2019-08-02T15:09:10Z: .spec.template.metadata.annotations.hello: prior use of this field was imperative, not via APPLY
  • Conflict: Apply failed with 1 conflict: conflict with "kubectl" (imperative via PUT/PATCH) using apps/v1 at 2019-08-02T15:09:10Z: .spec.template.metadata.annotations.hello
  • Conflict: Apply failed with 1 conflict: conflict with "kubectl" using apps/v1 at 2019-08-02T15:09:10Z: .spec.template.metadata.annotations.hello. (Although the manager matches, the prior set was imperative and the current one is declarative)

Additionally, the final fix (probably not for 1.18) will of course include a smooth on-ramp from client-side apply, but I think we could improve this error message in the meantime?

I haven’t heard back on this. I tried a Slack DM as well. I am going to move this into the v1.18 milestone. /milestone v1.18