argo-cd: ServerSideApply fails with "conversion failed"

Checklist:

  • I’ve searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I’ve included steps to reproduce the bug.
  • I’ve pasted the output of argocd version.

Describe the bug

Using ServerSideApply, configured in an Application via Sync Options, fails with

error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1.CronJob) to (v1beta1.CronJob): unknown conversion

Triggering Server-Side Apply only via the “Sync” button, without it being configured for the app, works, though.

To Reproduce

  • Have a CronJob with apiVersion batch/v1 or an HPA with apiVersion autoscaling/v2beta2 synced without SSA
  • Activate ServerSideApply in the App details (see the example Application snippet after this list)
  • => the sync most likely fails instantly
  • If not, try to sync manually with the “Server-Side Apply” option
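
For reference, this is roughly what the configuration looks like when ServerSideApply is enabled via the Application’s Sync Options (a minimal sketch; the app name, repo and destination below are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app          # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/my-repo.git   # placeholder
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace                    # placeholder
  syncPolicy:
    syncOptions:
      - ServerSideApply=true                   # the relevant option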

Expected behavior

ServerSideApply should work in both cases (app config + manual sync).

Screenshots

Application configuration which breaks: [Screenshot 2022-11-01 at 13:49:03]

Using it only with the Sync button works: [Screenshot 2022-11-01 at 13:50:44]

Version

argocd: v2.5.0+b895da4
  BuildDate: 2022-10-25T14:40:01Z
  GitCommit: b895da457791d56f01522796a8c3cd0f583d5d91
  GitTreeState: clean
  GoVersion: go1.18.7
  Compiler: gc
  Platform: linux/amd64
argocd-server: v2.5.0+b895da4
  BuildDate: 2022-10-25T14:40:01Z
  GitCommit: b895da457791d56f01522796a8c3cd0f583d5d91
  GitTreeState: clean
  GoVersion: go1.18.7
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v4.5.7 2022-08-02T16:35:54Z
  Helm Version: v3.10.1+g9f88ccb
  Kubectl Version: v0.24.2
  Jsonnet Version: v0.18.0

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 9
  • Comments: 23 (9 by maintainers)

Most upvoted comments

We’re seeing the same issue with ClusterRole, ClusterRoleBinding.

We are seeing this in 2.8 with HPA, ClusterRole, ClusterRoleBinding and Role resources, on clusters that have all been properly upgraded and whose resource manifests have been updated, but which were created back when these beta API versions still existed in Kubernetes (they have since been removed).

We run into similar issues when enabling SSA for our apps. However, the issue isn’t consistent between clusters/apps (the same app/resource might work on one but not the other).

What is strange to me is that from your error message it seems that Argo CD is trying to convert from v1.CronJob to v1beta1.CronJob. Not sure why it is trying to go with an older version. That would only make sense if you are applying a CronJob with v1beta1.

@leoluz I believe managedFields are to blame. They include an apiVersion field that might reference an older (beta) version.

Managed fields of an affected `Ingress` resource:
metadata:
  managedFields:
    - apiVersion: networking.k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:alb.ingress.kubernetes.io/actions.ssl-redirect: {}
            f:alb.ingress.kubernetes.io/certificate-arn: {}
            f:alb.ingress.kubernetes.io/listen-ports: {}
            f:alb.ingress.kubernetes.io/scheme: {}
            f:alb.ingress.kubernetes.io/ssl-policy: {}
            f:alb.ingress.kubernetes.io/target-type: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/instance: {}
      manager: kubectl
      operation: Update
      time: "2021-05-28T16:20:40Z"
    - apiVersion: networking.k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers: {}
      manager: controller
      operation: Update
      time: "2021-08-02T09:10:54Z"
    - apiVersion: networking.k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:ingressClassName: {}
      manager: argocd-application-controller
      operation: Update
      time: "2021-08-02T09:18:03Z"
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            v:"group.ingress.k8s.aws/argo-ingresses": {}
        f:status:
          f:loadBalancer:
            f:ingress: {}
      manager: controller
      operation: Update
      time: "2022-03-21T15:25:24Z"
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:alb.ingress.kubernetes.io/group.name: {}
            f:alb.ingress.kubernetes.io/load-balancer-attributes: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          f:rules: {}
      manager: argocd-application-controller
      operation: Update
      time: "2022-08-15T11:22:05Z"
  name: argocd
  namespace: argocd
  resourceVersion: "206036857"
  uid: 3df56465-962b-42bb-9075-e61740b636cc

Managed fields of the corresponding resource (same name/namespace) on a different cluster (the only difference being cluster/app age):
metadata:
  managedFields:
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:alb.ingress.kubernetes.io/actions.ssl-redirect: {}
            f:alb.ingress.kubernetes.io/certificate-arn: {}
            f:alb.ingress.kubernetes.io/group.name: {}
            f:alb.ingress.kubernetes.io/listen-ports: {}
            f:alb.ingress.kubernetes.io/scheme: {}
            f:alb.ingress.kubernetes.io/ssl-policy: {}
            f:alb.ingress.kubernetes.io/target-type: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/instance: {}
        f:spec:
          f:ingressClassName: {}
          f:rules: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2022-05-05T15:11:18Z"
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"group.ingress.k8s.aws/argo-ingresses": {}
        f:status:
          f:loadBalancer:
            f:ingress: {}
      manager: controller
      operation: Update
      time: "2022-05-05T15:11:20Z"
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:alb.ingress.kubernetes.io/load-balancer-attributes: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
      manager: argocd-application-controller
      operation: Update
      time: "2022-08-15T11:21:51Z"

It also explains why recreating works - it clears the managedFields.

Sadly, this doesn’t yet help me resolve the issue without re-creating the resources (I haven’t found a way to clear/edit the managedFields).
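
If it helps: the Kubernetes field-management docs describe stripping managedFields by overwriting them with a patch that contains a single empty entry. Something along these lines (untested in this exact setup; the kind/name/namespace are just taken from the example above) might avoid re-creating the resource:

# Merge-patch body that resets managedFields; it could be sent e.g. with
#   kubectl patch ingress argocd -n argocd --type=merge -p '{"metadata":{"managedFields":[{}]}}'
metadata:
  managedFields:
    - {}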

Using version 2.5.1 and having similar issues:

error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1beta1.PodDisruptionBudget) to (v1.PodDisruptionBudget): unknown conversion

error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v2beta2.HorizontalPodAutoscaler) to (v1.HorizontalPodAutoscaler): unknown conversion

Same behavior with 2.5.2: ComparisonError: error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1.Ingress) to (v1beta1.Ingress): unknown conversion

Adding Ingress in case someone hits the issue with that resource.

We noticed a very strange behavior here. We saved the affected CronJob manifest locally, deleted it on Kubernetes and re-created it (so it’s the exact same manifest, just re-created). After that, Argo was able to sync the application. One thing is that those CronJobs were created with an older API version in the past, but we upgraded them to batch/v1 long ago, and Kubernetes also shows them as batch/v1. We don’t know why re-creation helps in that case.

Thanks for the additional info. That actually makes sense. I’ll try to reproduce this error locally anyway.