argo-cd: ServerSideApply fails with "conversion failed"
Checklist:
- I’ve searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- I’ve included steps to reproduce the bug.
- I’ve pasted the output of `argocd version`.
Describe the bug

Using ServerSideApply, configured in an Application via Sync Options, fails with:

```
error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1.CronJob) to (v1beta1.CronJob): unknown conversion
```

Using it only via the “Sync” button, without having it configured for the app, works, though.
To Reproduce

- Have a CronJob with apiVersion `batch/v1` or an HPA with apiVersion `autoscaling/v2beta2`, synced without SSA
- Activate ServerSideApply in the App details
- => most likely fails instantly
- If not, try to sync manually with the “Server-Side Apply” option
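For reference, the sync option in question is set on the Application spec; a minimal sketch (name and omitted fields are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app        # placeholder
  namespace: argocd
spec:
  # source / destination / project omitted
  syncPolicy:
    syncOptions:
      - ServerSideApply=true   # apply all resources of this app server-side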
Expected behavior

ServerSideApply should work in both cases (app config + manual sync).
Screenshots

Application configuration which breaks:
Using it only with the Sync button works:
Version
argocd: v2.5.0+b895da4
BuildDate: 2022-10-25T14:40:01Z
GitCommit: b895da457791d56f01522796a8c3cd0f583d5d91
GitTreeState: clean
GoVersion: go1.18.7
Compiler: gc
Platform: linux/amd64
argocd-server: v2.5.0+b895da4
BuildDate: 2022-10-25T14:40:01Z
GitCommit: b895da457791d56f01522796a8c3cd0f583d5d91
GitTreeState: clean
GoVersion: go1.18.7
Compiler: gc
Platform: linux/amd64
Kustomize Version: v4.5.7 2022-08-02T16:35:54Z
Helm Version: v3.10.1+g9f88ccb
Kubectl Version: v0.24.2
Jsonnet Version: v0.18.0
About this issue
- Original URL
- State: open
- Created 2 years ago
- Reactions: 9
- Comments: 23 (9 by maintainers)
We’re seeing the same issue with ClusterRole, ClusterRoleBinding.
We are seeing this in 2.8 with HPA, ClusterRole, ClusterRoleBinding and Roles, on clusters that have all been properly upgraded and whose resource manifests were updated, but the clusters were created back when these beta API versions still existed in Kubernetes (they have since been removed).
We run into similar issues when enabling SSA for our apps. However, the issue isn’t consistent between clusters/apps (the same app/resource might work on one but not the other).
@leoluz I believe `managedFields` are to blame. They include an `apiVersion` field that might reference an older (beta) version.

Managed fields of an affected `Ingress` resource:

Managed fields of the corresponding resource (same name / namespace) on a different cluster (only the cluster / app age differs):

It also explains why recreating works: it clears the `managedFields`.

Sadly, this does not yet help me resolve the issue without recreating the resources (I haven’t found a way to clear or edit the managedFields).
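To illustrate the hypothesis, here is a small sketch (not Argo CD code) that scans a resource, as returned by the Kubernetes API, for `managedFields` entries whose `apiVersion` differs from the resource’s own — the kind of stale entry that would force a conversion during the structured merge diff:

```python
def stale_managed_field_versions(resource: dict) -> list:
    """Return managedFields apiVersions that differ from the resource's own.

    Server-side apply tracks field ownership per apiVersion; an entry left
    over from an older (e.g. beta) version is a candidate cause for the
    "unknown conversion" errors in this issue.
    """
    current = resource.get("apiVersion")
    entries = resource.get("metadata", {}).get("managedFields", [])
    return sorted({e["apiVersion"] for e in entries if e.get("apiVersion") != current})

# Example resource (trimmed), shaped like `kubectl get ... --show-managed-fields -o json` output:
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "example",
        "managedFields": [
            {"manager": "argocd", "apiVersion": "networking.k8s.io/v1"},
            {"manager": "nginx", "apiVersion": "networking.k8s.io/v1beta1"},  # stale entry
        ],
    },
}

print(stale_managed_field_versions(ingress))  # -> ['networking.k8s.io/v1beta1']
```

A non-empty result on an affected resource, and an empty one on its working counterpart, would support the managed-fields explanation.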
Using version 2.5.1 and having similar issues:

```
error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1beta1.PodDisruptionBudget) to (v1.PodDisruptionBudget): unknown conversion
```

and

```
error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v2beta2.HorizontalPodAutoscaler) to (v1.HorizontalPodAutoscaler): unknown conversion
```
Same behavior with 2.5.2:

```
ComparisonError: error calculating structured merge diff: error calculating diff: error while running updater.Apply: converting (v1.Ingress) to (v1beta1.Ingress): unknown conversion
```

Adding Ingress in case someone hits the issue with that resource.
Thanks for the additional info. That actually makes sense. What is strange to me is that, from your error message, it seems Argo CD is trying to convert from `v1.CronJob` to `v1beta1.CronJob`. Not sure why it is trying to go with an older version. That would only make sense if you were applying a CronJob with v1beta1.

I’ll try to reproduce this error locally anyway.
We noticed a very strange behavior here. We saved the affected CronJob manifest locally, deleted it on Kubernetes and re-created it (so it’s the exact same manifest, just re-created). After that, Argo was able to sync the application. One thing is that those CronJobs were created with an older API version in the past, but we upgraded them to `batch/v1` long ago, and Kubernetes also shows them as `batch/v1`. Don’t know why re-creation helps in that case.