argo-cd: App sync fails due to a `semverCompare` conditional seemingly not being evaluated
Checklist:

- I’ve searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- I’ve included steps to reproduce the bug.
- I’ve pasted the output of `argocd version`.
Describe the bug

This bug/question was first raised in the #argo-cd Slack channel.

When trying to sync an application to a namespace, the Deployment resource fails with the following error message:

```
error validating data: ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[1]): unknown field "subPathExpr" in io.k8s.api.core.v1.VolumeMount
```

Here is an excerpt from the `deployment.yaml` template we’re using:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          volumeMounts:
            - name: log
              mountPath: /app/logs
              {{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.Version }}
              subPathExpr: {{ .Release.Name }}/$(APP)
              {{- else }}
              subPath: {{ .Release.Name }}/$(APP)
              {{- end }}
```
It looks like the conditional above isn’t evaluated. Hence, `subPathExpr` is used, which causes the application sync to fail. The failure itself is correct: the target cluster runs Kubernetes v1.13.9+icp-ee, and `subPathExpr` does not exist in that version.
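To see which version string the conditional is actually comparing against, a throwaway debug template can render the built-in object directly. A minimal sketch (the file and ConfigMap names are hypothetical, not part of our chart):

```yaml
# templates/debug-kubeversion.yaml (hypothetical): renders the value that
# semverCompare sees. With a plain `helm template` run, this is Helm's
# built-in default version, not the target cluster's version.
apiVersion: v1
kind: ConfigMap
metadata:
  name: debug-kubeversion
data:
  kubeVersion: {{ .Capabilities.KubeVersion.Version | quote }}
```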
Note! When I created the app, I used the Argo CD CLI and specified Helm v3, like so:

```sh
argocd app create myApp --project myProject --repo https://somewhere/myApp.git --path . --dest-namespace myNamespace --dest-server myCluster --revision HEAD --helm-version v3
```
To Reproduce

1. Add a `volumeMounts` section like the one above, including the `semverCompare` conditional, to a deployment template.
2. Create a new app in Argo CD, setting the Helm version to v3 via the `--helm-version v3` option.
3. Try to sync the app to a namespace in a cluster with a Kubernetes version < 1.14-0.
I’ve tried executing the exact same `helm template` command that Argo CD runs as part of step 3 above locally (see the Logs section below). When I do that, the template renders correctly, without any error message.
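For reference, this is roughly how I check which branch of the conditional gets rendered locally (a sketch; the chart path and names are placeholders taken from the log below):

```sh
# Render the chart locally and inspect which volume mount key was chosen.
# There is no flag to pass the target cluster's version here, so helm
# falls back to its built-in default KubeVersion (>= 1.14), and the
# subPathExpr branch wins even though the target cluster runs 1.13.
helm template . --name-template someName --namespace someNamespace \
  | grep -E 'subPath(Expr)?:'
```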
I haven’t tried adding a similar conditional to other sections of the deployment YAML, but I suspect the behaviour would be the same, i.e. that it isn’t evaluated.
Versions of helm and kubectl used locally are as follows.

Helm:

```
version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.16"}
```

kubectl:

```
Client Version: v1.20.4-dirty
Server Version: v1.13.9+icp-ee
```
Expected behavior

The template should be rendered without any error message, and the app should be synced to the namespace.
Version

```
argocd: v1.8.7+eb3d1fb.dirty
  BuildDate: 2021-03-07T19:33:57Z
  GitCommit: eb3d1fb84b9b77cdffd70b14c4f949f1c64a9416
  GitTreeState: dirty
  GoVersion: go1.16
  Compiler: gc
  Platform: darwin/amd64
argocd-server: v1.8.1+c2547dc
  BuildDate: 2020-12-10T02:59:21Z
  GitCommit: c2547dca95437fdbb4d1e984b0592e6b9110d37f
  GitTreeState: clean
  GoVersion: go1.14.12
  Compiler: gc
  Platform: linux/amd64
  Ksonnet Version: v0.13.1
  Kustomize Version: v3.8.1 2020-07-16T00:58:46Z
  Helm Version: v3.4.1+gc4e7485
  Kubectl Version: v1.17.8
  Jsonnet Version: v0.17.0
```
Logs

argocd-repo-server logs:

```
time="2021-03-21T14:25:31Z" level=info msg=Trace args="[helm template . --name-template someName --namespace someNamespace --api-versions v1 --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1beta1 --api-versions extensions/v1beta1 --api-versions apps/v1 --api-versions apps/v1beta2 --api-versions apps/v1beta1 --api-versions events.k8s.io/v1beta1 --api-versions authentication.k8s.io/v1 --api-versions authentication.k8s.io/v1beta1 --api-versions authorization.k8s.io/v1 --api-versions authorization.k8s.io/v1beta1 --api-versions autoscaling/v1 --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta2 --api-versions batch/v1 --api-versions batch/v1beta1 --api-versions batch/v2alpha1 --api-versions certificates.k8s.io/v1beta1 --api-versions networking.k8s.io/v1 --api-versions policy/v1beta1 --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1beta1 --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1beta1 --api-versions admissionregistration.k8s.io/v1beta1 --api-versions admissionregistration.k8s.io/v1alpha1 --api-versions apiextensions.k8s.io/v1beta1 --api-versions scheduling.k8s.io/v1beta1 --api-versions coordination.k8s.io/v1beta1 --api-versions servicecatalog.k8s.io/v1beta1 --api-versions clickhouse.altinity.com/v1 --api-versions clusterhealth.ibm.com/v1 --api-versions comcast.github.io/v1 --api-versions helm.fluxcd.io/v1 --api-versions icp.ibm.com/v1 --api-versions monitoring.coreos.com/v1 --api-versions monitoringcontroller.cloud.ibm.com/v1 --api-versions oidc.security.ibm.com/v1 --api-versions velero.io/v1 --api-versions volumesnapshot.external-storage.k8s.io/v1 --api-versions argoproj.io/v1alpha1 --api-versions audit.policies.ibm.com/v1alpha1 --api-versions authentication.istio.io/v1alpha1 --api-versions bitnami.com/v1alpha1 --api-versions certmanager.k8s.io/v1alpha1 --api-versions forecastle.stakater.com/v1alpha1 --api-versions iam.policies.ibm.com/v1alpha1 --api-versions kubeapps.com/v1alpha1 --api-versions openebs.io/v1alpha1 --api-versions policies.ibm.com/v1alpha1 --api-versions rbac.istio.io/v1alpha1 --api-versions config.istio.io/v1alpha2 --api-versions networking.istio.io/v1alpha3 --api-versions admission.certmanager.k8s.io/v1beta1 --api-versions securityenforcement.admission.cloud.ibm.com/v1beta1 --api-versions metrics.k8s.io/v1beta1 --include-crds]" dir="/tmp/https:__myRemoteRepName" operation_name="exec helm" time_ms=63.299456000000006
```
Comments
Addendum: There’s an open PR already: https://github.com/helm/helm/pull/9040
Thanks for this heads-up @tiagomeireles. We’ll start working on integrating Helm v3.6 and supporting the `--kube-version` flag soon.

Hate to lower everyone’s expectations here, but I think the earliest release this comes with is v2.1, not a patch release of v2.0. We had bad experiences upgrading tools to new minor versions in previous releases, especially with Kustomize.
helm/helm#9040 has been merged and Helm 3.6.0 has been released, which should unblock fixing this issue.
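For illustration, with that flag a chart can be rendered against the target cluster’s version without any cluster connection (a sketch using the deployment excerpt above; the chart path is a placeholder):

```sh
# Pin the version that .Capabilities.KubeVersion reports at render time
# (available since Helm 3.6). The >=1.14-0 check now correctly fails for
# a 1.13 target cluster, so the subPath branch is rendered instead.
helm template . --kube-version 1.13.9 | grep -E 'subPath(Expr)?:'
```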
I think the problem here is that Helm assumes a connection to the cluster when rendering the chart in order to set the value of the `.Capabilities.KubeVersion` built-in object. The `helm template` command allows us to explicitly set `.Capabilities.APIVersions` using command line options, but there is no such way to manually set the Kubernetes version.

The manifests are being rendered by the `argocd-repo-server`, and this component does not (and should not) have connection permissions to any Kubernetes cluster, so `helm template` isn’t able to pick up the Kubernetes version of the target cluster. I guess the correct fix would be in upstream Helm: allow specifying the target Kubernetes version when rendering the chart without a connection to the cluster, as this doesn’t seem to be possible at the moment. I’ve seen an issue on GitHub (https://github.com/helm/helm/issues/9431#issuecomment-792785470) where they don’t seem opposed to such a flag being integrated into the `helm template` command.

@mmerrill3 Argo CD v2 changed the default version of Helm to v3 - so unless you explicitly have `.spec.helm.version` set to `2`, it is probably rendered using Helm v3 (and thus passes a static version identifier for the target cluster’s K8s version).

@rcjames You can have a look at our v2.1 release milestone; it now has an (approximate) release date attached to it.
As for the docs, it’s not yet visible in the stable docs, but we have recently added something to the docs here: https://argo-cd.readthedocs.io/en/latest/developer-guide/release-process-and-cadence/
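For context, here is a sketch of where that Helm version pin lives in a declarative Application manifest (the field path is assumed from the Application CRD, i.e. `spec.source.helm.version`; repo, project, and destination values are reused from the report above):

```yaml
# Sketch: pinning the Helm version Argo CD uses to render this app.
# v3 is the default since Argo CD v2; set it to v2 to force Helm v2
# rendering instead.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myApp
spec:
  project: myProject
  source:
    repoURL: https://somewhere/myApp.git
    path: .
    targetRevision: HEAD
    helm:
      version: v3
  destination:
    server: myCluster
    namespace: myNamespace
```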
I think using the CLI argument `--validate` would do the trick. Rendering then takes longer, because helm contacts the API server and validates the manifests against it, but the capabilities are then set correctly.
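A sketch of the difference (the chart path is a placeholder):

```sh
# Without --validate: no cluster connection is made, and .Capabilities
# is filled from Helm's built-in defaults.
helm template .

# With --validate: helm contacts the API server to validate the rendered
# manifests, so .Capabilities reflects the real cluster (slower, and it
# requires cluster access).
helm template . --validate
```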
To test this, I used the argo-cd Helm chart and added this annotation to a random resource:

I then used a KinD 1.16 cluster.

Rendering without specifying `--validate` looks like this:

And with `--validate` present:
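For illustration, an annotation of this kind (hypothetical, not necessarily the exact one used in the test above) makes the difference visible:

```yaml
# Hypothetical debug annotation: renders the cluster version that
# .Capabilities reports. Without --validate this should show Helm's
# built-in default version; with --validate it should show the KinD
# cluster's v1.16.x.
metadata:
  annotations:
    debug.example.com/kube-version: {{ .Capabilities.KubeVersion.Version | quote }}
```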