argo-cd: Deploying from a helm repo to ArgoCD in a proxy environment results in timeout as Argo adds the stable repo by default

Checklist:

  • I’ve searched in the docs and FAQ for my answer: http://bit.ly/argocd-faq.
  • I’ve included steps to reproduce the bug.
  • I’ve pasted the output of argocd version.

Describe the bug

Currently, in a proxy environment, when I try to deploy an app from a private Helm repo, Argo CD times out and fails. Here are my findings on why it does that:

The Argo repo-server runs helm init --client-only --skip-refresh. This adds the stable repo (URL: https://kubernetes-charts.storage.googleapis.com) alongside the private repo I added to Argo, which is hosted in Artifactory. Access to the private repo doesn’t need the proxy, but since the Argo repo-server pod doesn’t have proxy vars set up, it can’t reach the stable repo when it runs a helm repo update, and hence times out and throws an error.

Logs attached below from repo-server pod.

To Reproduce

In a proxy environment, deploy an app from ArgoCD UI or using a declarative approach (application resource) from a private helm repo that is accessible from the cluster without proxy.

Expected behavior

The Argo repo-server shouldn’t add the stable repo (https://kubernetes-charts.storage.googleapis.com) by default, since in a proxy environment it won’t have access to it and will therefore time out running helm repo update.


Version

$ argocd version
argocd: v1.3.0+9f8608c
  BuildDate: 2019-11-13T01:49:01Z
  GitCommit: 9f8608c9fcb2a1d8dcc06eeadd57e5c0334c5800
  GitTreeState: clean
  GoVersion: go1.12.6
  Compiler: gc
  Platform: linux/amd64

Logs

From repo-server when I try to create an app from a helm repo

time="2019-12-05T19:59:04Z" level=info msg="manifest cache miss: 0.1.43/&ApplicationSource{RepoURL:https://artifactory.xxxxxxxxxxxxxxx,Path:,TargetRevision:0.1.43,Helm:nil,Kustomize:nil,Ksonnet:nil,Directory:nil,Plugin:nil,Chart:catalog,}"
time="2019-12-05T19:59:12Z" level=error msg="`helm repo update` failed timeout after 1m30s" execID=KOsLx
time="2019-12-05T19:59:12Z" level=error msg="finished unary call with code Unknown" error="`helm repo update` failed timeout after 1m30s" grpc.code=Unknown grpc.method=GetAppDetails grpc.request.deadline="2019-12-05T19:58:42Z" grpc.service=repository.RepoServerService grpc.start_time="2019-12-05T19:57:42Z" grpc.time_ms=90061.29 span.kind=server system=grpc
time="2019-12-05T19:59:12Z" level=info msg="helm init --client-only --skip-refresh" dir="/tmp/https:__artifactory.xxxxxxxxxxx" execID=GQGCe
time="2019-12-05T19:59:12Z" level=info msg="helm repo update" dir="/tmp/https:__artifactory.xxxxxxxxxxx" execID=W41qf
time="2019-12-05T19:59:13Z" level=info msg="manifest cache hit: &ApplicationSource{RepoURL:http://git.xxxxxxxxxxxxxx/catalog,TargetRevision:HEAD,Helm:&ApplicationSourceHelm{ValueFiles:[values.yaml],Parameters:[{apiserver.storage.etcd.persistence.enabled false false} {image quay.io/kubernetes-service-catalog/service-catalog:v0.2.1 false} {apiserver.replicas 1 false}],ReleaseName:service-catalog,Values:,},Kustomize:nil,Ksonnet:nil,Directory:nil,Plugin:nil,Chart:,}/2c9adc2df1596483711655aa1e5186e6b737992d"
time="2019-12-05T19:59:13Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=GenerateManifest grpc.request.deadline="2019-12-05T20:00:12Z" grpc.service=repository.RepoServerService grpc.start_time="2019-12-05T19:59:12Z" grpc.time_ms=791.33 span.kind=server system=grpc

Going into the repo server pod and running the same commands confirms this:

$ kubectl exec -it argocd-repo-server-6f4b75f4bb-q2f9x -n argocd  bash
$ helm init --client-only --skip-refresh
Creating /home/argocd/.helm
Creating /home/argocd/.helm/repository
Creating /home/argocd/.helm/repository/cache
Creating /home/argocd/.helm/repository/local
Creating /home/argocd/.helm/plugins
Creating /home/argocd/.helm/starters
Creating /home/argocd/.helm/cache/archive
Creating /home/argocd/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/argocd/.helm.
Not installing Tiller due to 'client-only' flag having been set
argocd@argocd-repo-server-6f4b75f4bb-q2f9x:~$ helm repo add <redacted>
"xx-helm-dev" has been added to your repositories
argocd@argocd-repo-server-6f4b75f4bb-q2f9x:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "xx-helm-dev" chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
        Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp 216.58.196.80:443: connect: connection timed out
Update Complete.

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 8
  • Comments: 27 (20 by maintainers)

Most upvoted comments

@afanrasool I work around this in my setup by setting the Unix proxy environment variables that are normally picked up by any library… so this is absolutely not a problem for me. Maybe we should have a documentation patch. Have time for it? The below goes into the deploy resources of argocd-repo-server, argocd-application-controller, and argocd-server:

env:
- name: HTTP_PROXY
  value: http://XXX-dsi-proxy:3128
- name: HTTPS_PROXY
  value: http://XXX-dsi-proxy:3128
- name: NO_PROXY
  value: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.cluster.local,argocd-repo-server,.test.dsp.XXX.de,.shared.dsp.XXX.de,.dst.kube
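
For anyone managing the Argo CD install manifests with Kustomize, the same env vars can be applied as a strategic-merge patch rather than editing the deployments by hand. This is only a sketch: the proxy URL and NO_PROXY list are placeholders to replace with your own values, and the same patch would be repeated for argocd-server and argocd-application-controller.

```yaml
# patch-repo-server-proxy.yaml
# Reference it from kustomization.yaml via patchesStrategicMerge.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: argocd-repo-server
        env:
        - name: HTTP_PROXY
          value: http://proxy.example.com:3128   # placeholder proxy URL
        - name: HTTPS_PROXY
          value: http://proxy.example.com:3128   # placeholder proxy URL
        - name: NO_PROXY
          # adjust to your cluster CIDRs and internal domains
          value: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.cluster.local,argocd-repo-server
```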

@zeph - Thank you for the workaround you posted above!

I’ve been trying to use Argo CD in our environment with Helm like this, following some pointers at Continuous Delivery with Helm and Argo CD:

  • Argo CD Application points to an “internal” Git repository.
  • The internal Git repository contains Helm chart files.
  • The Git Helm chart is a “live” chart reflecting exactly the values etc. to deploy in the cluster.
  • The Git Helm chart has dependencies pointing to a shared “base” subchart in an “external” Helm chart repository.
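
That dependency layout corresponds roughly to a Helm v2-style requirements.yaml in the live chart; the subchart name, version, and repository URL below are made-up placeholders, not the actual repos in question:

```yaml
# requirements.yaml in the "live" chart stored in the internal Git repo
dependencies:
- name: base                                      # hypothetical shared subchart
  version: "1.2.3"                                # placeholder version
  repository: "https://charts.example.com/helm"   # external repo, reachable only via the proxy
```

It is the helm repo add / update that Argo CD runs to resolve this external repository entry that needs the proxy.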

Our environment requires an HTTP proxy to reach the external Helm chart repository but not the internal Git repository. (This is just due to a mixture of SaaS and on-prem that our company happens to use.)

I was not able to get this to work without your workaround (adding proxy-related env vars to argocd-repo-server).

Note: I could use Argo CD’s Repositories feature to define a Helm-type Repository and give it a Proxy URL. But the Proxy URL only worked if I then used that Repository to create a Helm-type Application, which isn’t really what I wanted to do. When Argo CD processes Helm chart dependencies recursively, it runs a helm repo add, and that helm repo add correctly uses the authentication configured in the Repository (username and password) but not the Proxy URL configured in the same Repository.

It seems like Argo CD is close to handling this - if Argo CD could pick up on that existing Proxy setting in the Repository, when processing Helm chart dependencies.
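
For reference, such a Helm-type Repository with a proxy can also be defined declaratively. In newer Argo CD versions this is a labeled Secret; the field names here follow the declarative-setup convention, and the repo name, URL, and proxy are placeholders, so check the docs for your version before relying on this:

```yaml
# A hypothetical declarative Helm repository definition with a proxy
apiVersion: v1
kind: Secret
metadata:
  name: external-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: helm
  name: external                          # placeholder repo name
  url: https://charts.example.com/helm    # placeholder repo URL
  proxy: http://proxy.example.com:3128    # placeholder proxy URL
```

As noted above, though, this Proxy setting is not currently honored when the repo is only reached indirectly through chart dependencies.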

@zeph PR #3063 is of course very useful if you want to retrieve Helm charts from the internet via a forward proxy. However, IMHO it would additionally be very good to be able to disable the default Helm repo if you don’t need it. Maintaining a NO_PROXY list is very error-prone in my experience, so I don’t want one if I don’t need it.

@jkleinlercher Thanks for the feedback. I’m going to submit a corresponding PR in the next couple of days.

I’d suggest that we add no repos by default, because the Helm stable repo is no longer the recommended way to distribute your app.

This should probably be done with the Helm v3 support (#2383).

Would you like to submit a PR to fix this?