helmfile: Helmfile doesn't work without internet access

Hi there,

We are using helmfile to deploy our stack to an on-premise Kubernetes cluster. I have downloaded the charts for offline use and constructed the following helmfile:

context: kubernetes-admin@kubernetes                # kube-context (--kube-context)

releases:

  # Prometheus deployment
  - name: prom-helmf-ns-monitoring              # name of this release
    namespace: monitoring                       # target namespace
    chart: /opt/kubernetes/stable/prometheus                    # the chart being installed to create this release, referenced by local path
    values: ["values/values_prometheus_ns_monitoring.yaml"]
    set:                                        # values (--set)
      - name: rbac.create
        value: true
  # Grafana deployment
  - name: graf-helmf-ns-monitoring              # name of this release
    namespace: monitoring                       # target namespace
    chart: /opt/kubernetes/stable/grafana
    values: ["values/values_grafana_ns_monitoring.yaml"]

  # Controller pod (Nginx)
  - name: controller-pod-nginx                  # name of this release
    namespace: ingress-nginx                    # target namespace
    chart: /opt/kubernetes/stable/nginx-ingress                 # the chart being installed to create this release, referenced by local path
    values: ["values/values_nginx_ns_ingress-nginx.yaml"]
    set:                                        # values (--set)
      - name: rbac.create
        value: true
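
Not part of the original setup, but a quick way to sanity-check that the local chart copies render without any network access is to lint and template them. A minimal sketch, assuming the chart paths and values files above:

helm lint /opt/kubernetes/stable/prometheus
helm template /opt/kubernetes/stable/prometheus \
  --values values/values_prometheus_ns_monitoring.yaml > /dev/null && echo "prometheus renders OK"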

Everything is happening offline. My Kubernetes cluster is up and running, and I set up Tiller like this:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init   --service-account tiller --skip-refresh
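
Before running helmfile it may be worth confirming that Tiller actually came up (not part of the original steps, just plain kubectl/helm):

kubectl -n kube-system rollout status deploy/tiller-deploy
helm version --kube-context kubernetes-admin@kubernetes   # should report both Client and Server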

When I run helmfile:

 helmfile -f monitoring_deployment.yaml sync
exec: helm repo update --kube-context kubernetes-admin@kubernetes
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
        Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 10.0.2.3:53: read udp 10.0.2.15:53788->10.0.2.3:53: i/o timeout
Update Complete. ⎈ Happy Helming!⎈
exec: helm dependency update /opt/kubernetes/stable/prometheus --kube-context kubernetes-admin@kubernetes
No requirements found in /opt/kubernetes/stable/prometheus/charts.
exec: helm dependency update /opt/kubernetes/stable/grafana --kube-context kubernetes-admin@kubernetes
No requirements found in /opt/kubernetes/stable/grafana/charts.
exec: helm dependency update /opt/kubernetes/stable/nginx-ingress --kube-context kubernetes-admin@kubernetes
No requirements found in /opt/kubernetes/stable/nginx-ingress/charts.
exec: helm upgrade --install --reset-values prom-helmf-ns-monitoring /opt/kubernetes/stable/prometheus --namespace monitoring --values /opt/kubernetes/monitoring/values/values_prometheus_ns_monitoring.yaml --set rbac.create=true --kube-context kubernetes-admin@kubernetes
exec: helm upgrade --install --reset-values controller-pod-nginx /opt/kubernetes/stable/nginx-ingress --namespace ingress-nginx --values /opt/kubernetes/monitoring/values/values_nginx_ns_ingress-nginx.yaml --set rbac.create=true --kube-context kubernetes-admin@kubernetes
exec: helm upgrade --install --reset-values graf-helmf-ns-monitoring /opt/kubernetes/stable/grafana --namespace monitoring --values /opt/kubernetes/monitoring/values/values_grafana_ns_monitoring.yaml --kube-context kubernetes-admin@kubernetes
Error: UPGRADE FAILED: "graf-helmf-ns-monitoring" has no deployed releases
Error: UPGRADE FAILED: "prom-helmf-ns-monitoring" has no deployed releases
Error: UPGRADE FAILED: "controller-pod-nginx" has no deployed releases
err: exit status 1
err: exit status 1
err: exit status 1
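
As it turns out (see the workaround below), helm 2 reports "has no deployed releases" when a release name already exists but only in a FAILED state, and upgrade --install refuses to act on it. The stuck releases can be inspected with, for example:

helm list --all --kube-context kubernetes-admin@kubernetes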

VERSIONS:

 helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

and

 helmfile --version
helmfile version v0.18.0

Most upvoted comments

@mumoshu we have perhaps found a workaround:

  1. remove the stable repository entry from /root/.helm/repository/repositories.yaml so that it looks like this in the end (see the sketch after this list for an equivalent helm command):
 cat  /root/.helm/repository/repositories.yaml
apiVersion: v1
generated: 2018-05-30T22:05:42.031075224Z
repositories:
- caFile: ""
  cache: /root/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: http://127.0.0.1:8879/charts
  username: ""
  2. always check helm list -a
    • because there might be some failed releases - in my case there were exactly two
    • run helm delete <deployment_name> --purge for each of them
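
As an alternative to hand-editing repositories.yaml in step 1, removing the repository through helm itself should have the same effect. A sketch, assuming stable is the only remote repository entry:

helm repo remove stable
helm repo list    # only the "local" repository should remain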

Note: I ran helmfile -f monitoring_deployment.yaml sync many times, and many deployments failed.

So the point is:

if you are OFFLINE, you will fail in either of these two cases:

  1. If you run helmfile -f monitoring_deployment.yaml delete, keep in mind that your upcoming helmfile -f monitoring_deployment.yaml sync will FAIL, because some entries remain in helm list -a. You need to delete them manually!
  2. If you run helmfile -f monitoring_deployment.yaml sync for the very first time offline and a deployment fails for some reason, you HAVE to clean up the failed deployments shown by helm list -a before executing helmfile -f monitoring_deployment.yaml sync again.

Cleaning:

helm delete --purge <failed_deployment_name>
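
If several releases are stuck in FAILED state, they can be purged in one go. A sketch using helm 2's --failed and --short list flags (GNU xargs assumed; prints release names only, then purges each):

helm list --failed --short | xargs -r -n1 helm delete --purge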

@mumoshu if you could document this workaround, it would be awesome 😃