ingress-nginx: Helm chart installation failed with "error getting secret"

kubernetes version: v1.20.5

I set up the cluster with kubeadm on VirtualBox, with two instances corresponding to two nodes (master and worker1). When I try to install the ingress-nginx Helm chart as explained here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#install-chart, the installation fails with a timeout:

helm install ingress-nginx ingress-nginx/ingress-nginx --debug
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/yasuki/.cache/helm/repository/ingress-nginx-3.29.0.tgz

client.go:282: [debug] Starting delete for "ingress-nginx-admission" ServiceAccount
client.go:122: [debug] creating 1 resource(s)
client.go:282: [debug] Starting delete for "ingress-nginx-admission" ClusterRole
client.go:122: [debug] creating 1 resource(s)
client.go:282: [debug] Starting delete for "ingress-nginx-admission" ClusterRoleBinding
client.go:122: [debug] creating 1 resource(s)
client.go:282: [debug] Starting delete for "ingress-nginx-admission" Role
client.go:122: [debug] creating 1 resource(s)
client.go:282: [debug] Starting delete for "ingress-nginx-admission" RoleBinding
client.go:122: [debug] creating 1 resource(s)
client.go:282: [debug] Starting delete for "ingress-nginx-admission-create" Job
client.go:122: [debug] creating 1 resource(s)
client.go:491: [debug] Watching for changes to Job ingress-nginx-admission-create with timeout of 5m0s
client.go:519: [debug] Add/Modify event for ingress-nginx-admission-create: ADDED
client.go:558: [debug] ingress-nginx-admission-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:519: [debug] Add/Modify event for ingress-nginx-admission-create: MODIFIED
client.go:558: [debug] ingress-nginx-admission-create: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
helm.go:81: [debug] failed pre-install: timed out waiting for the condition
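
For reference, the install followed the chart's README linked above, which boils down to adding the chart repository and running the install; a minimal sketch of those steps (the repo URL is the one given in that README):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --debug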

And the log of the ingress-nginx-admission-create pod shows a secret-related error:

1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"err":"Get \"https://10.96.0.1:443/api/v1/namespaces/default/secrets/ingress-nginx-admission\": dial tcp 10.96.0.1:443: i/o timeout","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:109","time":"2021-04-10T18:39:44Z"}

kubectl get svc outputs:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   39m
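
Since the failing request targets exactly this ClusterIP, it can also be worth confirming what the service resolves to and that kube-proxy is healthy on both nodes (a sketch; the k8s-app=kube-proxy label is the one kubeadm uses by default):

# the kubernetes service should resolve to the API server's advertise address
# (typically <master-ip>:6443 on a kubeadm cluster)
kubectl get endpoints kubernetes
# kube-proxy must be running on every node for the 10.96.0.1 VIP to work
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide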

kubectl describe nodes outputs:

Name:               master
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"f6:61:f8:13:e2:69"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.2.15
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 11 Apr 2021 03:18:33 +0900
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master
  AcquireTime:     <unset>
  RenewTime:       Sun, 11 Apr 2021 03:55:58 +0900
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 11 Apr 2021 03:19:32 +0900   Sun, 11 Apr 2021 03:19:32 +0900   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sun, 11 Apr 2021 03:54:07 +0900   Sun, 11 Apr 2021 03:18:29 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 11 Apr 2021 03:54:07 +0900   Sun, 11 Apr 2021 03:18:29 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 11 Apr 2021 03:54:07 +0900   Sun, 11 Apr 2021 03:18:29 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 11 Apr 2021 03:54:07 +0900   Sun, 11 Apr 2021 03:18:50 +0900   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.56.101
  Hostname:    master
Capacity:
  cpu:                2
  ephemeral-storage:  19992176Ki
  hugepages-2Mi:      0
  memory:             3063872Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  18424789372
  hugepages-2Mi:      0
  memory:             2961472Ki
  pods:               110
System Info:
  Machine ID:                 42675530ea604c3dbce4842b381fc623
  System UUID:                0ddfb16b-1408-4d2f-800d-3ad209b350eb
  Boot ID:                    2e0c3a3b-43c3-4791-88db-42d3e3bf9663
  Kernel Version:             5.8.0-48-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.20.5
  Kube-Proxy Version:         v1.20.5
PodCIDR:                      10.114.0.0/24
PodCIDRs:                     10.114.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                              ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-74ff55c5b-bdgbm           100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     37m
  kube-system                 coredns-74ff55c5b-kbnrj           100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     37m
  kube-system                 etcd-master                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         37m
  kube-system                 kube-apiserver-master             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                 kube-controller-manager-master    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                 kube-flannel-ds-jhwb8             100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      36m
  kube-system                 kube-proxy-mxndj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37m
  kube-system                 kube-scheduler-master             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   100m (5%)
  memory             290Mi (10%)  390Mi (13%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  NodeHasSufficientPID     39m (x5 over 39m)  kubelet     Node master status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  39m (x6 over 39m)  kubelet     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m (x6 over 39m)  kubelet     Node master status is now: NodeHasNoDiskPressure
  Normal  Starting                 37m                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  37m                kubelet     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    37m                kubelet     Node master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     37m                kubelet     Node master status is now: NodeHasSufficientPID
  Normal  NodeNotReady             37m                kubelet     Node master status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  37m                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                37m                kubelet     Node master status is now: NodeReady
  Normal  Starting                 36m                kube-proxy  Starting kube-proxy.


Name:               worker1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker1
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"5a:37:b2:c7:d2:5d"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.2.15
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 11 Apr 2021 03:21:20 +0900
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  worker1
  AcquireTime:     <unset>
  RenewTime:       Sun, 11 Apr 2021 03:55:58 +0900
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 11 Apr 2021 03:21:23 +0900   Sun, 11 Apr 2021 03:21:23 +0900   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sun, 11 Apr 2021 03:51:38 +0900   Sun, 11 Apr 2021 03:21:20 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 11 Apr 2021 03:51:38 +0900   Sun, 11 Apr 2021 03:21:20 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 11 Apr 2021 03:51:38 +0900   Sun, 11 Apr 2021 03:21:20 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 11 Apr 2021 03:51:38 +0900   Sun, 11 Apr 2021 03:21:30 +0900   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.56.102
  Hostname:    worker1
Capacity:
  cpu:                2
  ephemeral-storage:  9736500Ki
  hugepages-2Mi:      0
  memory:             2034756Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  8973158386
  hugepages-2Mi:      0
  memory:             1932356Ki
  pods:               110
System Info:
  Machine ID:                 6e21546b0c124aef9886fb71ff51b2fd
  System UUID:                5c637ac8-a3cc-4ab0-9363-79e6d69ebba6
  Boot ID:                    0694fe02-21dd-40c5-a307-2878b09b1e5c
  Kernel Version:             5.8.0-48-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.20.5
  Kube-Proxy Version:         v1.20.5
PodCIDR:                      10.114.1.0/24
PodCIDRs:                     10.114.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                     ------------  ----------  ---------------  -------------  ---
  kube-system                 kube-flannel-ds-69m5t    100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      34m
  kube-system                 kube-proxy-nx485         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)
  memory             50Mi (2%)  50Mi (2%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:
  Type    Reason                   Age   From        Message
  ----    ------                   ----  ----        -------
  Normal  Starting                 34m   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  34m   kubelet     Node worker1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    34m   kubelet     Node worker1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     34m   kubelet     Node worker1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  34m   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 34m   kube-proxy  Starting kube-proxy.
  Normal  NodeReady                34m   kubelet     Node worker1 status is now: NodeReady

helm ls outputs:

NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS	CHART               	APP VERSION
ingress-nginx	default  	1       	2021-04-11 03:38:41.941837022 +0900 JST	failed	ingress-nginx-3.29.0	0.45.0
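
Since the release is stuck in a failed state, it likely has to be removed (together with the leftover pre-install hook job) before retrying the install; a minimal sketch, assuming the default namespace used above:

helm uninstall ingress-nginx
# the admission job is created by a pre-install hook and may be left behind
kubectl delete job ingress-nginx-admission-create --ignore-not-found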

My understanding is that all I have to do to install ingress-nginx is run that command. Is this correct? Also, how can I fix this error? Thank you in advance.

/triage support

Most upvoted comments

I had the same issue: the NGINX ingress was failing with the following error when the AKS cluster was behind Azure Firewall.

Here, 192.168.0.1 is the kubernetes service ClusterIP, which points to the API server:

1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"err":"Get \"https://192.168.0.1:443/api/v1/namespaces/nginx-ingress/secrets/ingress-nginx-admission\": EOF","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:232","time":"2021-12-22T11:37:03Z"}

The way I resolved this was to allow port 443 to the API server IP address/FQDN. A firewall rule opening port 443 to *.azmk8s.io didn't work.
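
A rough sketch of the lookup involved (the resource group and cluster names are placeholders; the exact firewall rule depends on how the Azure Firewall is managed, so it is only described in a comment):

# find the API server FQDN for the AKS cluster
az aks show --resource-group <rg> --name <cluster> --query fqdn -o tsv
# resolve the FQDN to the IP that the firewall rule must allow
nslookup <api-server-fqdn>
# then add a network rule on the Azure Firewall permitting TCP 443 from the
# AKS subnet to that IP/FQDN (an application rule to *.azmk8s.io was not enough)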