istio: istio ingressgateway connection refused on port 31380

Describe the bug Not able to hit the ingress gateway svc on port 31380; getting a connection refused error. I have deployed Istio in our k8s 1.10.8 cluster and all the pods are up and running. The gateway svc was deployed as type NodePort and is exposed on node port 31380. When I try to hit the service from the node or from a container, I get a connection refused error:

[centos@--kubernetes-88.88.88.88 ~]$ curl http://88.88.88.88:31380/hello
curl: (7) Failed connect to 88.88.88.88:31380; Connection refused

Expected behavior It should return the hello version:

[centos@myinstance ~]$ curl http://88.88.88.88:31380/hello
Hello version: v2, instance: helloworld-v2-7dd57c44c4-rhd67

Steps to reproduce the bug Istio Deployment steps

curl -L https://git.io/getLatestIstio | sh -

kubectl --kubeconfig=config apply -f istio-1.0.3/install/kubernetes/helm/istio/templates/crds.yaml

helm template istio-1.0.3/install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.type=NodePort > istio.yaml

kubectl --kubeconfig=config create namespace istio-system

kubectl --kubeconfig=config apply -f istio.yaml

Version K8s 1.10.8, Istio 1.0.3

Installation Same Istio deployment steps as above.

Environment CentOS

Cluster state AWS kubernetes cluster

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 7
  • Comments: 28 (1 by maintainers)

Most upvoted comments

You need to create a Gateway so the Istio ingress controller can bind to that port:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
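For the helloworld example used in this issue, the equivalent Gateway and VirtualService would look roughly like this; a minimal sketch following the standard Istio helloworld sample, where the resource names and the service port 5000 are assumptions from that sample:

```yaml
# Sketch: bind the ingress gateway's HTTP port 80 and route /hello
# to the helloworld service (names/ports assumed from the Istio sample).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway   # select the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000
```

Until a Gateway exists, Envoy in the ingress gateway pod has no listener on port 80, so the NodePort (31380) forwards to nothing and curl gets connection refused.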

I encountered the same issue.

It worked once; after I deleted the pod to see whether it would come up working again, it never did.

For the working case I see the following logs:


[2018-11-22 06:52:27.094][26][info][config] external/envoy/source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2018-11-22 06:53:27.095][26][info][main] external/envoy/source/server/drain_manager_impl.cc:63] shutting down parent after drain

// note the very large time gap between the entries above and below

[2018-11-22 09:38:38.362][26][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:500] add/update cluster outbound|3000||grafana.istio-system.svc.cluster.local starting warming
[2018-11-22 09:38:38.363][26][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:512] warming cluster outbound|3000||grafana.istio-system.svc.cluster.local complete
[2018-11-22 09:38:38.376][26][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '0.0.0.0_80'

It took almost three hours for the listener setup log to appear (far too long).

The second time I tried, it never showed the add/update listener log, and it has not worked for ten hours now.

In the non-working case, the log always stops at shutting down parent after drain.

Version K8s 1.11.3, Istio 1.0.3

Same here. Istio 1.0.5 on Kubernetes 1.11.5 (EKS). The workaround so far is to add a livenessProbe and readinessProbe on one of the ports of the ingress gateway deployment, so that the ingress gateway pod gets restarted when it fails.
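One way to express that workaround is a probe snippet on the istio-ingressgateway container; a sketch only, where the probed port (80, the gateway's HTTP port in the default 1.0.x chart) and all timing values are assumptions. A TCP probe fails with connection refused while Envoy has no listener bound, which is exactly the stuck state described above, so the kubelet restarts the pod:

```yaml
# Hypothetical probes for the istio-ingressgateway container
# (port and timings are illustrative, not from the official chart).
readinessProbe:
  tcpSocket:
    port: 80          # refused until Envoy binds a listener on 0.0.0.0_80
  initialDelaySeconds: 10
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 30
  failureThreshold: 3  # restart the pod after ~90s with no listener
```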