istio: Upstream error [503] since sidecar istio-proxy cannot connect to pilot
istio 0.8.0 (mTLS disabled, no control plane security), k8s 1.9.5, cilium 1.1.0
Steps: Redeploy a service with a new version
Expected result: the service can be accessed through istio-ingress
Actual result: it shows a 503 upstream error
The attached sidecar istio-proxy log shows it cannot connect to istio-pilot:
[2018-06-07 23:03:32.170][16][info][main] external/envoy/source/server/server.cc:396] starting main dispatch loop
[2018-06-07 23:03:37.170][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:235] gRPC config stream closed: 14, no healthy upstream
[2018-06-07 23:03:37.170][16][warning][upstream]
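When the sidecar reports "no healthy upstream" for its xDS stream, a reasonable first step is to confirm that pilot is actually up and reachable. A minimal sketch, assuming a default install in the istio-system namespace with the standard istio=pilot label and a discovery container:

```shell
# Check that the pilot pods are running (assumes the istio-system namespace)
kubectl -n istio-system get pods -l istio=pilot

# Check the discovery service the sidecars connect to
kubectl -n istio-system get svc istio-pilot

# Tail the discovery container logs for connection errors
kubectl -n istio-system logs -l istio=pilot -c discovery --tail=50
```

These commands only inspect state; they are safe to run repeatedly while reproducing the 503s.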
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 10
- Comments: 49 (15 by maintainers)
I am facing this issue on 1.0.3, and I want to know why pilot closes the connection every 5 minutes and how I can solve it.
I am facing the same issue, restarting pilot doesn’t seem to help.
I am facing a similar issue with 1.0.5. Even restarting the pilot pod didn’t solve my problem.
Yeah, deleting pilot seems to fix the issue, but it’s not really ideal 😄
P.S. Running 1.0.3 as well.
I see very similar behavior with 1.0.4. Restarting the pilot pod solves the problem.
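For anyone who wants to try the pilot restart mentioned above, a minimal sketch (assuming pilot runs in istio-system behind a Deployment with the standard istio=pilot label, so a deleted pod is recreated automatically):

```shell
# Delete the pilot pod; its Deployment will recreate it
kubectl -n istio-system delete pod -l istio=pilot

# Watch until the replacement pod is Running; sidecars then reconnect
kubectl -n istio-system get pods -l istio=pilot -w
```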
This solved my issue,
kubectl delete meshpolicy default
This does carry security implications and I assume a new meshpolicy will have to be defined - but it does stop the 503’s
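If the default MeshPolicy is deleted as above, a replacement can be re-applied afterwards. A minimal PERMISSIVE policy might look like this (a sketch, assuming the Istio 1.0.x authentication API; PERMISSIVE accepts both plaintext and mTLS traffic):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
```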
I probably found the cause of the problem.
The reason
I found that the cluster config in istio-proxy contains Kubernetes pod IPs that no longer exist, and the istio-proxy error log shows that all request traffic failing with 503 UF is sent to these non-existent IPs. So I think I found the reason for this problem.
How to solve this
In my case, I solved it by applying the DestinationRule again, after which Istio synced the right cluster.
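One way to confirm the stale endpoints from the sidecar's point of view, and then apply the fix described above (a hedged sketch: 15000 is the Envoy admin port in the Istio sidecar, and `my-pod` and `destination-rule.yaml` are placeholders for an affected pod and the existing DestinationRule manifest):

```shell
# Dump the clusters Envoy currently knows about; stale pod IPs show up here
kubectl exec my-pod -c istio-proxy -- curl -s localhost:15000/clusters

# Re-apply the DestinationRule so pilot pushes a fresh cluster config
kubectl apply -f destination-rule.yaml
```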
More questions
Why does Istio still retain the pod IP that has been stopped? This issue may relate to #9480.
The log
In my case, the cloudspidergateway has 2 pods in Kubernetes whose IPs are 10.244.25.4 and 10.244.14.34, but istio-proxy thinks there are three pods (10.244.25.4, 10.244.7.51, 10.244.14.34).
The error log of istio-proxy
The Kubernetes service describe output
The cluster info found in istio-proxy
I deployed consul as the service registry using 1.0.4 and hit the same issue.
Updated: My pilot container was stopped; after starting it again, the istio-proxy ingress works.