istio: Upstream error [503] since sidecar istio-proxy cannot connect to pilot

Environment:

  • Istio 0.8.0 (mTLS disabled, no control plane security)
  • Kubernetes 1.9.5
  • Cilium 1.1.0

Steps: redeploy a service with a new version.

Expected result: the service can be accessed through istio-ingress.

Actual result: it shows a 503 upstream error.

The attached sidecar istio-proxy log shows that it cannot connect to istio-pilot.

[2018-06-07 23:03:32.170][16][info][main] external/envoy/source/server/server.cc:396] starting main dispatch loop
[2018-06-07 23:03:37.170][16][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:235] gRPC config stream closed: 14, no healthy upstream
[2018-06-07 23:03:37.170][16][warning][upstream] 

UPSTREQM-istio-proxyerror.log

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 10
  • Comments: 49 (15 by maintainers)

Most upvoted comments

The logs seem to show that pilot closes the connection every 5 minutes, but the proxy reconnects immediately afterwards. It's actually a feature (an accidental one): it lets pilot connections get re-balanced. We're working on a better way to rebalance, and after that we'll fix this 5-minute reconnect.

AFAIK it should not cause any problems.
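
One way to confirm you are only seeing this benign periodic rebalance (a minimal sketch; the pod name is a placeholder and this assumes a standard sidecar injection):

kubectl logs <app pod> -c istio-proxy | grep 'gRPC config stream closed'
# A disconnect roughly every 5 minutes, immediately followed by a reconnect, matches
# the rebalance described above; a proxy that never reconnects points at the
# pilot-connectivity problem from the original report.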

I am facing this issue on 1.0.3, and I want to know why pilot closes the connection every 5 minutes and how I can solve it.

I am facing the same issue, restarting pilot doesn’t seem to help.

I am facing a similar issue with 1.0.5. Even restarting the pilot pod didn’t solve my problem.

I just ran into this on Istio 1.0.3. Not sure if I saw this on previous versions. Deleting the istio-pilot pods seems to help, but is probably only a temporary fix. The pods were only a day old (upgraded Istio 1.0.2 -> 1.0.3 yesterday) and I didn’t notice anything obviously bad in the pilot dashboard. Perhaps the recent activity in this issue is people running 1.0.3?

Yeah, deleting pilot seems to fix the issue, but it's not really ideal 😄

P.S. Running 1.0.3 as well.

I see very similar behavior with 1.0.4. Restarting the pilot pod solves the problem.
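
For reference, a sketch of the pilot restart being described here (assuming the default istio-system install, where the pilot pods carry the istio=pilot label):

kubectl delete pod -n istio-system -l istio=pilot
# The deployment recreates the pod; sidecars reconnect and receive a fresh config push.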

This solved my issue:

kubectl delete meshpolicy default

This does carry security implications, and I assume a new MeshPolicy will have to be defined, but it does stop the 503s.
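
If you go this route, a sketch of re-defining the default MeshPolicy afterwards (hypothetical values; this uses the Istio 1.0.x authentication API with PERMISSIVE mTLS, so adjust it to your own security requirements):

kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
EOF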

I probably found the cause of the problem.

The reason

I found that the cluster in istio-proxy contains a Kubernetes pod IP that no longer exists, and the istio-proxy error log shows that all request traffic failing with 503 UF is sent to this non-existent IP. So I think I found the reason for this problem.

How to solve this

In my case, I solved it by applying the DestinationRule again, and Istio synced the right cluster.
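
For illustration, a minimal sketch of what that re-apply could look like for this service (the host and subset labels are assumptions inferred from the cluster names below, not the poster's actual rule):

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cloudspidergateway
  namespace: cloudspider
spec:
  host: cloudspidergateway.cloudspider.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
# Re-applying the rule makes pilot push a fresh cluster/endpoint set to the sidecars.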

More question

Why does Istio still retain the pod IP that has been stopped? This issue may be related to #9480.

The log

In my case, the cloudspidergateway service has 2 pods in Kubernetes, whose IPs are 10.244.25.4 and 10.244.14.34, but istio-proxy thinks there are three pods (10.244.25.4, 10.244.7.51, 10.244.14.34).
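
One hedged way to cross-check this (the pod name is a placeholder; the Envoy admin port 15000 is the same one queried below):

kubectl get endpoints cloudspidergateway -n cloudspider
kubectl exec <app pod> -c istio-proxy -n cloudspider -- curl -s 127.0.0.1:15000/clusters | grep 'cloudspidergateway.*:9000::health'
# Any IP that appears in the Envoy cluster output but not in the Kubernetes
# endpoints is a stale entry, like 10.244.7.51 here.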

the error log of istio-proxy

docker logs <istio-proxy container> |grep 503 |grep UF

{"log":"[2019-01-23T19:28:44.859Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/ff808081662e858401687bfd7d51256fHTTP/1.1\" 503 UF 394 57 999 - \"-\" \"Java/1.8.0_181\" \"8c75b45d-5b51-98a8-b4f8-d378c675aae4\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41746\n","stream":"stdout","time":"2019-01-23T19:28:53.30835571Z"}
{"log":"[2019-01-23T19:28:55.415Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 464 57 506 - \"-\" \"Java/1.8.0_181\" \"0ae79043-9708-9545-9d8b-76c2482cef33\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41752\n","stream":"stdout","time":"2019-01-23T19:29:03.311469856Z"}
{"log":"[2019-01-23T19:29:09.249Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 466 57 495 - \"-\" \"Java/1.8.0_181\" \"a8592ead-409f-9a2a-a8c7-0ec6a25deb74\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41762\n","stream":"stdout","time":"2019-01-23T19:29:13.31274334Z"}
{"log":"[2019-01-23T19:29:27.814Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 469 57 570 - \"-\" \"Java/1.8.0_181\" \"1f52f1fd-81d8-9329-8fba-9783244bdff6\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41768\n","stream":"stdout","time":"2019-01-23T19:29:33.312063164Z"}
{"log":"[2019-01-23T19:29:38.026Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/ff808081662e858401687bfd7d51256fHTTP/1.1\" 503 UF 394 57 1001 - \"-\" \"Java/1.8.0_181\" \"cc896a0a-4a1a-98ba-8daa-3e5354b31545\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41772\n","stream":"stdout","time":"2019-01-23T19:29:43.310374027Z"}
{"log":"[2019-01-23T19:29:46.382Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 461 57 719 - \"-\" \"Java/1.8.0_181\" \"f5fbcb7b-aa36-9f18-aad5-04e9c3fc9d62\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41778\n","stream":"stdout","time":"2019-01-23T19:29:53.310893339Z"}
{"log":"[2019-01-23T19:29:58.154Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 446 57 723 - \"-\" \"Java/1.8.0_181\" \"87c376c9-ad42-9c70-8e44-cdd4d31c76a5\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41784\n","stream":"stdout","time":"2019-01-23T19:30:03.309634823Z"}
{"log":"[2019-01-23T19:30:10.029Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017726a6dHTTP/1.1\" 503 UF 438 57 692 - \"-\" \"Java/1.8.0_181\" \"139f3e91-adfc-9f7a-a0ab-b64fe20b9127\" \"cloudSpiderGateWay:9000\" \"10.244.7.51:9000\" outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local - 10.96.123.71:9000 10.244.13.6:41792\n","stream":"stdout","time":"2019-01-23T19:30:13.312569876Z"}
{"log":"[2019-01-23T19:30:21.735Z] \"POST /cloudSpiderAccessor/v1/executeReport/taskId/e4e4781d662e89310168770017^C

The Kubernetes service describe output:

kubectl describe service cloudspidergateway -n cloudspider
Name:              cloudspidergateway
Namespace:         cloudspider
Labels:            app=cloudspidergateway
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"cloudspidergateway"},"name":"cloudspidergate...
Selector:          app=cloudspidergateway
Type:              ClusterIP
IP:                10.96.123.71
Port:              http-9000-9000-ztdzg  9000/TCP
TargetPort:        9000/TCP
Endpoints:         10.244.14.34:9000,10.244.25.4:9000
Session Affinity:  None
Events:            <none>

The cluster info found in istio-proxy:

curl -s 127.0.0.1:15000/clusters | grep cloudspidergateway

outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_connections::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_pending_requests::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_requests::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_retries::3
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_connections::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_requests::1024
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_retries::3
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::added_via_api::true
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_active::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_connect_fail::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_total::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_active::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_error::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_success::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_timeout::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_total::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::health_flags::healthy
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::weight::1
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::region::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::zone::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::sub_zone::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::canary::false
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::success_rate::-1
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_active::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_connect_fail::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_total::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_active::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_error::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_success::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_timeout::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_total::0
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::health_flags::healthy
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::weight::1
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::region::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::zone::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::sub_zone::
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::canary::false
outbound|9000|v2|cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::success_rate::-1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_connections::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_pending_requests::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_requests::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_retries::3
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_connections::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_requests::1024
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_retries::3
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::added_via_api::true
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_active::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_connect_fail::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_total::12330
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_active::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_error::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_success::78050
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_timeout::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_total::78051
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::health_flags::healthy
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::weight::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::region::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::sub_zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::canary::false
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::success_rate::-1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_active::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_connect_fail::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::cx_total::2990
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_active::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_error::2
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_success::16921
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_timeout::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::rq_total::16923
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::health_flags::healthy
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::weight::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::region::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::sub_zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::canary::false
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.25.4:9000::success_rate::-1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_active::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_connect_fail::16933
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::cx_total::37860
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_active::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_error::16945
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_success::138003
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_timeout::0
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::rq_total::154948
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::health_flags::healthy
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::weight::1
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::region::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::sub_zone::
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::canary::false
outbound|9000||cloudspidergateway.cloudspider.svc.cluster.local::10.244.7.51:9000::success_rate::-1
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_connections::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_pending_requests::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_requests::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::default_priority::max_retries::3
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_connections::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_requests::1024
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::high_priority::max_retries::3
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::added_via_api::true
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_active::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_connect_fail::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::cx_total::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_active::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_error::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_success::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_timeout::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::rq_total::0
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::health_flags::healthy
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::weight::1
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::region::
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::zone::
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::sub_zone::
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::canary::false
outbound|9000|v1|cloudspidergateway.cloudspider.svc.cluster.local::10.244.14.34:9000::success_rate::-1


I deployed Consul as the service registry using 1.0.4 and hit the same issue:

[2018-11-22 04:27:34.055][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-11-22 04:27:34.055][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-11-22 04:27:34.055][17][info][config] external/envoy/source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2018-11-22 04:27:37.058][17][info][main] external/envoy/source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2018-11-22 04:27:42.899][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-11-22 04:27:42.899][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-11-22 04:28:04.983][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-11-22 04:28:04.983][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-11-22 04:28:19.070][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-11-22 04:28:19.070][17][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream


Update: my pilot container was stopped; after starting it again, the proxy works.
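
A couple of hedged checks for this situation (they assume the default istio-system install with the istio=pilot label, and a placeholder pod name):

kubectl get pods -n istio-system -l istio=pilot
kubectl logs <app pod> -c istio-proxy --tail=20
# If pilot is down or unreachable, the sidecar keeps logging
# "gRPC config stream closed: 14, no healthy upstream" as shown above.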

Is the log you attached from the istio-proxy or from pilot?

istio-proxy

Did you use ingress or gateway?

ingress

Could you please send logs from the gateway/ingress pods as well as from pilot, and also the output of kubectl get all?
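
For reference, a sketch of commands that would gather the requested material (it assumes the default istio-system namespace, the 1.0.x istio=ingressgateway / istio=pilot labels, and the discovery container name; adjust to your install):

kubectl logs -n istio-system -l istio=ingressgateway --tail=200   # or -l istio=ingress for the legacy ingress
kubectl logs -n istio-system -l istio=pilot -c discovery --tail=200
kubectl get all -n istio-system
kubectl get all -n <app namespace>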