envoy: Address field in listener not working (upstream connect error or disconnect/reset before headers)

I'm not sure if this is related to #326; I'm referencing that issue since it has the same error message, but on the face of it the two seem to be different.

I'm trying to get the address field in a listener to work, without success. I've written a simple shell script as a test harness that does the following (Linux only):

  1. Spins up an envoy container named envoyct1 with a default config and installs the curl and python packages in it.
  2. Uses nsenter to plumb two hardcoded IPs onto eth0 of that envoy container. These two IPs simulate two VIPs.
  3. Copies over a new envoy config with two listeners, one bound to each of the two IPs on port 80 (a sketch of such a config follows this list).
  4. Spins up a backend python server on port 9001 as the /service/1 backend.
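
For reference, a minimal sketch of what such a two-listener v1 config looks like. The first VIP and the /service/1 route are taken from the transcript below; everything else (names, timeouts, the backend address, the admin port) is illustrative and may differ from the attached harness. The second listener would be identical except for binding the second VIP.

{
  "listeners": [
    {
      "address": "tcp://192.45.67.90:80",
      "filters": [
        {
          "type": "read",
          "name": "http_connection_manager",
          "config": {
            "codec_type": "auto",
            "stat_prefix": "ingress_http",
            "route_config": {
              "virtual_hosts": [
                {
                  "name": "backend",
                  "domains": ["*"],
                  "routes": [
                    { "prefix": "/service/1", "cluster": "service1" }
                  ]
                }
              ]
            },
            "filters": [
              { "type": "decoder", "name": "router", "config": {} }
            ]
          }
        }
      ]
    }
  ],
  "admin": {
    "access_log_path": "/dev/null",
    "address": "tcp://0.0.0.0:8001"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "service1",
        "connect_timeout_ms": 250,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [ { "url": "tcp://127.0.0.1:9001" } ]
      }
    ]
  }
}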

When I docker exec into envoyct1 and run curl <VIP1>/service/1, I expect to get a 404. Instead, I see this error:

bash-4.3# curl 192.45.67.90/service/1
upstream connect error or disconnect/reset before headersbash-4.3#
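
That body is what envoy serves with a 503 status when the upstream connection fails or is reset; re-running the request verbosely confirms the status code:

curl -v 192.45.67.90/service/1   # response line should read HTTP/1.1 503 Service Unavailable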

If I spin up a python server on a different port and curl that IP:port directly, it works:

bash-4.3# python -m SimpleHTTPServer 9002
Serving HTTP on 0.0.0.0 port 9002 ...
192.45.67.90 - - [22/Apr/2017 01:44:27] "GET / HTTP/1.1" 200 -

bash-4.3# curl 192.45.67.90:9002
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a>
<li><a href="bin/">bin/</a>
<li><a href="dev/">dev/</a>
<li><a href="etc/">etc/</a>
<li><a href="home/">home/</a>
<li><a href="lib/">lib/</a>
<li><a href="lib64/">lib64/</a>
<li><a href="media/">media/</a>
<li><a href="mnt/">mnt/</a>
<li><a href="proc/">proc/</a>
<li><a href="root/">root/</a>
<li><a href="run/">run/</a>
<li><a href="sbin/">sbin/</a>
<li><a href="srv/">srv/</a>
<li><a href="sys/">sys/</a>
<li><a href="tmp/">tmp/</a>
<li><a href="usr/">usr/</a>
<li><a href="var/">var/</a>
</ul>
<hr>
</body>
</html>
bash-4.3#

So this doesn’t look like a network configuration issue (the curl is being issued from inside the envoy container).

Is this an envoy config issue, or something else?

When I tried to debug this with gdb and a debug envoy build, it looked like the worker thread handling the connection request, somewhere in the connection_manager_impl.cc call chain, sees a socket close event and so spits out this error. I'm not sure why it should see a socket close event…
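
For anyone else chasing this, envoy's log level can surface the same close event without gdb; a minimal sketch, assuming the container's config lives at /etc/envoy.json (the actual path depends on the harness):

# inside envoyct1, stop the running envoy, then restart it with verbose logging
envoy -c /etc/envoy.json -l trace

Trace-level logs should show the upstream connection attempt and the close/reset that produces the 503.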

Am I doing something wrong with the config? Can someone please take a look?

BTW, it doesn't matter whether I have one or two listeners in my config file; the result is the same. It also doesn't matter whether I plumb the VIPs or not: binding a simple loopback IP such as 127.0.0.10 yields the same result.

I'm attaching the harness as a zip file. Unzip it and simply run ./setup_ifaces.sh, and it'll spin up an envoy alpine container and do the rest of the plumbing. If you run ./setup_ifaces.sh ubuntu, it will pull the lyft/envoy ubuntu image instead and do the same setup there.
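
In short:

./setup_ifaces.sh          # default: alpine-based envoy container
./setup_ifaces.sh ubuntu   # pulls the lyft/envoy ubuntu image instead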

So basically, this happens on both ubuntu and alpine, and on both loopback and eth0. Any pointers/help would be much appreciated.

Thanks!

setup_envoy_multiple_listener.zip

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 19 (9 by maintainers)

Most upvoted comments

I resolved my issue by removing http2_protocol_options: {}. Setting http2_protocol_options on a cluster tells envoy to speak HTTP/2 to that upstream; if the upstream only speaks HTTP/1.1, the connection gets reset and envoy returns exactly this "upstream connect error" response.

In my istio 0.5.1 setup, there is no http2_protocol_options: {} at all:

kubectl exec -ti istio-pilot-676d495bf8-9c2px -c istio-proxy -n istio-system -- cat /etc/istio/proxy/envoy_pilot.json

{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:15003",
      "name": "tcp_0.0.0.0_15003",
      "filters": [
        {
          "type": "read",
          "name": "tcp_proxy",
          "config": {
            "stat_prefix": "tcp",
            "route_config": {
              "routes": [
                { "cluster": "in.8080" }
              ]
            }
          }
        }
      ],
      "bind_to_port": true
    }
  ],
  "admin": {
    "access_log_path": "/dev/stdout",
    "address": "tcp://127.0.0.1:15000"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "in.8080",
        "connect_timeout_ms": 1000,
        "type": "static",
        "lb_type": "round_robin",
        "hosts": [
          { "url": "tcp://127.0.0.1:8080" }
        ]
      }
    ]
  }
}

I’m running into the same issue today.

I can access the service from the container using curl, but I am not able to access it through the envoy container via http://localhost:10000/symphony. My envoy.yaml:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/symphony"
                route:
                  cluster: symphony
              - match:
                  prefix: "/service/2"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: symphony
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: 10.129.16.178
        port_value: 8080
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: service2
        port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 1
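
Note that both clusters above set http2_protocol_options: {}, the exact field the top comment removed to resolve this error. If the backends speak plain HTTP/1.1, the fix is simply to drop that line; a sketch of the symphony cluster with it removed (same addresses as above):

clusters:
- name: symphony
  connect_timeout: 0.25s
  type: STATIC
  lb_policy: round_robin
  # no http2_protocol_options: envoy uses HTTP/1.1 toward this upstream
  hosts:
  - socket_address:
      address: 10.129.16.178
      port_value: 8080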

@vijayendrabvs I am running into the same problem. My golang service is accessible from within the service container on port 9096 but not accessible through the envoy front-proxy container, with exactly the same response as you reported.

Can you provide any details on the resolution please?