istio: Any HTTP service will block all HTTPS/TCP traffic on the same port
(Assumes ALLOW_ANY mode)
Normally, traffic to external HTTPS services works because of the ALLOW_ANY mode changes we added. However, when an HTTP service is added on port 443 (or any port, but generally this happens on 443), this breaks: we add a new 0.0.0.0_443 listener, which matches all traffic on that port and directs it to HTTP routes, so TLS connections fail.
To reproduce:
curl https://www.google.com # works
then
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: https-breaker
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: http
    protocol: HTTP
  resolution: DNS
EOF
now
$ curl https://www.google.com
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol.
One possible idea is to do something similar to protocol sniffing: detect TLS connections and direct them to the passthrough cluster. This may be possible with the TLS inspector listener filter.
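A minimal sketch of that idea, assuming Envoy's tls_inspector listener filter and a hand-written listener. This is illustrative only, not the config Pilot actually generates:

```yaml
# Illustrative only: an Envoy listener using the tls_inspector listener
# filter so TLS traffic falls into a passthrough TCP proxy chain, while
# plaintext traffic still reaches the HTTP routes.
name: 0.0.0.0_443
address:
  socket_address: { address: 0.0.0.0, port_value: 443 }
listener_filters:
- name: envoy.filters.listener.tls_inspector   # detects a TLS ClientHello
filter_chains:
- filter_chain_match:
    transport_protocol: tls                    # set by tls_inspector on TLS connections
  filters:
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: passthrough
      cluster: PassthroughCluster              # Istio's catch-all egress cluster
- filters:
  # default chain: plaintext HTTP handled by the HTTP connection manager
  # (HTTP filter configuration elided)
  - name: envoy.filters.network.http_connection_manager
```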
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 18
- Comments: 56 (37 by maintainers)
Commits related to this issue
- Reject HTTP listeners on port 443 This is not a complete fix for https://github.com/istio/istio/issues/16458 but does help resolve some of the common error cases. — committed to howardjohn/istio by howardjohn 5 years ago
- Reject HTTP listeners on port 443 (#17515) * Reject HTTP listeners on port 443 This is not a complete fix for https://github.com/istio/istio/issues/16458 but does help resolve some of the common err... — committed to istio/istio by howardjohn 5 years ago
- Reject HTTP listeners on port 443 (#17515) * Reject HTTP listeners on port 443 This is not a complete fix for https://github.com/istio/istio/issues/16458 but does help resolve some of the common err... — committed to howardjohn/istio by howardjohn 5 years ago
- Reject HTTP listeners on port 443 (#17515) (#18540) * Reject HTTP listeners on port 443 This is not a complete fix for https://github.com/istio/istio/issues/16458 but does help resolve some of the c... — committed to istio/istio by howardjohn 5 years ago
- Remove HTTP on 443 hack, extend testing Fixes https://github.com/istio/istio/issues/16458 In a recent PR we fixed this bug, so now this code is no longer needed. to verify this, I extended our testi... — committed to howardjohn/istio by howardjohn 4 years ago
- Remove HTTP on 443 hack, extend testing (#22707) * Remove HTTP on 443 hack, extend testing Fixes https://github.com/istio/istio/issues/16458 In a recent PR we fixed this bug, so now this code is no... — committed to istio/istio by howardjohn 4 years ago
We are seeing similar behavior. We have a Kubernetes Service deployed in our cluster that has a port listening on port 443, and the port is named grpc. When that Service is present, any pod that has the istio sidecar injected is unable to make outbound requests. But if I rename the Service’s port from grpc to https-grpc, then it works, and the same curl command above succeeds. So it’s not just naming the port http that triggers this issue.

I need to +1 this issue.
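For reference, a hedged sketch of the rename workaround described above (all names are hypothetical): the https- prefix tells Istio the port carries TLS, so no wildcard HTTP listener is generated for it.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service        # hypothetical name
spec:
  selector:
    app: my-grpc-app           # hypothetical selector
  ports:
  - name: https-grpc           # was "grpc"; the https- prefix marks the port as TLS
    port: 443
    targetPort: 8443
```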
Specifically, what we are seeing in our environment is that whenever a service with an http port is defined anywhere in the mesh, you cannot access anything outside the mesh on that same port when the traffic is TLS.
As an example, our outbound proxies listen on port 8080, which many services in the mesh also like to listen on, and those services are correctly named http-<whatever>. Now when a pod tries to use outboundproxy.company.com:8080 (via http(s)_proxy env vars that are already set), it only works for reaching HTTP sites (and the Server: header in the traffic confirms that some service’s Envoy intercepted it, but it still worked). Accessing an HTTPS site fails.
I’m using the proxy example here, but this means that any user in a cluster can define a service with an http port on any port number, and that will break anyone trying to access external services over TLS on that same port.
Having to manage ServiceEntries for this is not a manageable solution, even more so because resolution: DNS does not appear to be working either – we’re having to add the specific target IPs to addresses instead of simply specifying the hostname in the hosts: section.
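A hedged sketch of the workaround just described, pinning explicit target IPs instead of relying on resolution: DNS. The host and IPs are placeholders:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api           # hypothetical name
spec:
  hosts:
  - api.example.com            # placeholder hostname
  addresses:
  - 203.0.113.10/32            # explicit target IP added manually
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: STATIC           # static endpoints instead of DNS
  endpoints:
  - address: 203.0.113.10
```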
We’ve also experimented with includeIPRanges on the helm install, and while that sets the range on the init container’s istio-iptables-go command, it still does not take effect. However, if we set it with an includeOutboundIPRanges annotation, it does work. Because the --set option is seemingly not functional, having to dictate that every deployment include this annotation is also not ideal.
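The per-pod annotation form that the comment reports as working looks like the following sketch; the CIDR is a placeholder for the cluster’s internal ranges, and the pod names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: "10.0.0.0/8"   # placeholder CIDR
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
```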
Note: We are using istio-cni and see this behavior with both 1.4.4 and 1.4.5.
@vadimeisenbergibm my understanding is that if you try to talk to an HTTPS service in-mesh it should work, because we will generate a listener with an SNI match that matches before the HTTP routes (which is where the issue arises). Untested though.
It’s fine to say that you can’t have HTTP on port 443, but if that is the case we should explicitly block it – otherwise one service in some random namespace can destroy a whole cluster.
There is no mitigation that I know of right now other than:
Are there any mitigations for this? We need to be able to prevent people from accidentally breaking all services running istio in the cluster just by adding a kubernetes service.
I hit a similar problem on 1.5.0: a headless service’s port name without a protocol prefix will block external HTTPS traffic on the same port. It looks like a headless Service without a protocol prefix creates an LDS listener 0.0.0.0_port, and this listener does not properly handle external HTTPS. I can reproduce it.
The headless Service YAML is as follows:
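The original manifest is not reproduced here; a minimal sketch of such a headless Service, assuming a port name with no protocol prefix (all names hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-headless       # hypothetical
spec:
  clusterIP: None              # headless
  selector:
    app: example               # hypothetical selector
  ports:
  - name: foo                  # no protocol prefix, so the protocol cannot be inferred
    port: 8443
```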
The LDS is as follows. If I change the headless Service’s port name to include a protocol prefix (e.g. tcp, http, https), this listener disappears and external HTTPS passes through.
Yes, and we added a sniffing step to the wildcard listeners; if TCP is matched, we just pass through.
I’m a little confused by today’s updates… is this going to fix the issue of outbound TLS connections when they happen to be on the same port as any http-* named service in the mesh? The port number does not matter here…
@yxue for these we just need to send to the passthrough cluster, though, since this is just for external ALLOW_ANY traffic. If a user has an explicitly defined HTTPS service, it should still work, because it will have a filter chain match on the SNI name, right?
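For illustration, a hedged sketch of such an explicitly defined external TLS service; with protocol: TLS, the generated listener gets a filter chain matched on the SNI server name (the host is a placeholder):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-tls           # hypothetical
spec:
  hosts:
  - www.example.com            # placeholder; matched via SNI
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS              # TLS, not HTTP, so no wildcard HTTP routes
  resolution: DNS
```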