istio: Mesh internal calls break with TPROXY mode
Bug description
Under a default profile setup, the sample `sleep` can call the sample `httpbin`, but calling `httpbin-tproxy`, an `httpbin` deployment with the `sidecar.istio.io/interceptionMode: TPROXY` annotation added, fails with the error message below. The full YAML is given under the Steps to reproduce the bug section.

```
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```
[ ] Docs [ ] Installation [x] Networking [ ] Performance and Scalability [ ] Extensions and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure
Expected behavior

`sleep` should get the same result for calling `httpbin-tproxy` as for `httpbin`.
Steps to reproduce the bug

I run the steps below in a single-node Linode Kubernetes cluster.
- Deploy the Istio operator with

```shell
istioctl operator init
kubectl apply -f istio.yaml
```

The content of `istio.yaml` is shown below.
```yaml
apiVersion: install.istio.io/v1alpha1
metadata:
  namespace: istio-system
  name: istio-control-plane
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
  profile: default
```
- Run `kubectl label namespace default istio-injection=enabled`.
- Run `kubectl apply -f sleep.yaml`. The Istio sample `sleep.yaml` is used.
- Run `kubectl apply -f httpbin.yaml`. The Istio sample `httpbin.yaml` is used.
- Run `kubectl apply -f httpbin-tproxy.yaml`. The content of `httpbin-tproxy.yaml` is shown below.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin-tproxy
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-tproxy
  labels:
    app: httpbin-tproxy
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin-tproxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-tproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin-tproxy
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/interceptionMode: TPROXY
      labels:
        app: httpbin-tproxy
        version: v1
    spec:
      serviceAccountName: httpbin-tproxy
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
```
- Run

```shell
kubectl exec -it $(kubectl get pods -l app=sleep -o jsonpath='{.items[0].metadata.name}') -c sleep -- curl httpbin.default:8000/ip
```

It returns the expected result.

```
{
  "origin": "127.0.0.1"
}
```
- Run

```shell
kubectl exec -it $(kubectl get pods -l app=sleep -o jsonpath='{.items[0].metadata.name}') -c sleep -- curl httpbin-tproxy.default:8000/ip
```

It returns the error message below.

```
upstream connect error or disconnect/reset before headers. reset reason: connection failure
```
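For scripting the reproduction, a small helper like the following can distinguish the two outcomes above. This is only a sketch: the function name is illustrative, and the matched strings are taken verbatim from the outputs shown in the previous steps.

```shell
# Envoy's local reply when the upstream connection fails
# (verbatim prefix of the error message observed above).
ENVOY_RESET="upstream connect error or disconnect/reset before headers"

# classify_reply BODY - classify a curl response body from the sleep pod.
classify_reply() {
  case "$1" in
    *"$ENVOY_RESET"*) echo "envoy-connection-failure" ;;  # TPROXY pod today
    *'"origin"'*)     echo "httpbin-ok" ;;                # normal httpbin /ip reply
    *)                echo "unrecognized" ;;
  esac
}
```

With the outputs above, `classify_reply` reports `httpbin-ok` for the plain `httpbin` call and `envoy-connection-failure` for `httpbin-tproxy`.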
Version (include the output of `istioctl version --remote`, `kubectl version --short`, and `helm version` if you used Helm)
Istio version

```
client version: 1.6.10
control plane version: 1.6.10
data plane version: 1.6.10 (5 proxies)
```
Kubernetes version

```
Client Version: v1.19.2
Server Version: v1.18.8
```
How was Istio installed? Installed with the Istio operator.
Environment where bug was observed (cloud vendor, OS, etc) Linode Kubernetes Engine
The dump file `istio-dump.tar.gz` was created with `./dump_kubernetes.sh -n default -n istio-system -n istio-operator -z`.
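For triaging, commands like the following can help narrow down where TPROXY interception fails. This is a sketch, not part of the original report: it assumes a running cluster with the manifests above applied, and reuses the `app=httpbin-tproxy` label from the repro steps.

```shell
# Pick the failing TPROXY pod (assumes a live cluster with the repro applied).
POD=$(kubectl get pods -l app=httpbin-tproxy -o jsonpath='{.items[0].metadata.name}')

# Confirm the sidecar actually picked up TPROXY interception mode.
kubectl get pod "$POD" \
  -o jsonpath='{.metadata.annotations.sidecar\.istio\.io/interceptionMode}'

# Inspect the proxy's inbound listeners and recent sidecar logs.
istioctl proxy-config listeners "$POD"
kubectl logs "$POD" -c istio-proxy --tail=50
```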
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 18 (17 by maintainers)
same on 1.17.2 sad