istio: unable to preserve source ip

I tried the ISTIO_INBOUND_INTERCEPTION_MODE: TPROXY env var and the sidecar.istio.io/interceptionMode=TPROXY annotation, and made sure the proxy runs as root; however, I still see 127.0.0.1 as the source IP.
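
For reference, the annotation is applied on the workload's pod template roughly like this (a minimal sketch; the Deployment spec below is illustrative, not my exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoserver
  template:
    metadata:
      labels:
        run: echoserver
      annotations:
        # ask the sidecar injector for TPROXY interception instead of the default REDIRECT
        sidecar.istio.io/interceptionMode: TPROXY
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080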

$ kubectl exec -it echoserver-fd4ff9bc9-zfxwh -c istio-proxy sh
# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:29 ?        00:00:00 /pause
root        55     0  0 19:29 ?        00:00:00 nginx: master process nginx -g daemon off;
nobody      60    55  0 19:29 ?        00:00:00 nginx: worker process
root        61     0  0 19:29 ?        00:00:00 /usr/local/bin/pilot-agent proxy sidecar --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster istio-proxy --drainDuration 45s 
root        75    61  0 19:29 ?        00:00:00 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-proxy --s
root        85     0  0 19:30 pts/0    00:00:00 sh
root        89    85  0 19:30 pts/0    00:00:00 ps -ef
# exit

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:29:52
$ curl 169.60.83.12:80/                                        
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://169.60.83.12:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
cache-control=max-stale=0
content-length=0
host=169.60.83.12
user-agent=curl/7.54.0
x-b3-sampled=1
x-b3-spanid=b1398c24c1785342
x-b3-traceid=b1398c24c1785342
x-bluecoat-via=ccc09ce496fc2951
x-envoy-decorator-operation=guestbook
x-envoy-expected-rq-timeout-ms=15000
x-envoy-external-address=129.42.208.183
x-forwarded-for=9.27.120.57, 129.42.208.183
x-forwarded-proto=http
x-request-id=fbfaff74-7a05-91fb-9731-cd436a480956
BODY:
-no body in request-%

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:30:10
$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
echoserver-fd4ff9bc9-zfxwh               2/2       Running   0          56s
guestbook-service-64f4fc5fbc-rd55j       2/2       Running   0          10d
guestbook-ui-7b48846f9-fgtt6             2/2       Running   0          10d
helloworld-service-v1-f4f4dfd56-cqr7z    2/2       Running   0          10d
helloworld-service-v2-78b9497478-cz64x   2/2       Running   0          10d
mysql-7b877b4cf4-z2nrl                   2/2       Running   0          15d
redis-848b98bc8b-h878m                   2/2       Running   0          15d

~/Downloads/istio-release-0.8-20180515-17-26/install/kubernetes ⌚ 15:30:13
$ kubectl describe pod echoserver-fd4ff9bc9-zfxwh 
Name:           echoserver-fd4ff9bc9-zfxwh
Namespace:      default
Node:           10.188.52.41/10.188.52.41
Start Time:     Thu, 17 May 2018 15:29:17 -0400
Labels:         pod-template-hash=980995675
                run=echoserver
Annotations:    sidecar.istio.io/interceptionMode=TPROXY
                sidecar.istio.io/status={"version":"c883147438ec6b276f8303e997b74ece3067ebb275c09015f195492aab8f445a","initContainers":["istio-init","enable-core-dump"],"containers":["istio-proxy"],"volumes":["istio-...
Status:         Running
IP:             172.30.53.20
Controlled By:  ReplicaSet/echoserver-fd4ff9bc9
Init Containers:
  istio-init:
    Container ID:  docker://fd6f1965f5da1e3b36ff4524a996a9731492351da812793a996e3f1f8246fd50
    Image:         gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxy_init@sha256:a591ef52693e48885a1d47ee9a3f85c1fc2cf639bfb09c5b295b443e964d7f5e
    Port:          <none>
    Args:
      -p
      15001
      -i
      *
      -x
      
      -b
      8080,
      -d
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 May 2018 15:29:23 -0400
      Finished:     Thu, 17 May 2018 15:29:25 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      ISTIO_META_INTERCEPTION_MODE:     TPROXY
      ISTIO_INBOUND_INTERCEPTION_MODE:  TPROXY
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
  enable-core-dump:
    Container ID:  docker://a5f756c96cb1f97a4b5e8f5651163e2119e035af0b3c8247159f6cdb7e524f6d
    Image:         gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxy_init@sha256:a591ef52693e48885a1d47ee9a3f85c1fc2cf639bfb09c5b295b443e964d7f5e
    Port:          <none>
    Command:
      /bin/sh
    Args:
      -c
      sysctl -w kernel.core_pattern=/etc/istio/proxy/core.%e.%p.%t && ulimit -c unlimited
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 May 2018 15:29:26 -0400
      Finished:     Thu, 17 May 2018 15:29:27 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
Containers:
  echoserver:
    Container ID:   docker://3164cf612999fb5a7feb1a677551fc1578dc7cda00cac26143f3bfb2b7dc8365
    Image:          gcr.io/google_containers/echoserver:1.4
    Image ID:       docker-pullable://gcr.io/google_containers/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
    Port:           8080/TCP
    State:          Running
      Started:      Thu, 17 May 2018 15:29:28 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
  istio-proxy:
    Container ID:  docker://47d7140c535af93cccbe6105338331028ca823abdaebbb6e3d31ae5bec87b606
    Image:         gcr.io/istio-release/proxyv2:release-0.8-20180515-17-26
    Image ID:      docker-pullable://gcr.io/istio-release/proxyv2@sha256:5f0836dfc280e0536d875a541e68ed512af73f62017c3c74f0f4981002ef601d
    Port:          <none>
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      istio-proxy
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      10s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --statsdUdpAddress
      istio-statsd-prom-bridge.istio-system:9125
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Thu, 17 May 2018 15:29:29 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAME:                         echoserver-fd4ff9bc9-zfxwh (v1:metadata.name)
      POD_NAMESPACE:                    default (v1:metadata.namespace)
      INSTANCE_IP:                       (v1:status.podIP)
      ISTIO_META_POD_NAME:              echoserver-fd4ff9bc9-zfxwh (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:     TPROXY
      ISTIO_INBOUND_INTERCEPTION_MODE:  TPROXY
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lg1j3 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
  default-token-lg1j3:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lg1j3
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                   Message
  ----    ------                 ----  ----                   -------
  Normal  Scheduled              1m    default-scheduler      Successfully assigned echoserver-fd4ff9bc9-zfxwh to 10.188.52.41
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "istio-envoy"
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "istio-certs"
  Normal  SuccessfulMountVolume  1m    kubelet, 10.188.52.41  MountVolume.SetUp succeeded for volume "default-token-lg1j3"
  Normal  Pulled                 1m    kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26" already present on machine
  Normal  Created                1m    kubelet, 10.188.52.41  Created container
  Normal  Started                59s   kubelet, 10.188.52.41  Started container
  Normal  Started                56s   kubelet, 10.188.52.41  Started container
  Normal  Pulled                 56s   kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxy_init:release-0.8-20180515-17-26" already present on machine
  Normal  Created                56s   kubelet, 10.188.52.41  Created container
  Normal  Pulled                 54s   kubelet, 10.188.52.41  Container image "gcr.io/google_containers/echoserver:1.4" already present on machine
  Normal  Created                54s   kubelet, 10.188.52.41  Created container
  Normal  Started                54s   kubelet, 10.188.52.41  Started container
  Normal  Pulled                 54s   kubelet, 10.188.52.41  Container image "gcr.io/istio-release/proxyv2:release-0.8-20180515-17-26" already present on machine
  Normal  Created                54s   kubelet, 10.188.52.41  Created container
  Normal  Started                53s   kubelet, 10.188.52.41  Started container

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 67 (53 by maintainers)

Most upvoted comments

I had a similar issue getting the real client IP address within the mesh (not using the ingress gateway). I have a scenario where podA talks to podB, and podB wants to know the real IP address of podA. With Istio, the communication looks like podA -> istio-proxy -> podB-k8s-service -> istio-proxy -> podB. Since istio-proxy terminates the mTLS connection and establishes a new connection to podB, podB sees the client address as 127.0.0.1, as reported in the original description. Since Istio also sanitizes the headers, the X-Forwarded-For header is removed as well. To work around this, I was able to use Envoy's header variable substitution (thanks @snowp from the Envoy Slack channel for the tip) and add a custom header to the route carrying the real client IP address, as below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtual-service
spec:
  gateways:
    - mesh
  hosts:
    - your.hostname.com
  http:
  - route:
    - destination:
        host: your.hostname.com
    headers:
      request:
        add:
          X-Real-Client-Ip: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
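
If podB echoes its request headers (as the echoserver in the original description does), a quick spot-check looks like this (the pod name and hostname are placeholders):

# run curl from a client pod inside the mesh; podB should now see the X-Real-Client-Ip header
$ kubectl exec <client-pod> -c app -- curl -s http://your.hostname.com/ | grep -i x-real-client-ip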

@rlenglet original_src did work, but an extra iptables rule is needed, for example:

iptables -t mangle -I OUTPUT 1 -s 127.0.0.1/32 ! -d 127.0.0.1/32 \
    -j MARK --set-xmark 0x539/0xffffffff

With the original_src listener filter enabled, Envoy connects to the local process with the real client IP instead of 127.0.0.1, but the responding packets would be routed out through eth0 by default.

Policy routing is already set up in TPROXY mode, so only this one extra rule is required.

If this solution is OK, I can submit a PR.
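
For reference, wiring the original_src listener filter into the sidecar's inbound listener could look roughly like the EnvoyFilter below. This is only a sketch against a newer EnvoyFilter API (the applyTo value, filter name, and typed_config type vary across Istio and Envoy versions); the mark must match the one set by the iptables rule above (0x539 = 1337):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: sidecar-original-src
  namespace: default
spec:
  workloadSelector:
    labels:
      run: echoserver              # illustrative: select the target workload
  configPatches:
  - applyTo: LISTENER_FILTER       # assumes an Istio version that supports this applyTo
    match:
      context: SIDECAR_INBOUND
    patch:
      operation: ADD
      value:
        name: envoy.filters.listener.original_src
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.listener.original_src.v3.OriginalSrc
          mark: 1337               # 0x539, must match the MARK in the iptables rule above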

Source IP can be preserved with Istio by using the PROXY protocol and an Envoy filter: https://pgaijin66.medium.com/preserving-source-ip-address-in-l4-loadbalancer-aws-do-using-istios-envoy-filter-and-proxy-52c5e2300342

@pgaijin66: Quick note that with an AWS NLB the source IP is preserved by default, without needing any PROXY protocol, up to the Istio ingress. This issue wasn't about that (afaik); it was about the fact that after the Envoy proxy, when traffic enters the service's container, the IP is 127.0.0.1.

@FrimIdan This issue is still open. @gmemcc's attempt to use original_src is a priori going in the right direction.

In fact, your istio-init args don't contain any -m option. That's odd, since the templates unconditionally set a -m arg:

https://github.com/istio/istio/blob/master/pilot/pkg/kube/inject/mesh.go#L29
https://github.com/istio/istio/blob/master/install/kubernetes/helm/istio/charts/sidecar-injector/templates/configmap.yaml#L24

@linsun are you using automatic injection or istioctl? I would suspect that the injector's template is broken.

istio-init should have a -m TPROXY arg if injected with that mode.
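
For comparison, the injected istio-init Args in TPROXY mode would be expected to look roughly like this (a sketch based on the describe output above, with the missing -m flag added and the empty -x/-d values omitted):

    Args:
      -p
      15001
      -m
      TPROXY
      -i
      *
      -b
      8080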