linkerd2: Injected nginx ingress controller doesn't have access to the remote client IP

Bug Report

What is the issue?

We have a service that uses a GeoIP database to look up the country of incoming requests. It is currently not receiving the remote client IPs.

After injecting the nginx ingress controller, the log variables $remote_addr and $the_real_ip are set to 127.0.0.1, because the incoming connections now come from the local linkerd proxy: the proxy-init iptables rules redirect inbound traffic through the proxy, which then connects to nginx over the loopback interface. The l5d-remote-ip header is also not set, so it cannot be logged using $http_l5d_remote_ip.

If the ingress controller is not injected, $remote_addr and $the_real_ip are set to the correct remote IP.

I suspect this is because the nginx ingress controller performs the TLS termination, so the linkerd proxy only sees a stream of encrypted TCP.

The ingress controller's Service is configured with:

spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
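
For context, externalTrafficPolicy: Local is what preserves the client source IP up to the pod: kube-proxy routes external traffic only to endpoints on the receiving node and does not SNAT it, unlike the default Cluster policy. A minimal sketch of such a Service (the name, namespace, selector, and ports are assumptions, not taken from the original report):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # assumed name
  namespace: ingress-nginx         # assumed namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP up to the node
  selector:
    app: nginx-ingress             # assumed selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443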

All application ingresses are configured with the following annotation:

  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: $service_name.$namespace.svc.cluster.local
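
For example, for the myapp service in the ns namespace that appears in the logs below, the concrete value would be:

  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: myapp.ns.svc.cluster.local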

A basic diagram of the setup and flow:

[image: Linkerd remote ip localhost]

How can it be reproduced?

Inject an nginx ingress controller and configure its access logs to contain $remote_addr and $the_real_ip (see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/); a sketch of such a configuration is shown below.
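
A minimal sketch of that configuration, assuming the controller reads a ConfigMap named nginx-configuration (the name and namespace are assumptions; log-format-upstream is the ingress-nginx ConfigMap key for customizing the access-log format):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; must match the controller's --configmap flag
  namespace: ingress-nginx    # assumed namespace
data:
  # Emit a JSON log line containing the variables in question
  log-format-upstream: '{"remote_addr": "$remote_addr", "the_real_ip": "$the_real_ip", "l5d_remote_ip": "$http_l5d_remote_ip"}'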

Logs, error output, etc

With the ingress controller injected:

{
    "time": "2019-09-07T09:59:23+00:00",
    "time_msec": "1567850363.157",
    "request_id": "2a90c1e7b2c8de78581ece7590a8eaaa",
    "l5d_remote_ip": "",
    "remote_addr": "127.0.0.1",
    "the_real_ip": "127.0.0.1",
    "remote_user": "",
    "request_proto": "HTTP/2.0",
    "method": "GET",
    "vhost": "api.myapp.example.com",
    "path": "/Debug/headers",
    "request_query": "",
    "request_length": 54,
    "request_duration": "0.010",
    "upstream_connect_time": "0.000",
    "status": "200",
    "upstream_status": "200",
    "response_body_bytes": "467",
    "upstream_name": "ns-myapp-http",
    "upstream_ip": "10.33.65.109:80",
    "upstream_response_time": "0.008",
    "upstream_response_length": "479",
    "http_referrer": "",
    "http_user_agent": "curl/7.58.0",
    "ingress_namespace": "ns",
    "ingress_name": "myapp",
    "service_name": "myapp",
    "service_port": "http"
}
Without the ingress controller injected:

{
    "time": "2019-09-07T10:12:05+00:00",
    "time_msec": "1567851125.208",
    "request_id": "bad9d67eb8517e2bf934845daef6af88",
    "l5d_remote_ip": "",
    "remote_addr": "1.2.3.4",
    "the_real_ip": "1.2.3.4",
    "remote_user": "",
    "request_proto": "HTTP/2.0",
    "method": "GET",
    "vhost": "api.myapp.example.com",
    "path": "/Debug/headers",
    "request_query": "",
    "request_length": 54,
    "request_duration": "0.005",
    "upstream_connect_time": "0.004",
    "status": "200",
    "upstream_status": "200",
    "response_body_bytes": "408",
    "upstream_name": "ns-myapp-http",
    "upstream_ip": "10.33.64.76:80",
    "upstream_response_time": "0.004",
    "upstream_response_length": "420",
    "http_referrer": "",
    "http_user_agent": "curl/7.58.0",
    "ingress_namespace": "ns",
    "ingress_name": "myapp",
    "service_name": "myapp",
    "service_port": "http"
}

linkerd check output

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

Environment

  • Kubernetes Version: 1.14.6
  • Cluster Environment: AKS
  • Host OS:
  • Linkerd version: stable-2.5.0

Possible solution

Implement PROXY protocol support within the linkerd proxy?
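
For reference, the PROXY protocol conveys the original client address by prepending a single plain-text header line (e.g. PROXY TCP4 1.2.3.4 10.0.0.1 56324 443) to the TCP stream before any application bytes, so it survives TLS termination downstream. ingress-nginx can already consume such a header; a minimal sketch of the consumer side, assuming something in front of nginx (the linkerd proxy, were it to implement this, or a load balancer) actually sends the header — enabling it without a sender will break connections:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name, as above
  namespace: ingress-nginx    # assumed namespace
data:
  # Parse a PROXY protocol header on incoming connections;
  # $remote_addr and $the_real_ip then reflect the address it carries.
  use-proxy-protocol: "true"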

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 4
  • Comments: 17 (12 by maintainers)

Most upvoted comments

@halcyondude I also had this problem; the workaround, as stated in #3334, is to add the following to the nginx Deployment spec:

spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-inbound-ports: 80,443 # the workaround: inbound 80/443 bypass the linkerd proxy, so nginx sees the client connection directly
        linkerd.io/inject: enabled

We have the same situation with 127.0.0.1 in remote_addr, but we don't terminate TLS on the LB; it is terminated on nginx-ingress.