ingress-nginx: EKS Nginx Proxy Protocol Broken Header

What happened:

The NGINX Ingress controller logs a broken header error on port 443 when PROXY protocol v2 is enabled with the AWS Load Balancer Controller. The services are still accessible via ingress, but the controller keeps emitting the error log below. PROXY protocol is enabled in both the NGINX config and the AWS NLB.

2023/02/17 09:23:36 [error] 384#384: *1781139 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-1-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781141 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-2-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781147 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-3-IP>, server: 0.0.0.0:443
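For context on what "broken header" means here, a minimal sketch (illustrative, not from the issue; the addresses and ports are made up): with use-proxy-protocol enabled, NGINX expects every connection to open with a PROXY protocol preamble. A v2 preamble is a fixed 12-byte binary signature followed by version/command, family/protocol, a length field, and the address block. The empty string in broken header: "" indicates the peer sent none of this.

```python
import socket
import struct

# Fixed 12-byte signature that opens every PROXY protocol v2 header
SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def build_proxy_v2_header(src_ip: str, dst_ip: str,
                          src_port: int, dst_port: int) -> bytes:
    """Build a minimal PROXY v2 header for TCP over IPv4."""
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    # 0x21 = version 2 + PROXY command, 0x11 = TCP over IPv4
    return SIGNATURE + bytes([0x21, 0x11]) + struct.pack("!H", len(addrs)) + addrs

header = build_proxy_v2_header("10.0.1.5", "10.0.2.9", 54321, 443)
print(header[:12] == SIGNATURE)  # True
print(len(header))               # 16-byte fixed part + 12-byte IPv4 block = 28
```

Any connection that reaches the listener without these bytes (for example, a health check or a direct connection bypassing the NLB's PROXY protocol injection) produces exactly this error.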

Originally we wanted to pass the client IP to NGINX by enabling PROXY protocol v2. But since the annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" was not working, we achieved that using service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true and turned PROXY protocol off on both the NLB and the NGINX Ingress controller. However, we want to confirm that enabling PROXY protocol is possible, in case a future requirement needs additional headers passed through.

Once PROXY protocol is enabled on both the AWS NLB and the NGINX Ingress side, the broken header errors above start to appear. The configuration we used is stated below.

What you expected to happen: The NGINX Ingress controller should not emit any error output.

The errors start after enabling PROXY protocol v2 via the NGINX Ingress controller annotations.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.3.1
  Build:         92534fa2ae799b502882c8684db13a25cde68155
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:43Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS EKS v1.23.7
  • Basic cluster related info:
  Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:43Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-eks-ffeb93d", GitCommit:"96e7d52c98a32f2b296ca7f19dc9346cf79915ba", GitTreeState:"clean", BuildDate:"2022-11-29T18:43:31Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
ip-[IP-Removed].ec2.internal   Ready    <none>   9d    v1.23.7   [IP-Removed]   <none>        Ubuntu 20.04.5 LTS   5.15.0-1020-aws   containerd://1.5.9
controller:
  config:
    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "false"
    enable-real-ip: "true"
  service:
    enableHttp: true
    enableHttps: true
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: cluster-name=<cluster-name>
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
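As a sanity check (not part of the original report; the target-group ARN below is a placeholder and the command needs live AWS credentials), the attributes the controller actually applied can be read back with the AWS CLI, which makes it easy to see whether proxy_protocol_v2.enabled and preserve_client_ip.enabled really took effect:

```shell
# Placeholder ARN -- substitute your NLB's target group ARN.
TG_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/0123456789abcdef"

# Prints the proxy protocol and client-IP preservation attributes as applied.
aws elbv2 describe-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --query "Attributes[?Key=='proxy_protocol_v2.enabled' || Key=='preserve_client_ip.enabled']"
```

If proxy_protocol_v2.enabled comes back false while use-proxy-protocol is "true" on the NGINX side, the two ends disagree and every connection will log a broken header error.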

Anything else we need to know:

There is a similar issue on the DigitalOcean cloud provider at https://github.com/kubernetes/ingress-nginx/issues/3996

Is there something I’m missing here? So far there is no service impact, but the NGINX Ingress controller keeps outputting the broken header error log. The client IPs shown here are the NLB IPs from the three availability zones. The AWS Load Balancer Controller has correctly provisioned the NLB according to the annotations.

2023/02/17 09:23:36 [error] 384#384: *1781139 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-1-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781141 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-2-IP>, server: 0.0.0.0:443
2023/02/17 09:23:36 [error] 384#384: *1781147 broken header: "" while reading PROXY protocol, client: <NLB-ZONE-3-IP>, server: 0.0.0.0:443
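One way to reason about these entries (an illustrative sketch, not from the issue): classify the first bytes a listener receives. A PROXY v1 preamble starts with the ASCII text "PROXY ", a v2 preamble with a fixed binary signature, and a bare TLS ClientHello with record-type byte 0x16. So a health check probe or a direct TLS connection that reaches a PROXY-protocol listener without a preamble is reported as broken header: "".

```python
# Classify the opening bytes of a connection, roughly the way a
# PROXY-protocol listener must before it can read the real client address.
PROXY_V2_SIG = b"\r\n\r\n\x00\r\nQUIT\n"

def classify_preamble(first_bytes: bytes) -> str:
    if first_bytes.startswith(b"PROXY "):
        return "proxy-v1"
    if first_bytes.startswith(PROXY_V2_SIG):
        return "proxy-v2"
    if first_bytes[:1] == b"\x16":
        # TLS handshake record type -- a client that skipped PROXY protocol
        return "bare-tls"
    return "unknown"

print(classify_preamble(b"PROXY TCP4 10.0.1.5 10.0.2.9 54321 443\r\n"))  # proxy-v1
print(classify_preamble(b"\x16\x03\x01\x02\x00"))                        # bare-tls
```

That the logged client addresses are exactly the three per-zone NLB IPs is consistent with the connections being NLB-originated probes rather than real client traffic.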

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 19
  • Comments: 32 (7 by maintainers)

Most upvoted comments

For the uninitiated, it would help if there was a clear detailed elaboration of what problem is solved by enabling proxy-protocol v2.

@dinukarajapaksha Please click the new issue button and look at the questions asked in the issue template. For readers’ and contributors’ benefit, edit your description and add the answers to those questions.

@longwuyuan I have edited the comment as well according to the following

Originally we wanted to pass the client IP to NGINX by enabling PROXY protocol v2. But since the annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" was not working, we achieved that using service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true and turned PROXY protocol off on both the NLB and the NGINX Ingress controller. However, we want to confirm that enabling PROXY protocol is possible, in case a future requirement needs additional headers passed through.

We’ve never been able to get this working either with the combination of use-proxy-protocol: "true" and service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" – not entirely sure if it ever worked in the first place.

I was able to get rid of these “bad” logs by setting annotations for healthcheck:

controller:

  config:
    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    enable-real-ip: "true"

  service:
    enabled: true

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
      ...

Helm Chart: ingress-nginx-4.8.0
App Version: 1.9.0

We had the same issue on EKS 1.24 with ingress-nginx deployed with the official Helm chart with the below details:


NGINX Ingress controller
  Helm Chart:    ingress-nginx-4.7.1
  App Version:   1.8.1
  Tag:           https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.8.1
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6


We solved the issue this way:

Deploy the Helm chart with these values:

config:
  entries:
    use-proxy-protocol: "true"

service:
  type: LoadBalancer
  externalTrafficPolicy: "Local"
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb

Most importantly, externalTrafficPolicy should be “Local” to preserve the source IP on providers that support it; AWS does in this case.

We had to disable the port 443 health check by adding service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80" to the NLB service annotations.