ingress-nginx: DO Proxy Protocol broken header
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.24.0
Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
```
Environment:
- Cloud provider or hardware configuration: Digital Ocean
What happened: Digital Ocean now allows the use of Proxy Protocol (https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#proxy-protocol), so I’ve added the annotation to my service:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
```
and updated my config as follows
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"
  enable-brotli: "true"
  enable-vts-status: "true"
```
However, once I’ve applied these changes I get lots of errors such as the following:
```
6���Zك7�g̮["\�/�+�0�,��’g�(k��̨̩̪������������������$j�#@� �98" while reading PROXY protocol, client: 10.244.35.0, server: 0.0.0.0:443
2019/04/11 13:02:57 [error] 265#265: *4443 broken header: "����p�����ўL��k+ rbO- /�Ç���y�8\�/�+�0�,��’g�(k��̨̩̪������������������$j�#@� �98" while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443
2019/04/11 13:02:57 [error] 265#265: *4444 broken header: "���5�Kk��4 ��b�pxLJw�]��G�V��� � \�/�+�0�,��’g�(k��̨̩̪������������������$j�#@� �98" while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443
```
Digital Ocean’s response to these errors was:
This type of response is typically caused by Nginx not properly accepting the PROXY protocol. You should be able to simply append `proxy_protocol` to your listen directive in your server definition. More information on this can be seen in Nginx’s documentation, available here: https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/
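For context, what DO support is describing corresponds to an nginx server block like the excerpt below. ingress-nginx already generates this itself when `use-proxy-protocol` is `"true"` in its ConfigMap, so this is only to illustrate the directive; the CIDR is an assumption, not taken from the issue.

```nginx
# Excerpt only (certificates and the rest of the server block omitted).
server {
    # proxy_protocol makes nginx expect the "PROXY TCP4 ..." preamble
    # before the TLS handshake on this listener.
    listen 443 ssl proxy_protocol;

    # Take the real client address from the PROXY header instead of the
    # load balancer's address.
    real_ip_header proxy_protocol;
    set_real_ip_from 10.244.0.0/16;  # assumed LB/overlay network range
}
```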
I can’t see what I’m missing
What you expected to happen: No errors
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 22
- Comments: 42 (4 by maintainers)
Hi,
A workaround is to add the `service.beta.kubernetes.io/do-loadbalancer-hostname` annotation to the service, with a hostname pointing to the load balancer IP. See https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md#servicebetakubernetesiodo-loadbalancer-hostname
It works for me using the helm chart with the following values:
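A minimal sketch of chart values along those lines, assuming the standard ingress-nginx chart keys and a placeholder hostname (not the commenter’s original file):

```yaml
# Sketch only: ingress-nginx Helm chart keys are assumed, and the hostname
# is a placeholder DNS record that resolves to the load balancer's IP.
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.example.com"
```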
Can confirm this issue, same configuration here across the board.
@tlaverdure that’s because you are not specifying the flag `--haproxy-protocol` in the curl command. If you enable proxy protocol in the ingress controller you need to decode it.
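For illustration, this is the kind of call the comment is referring to when hitting the controller directly (the hostname is a placeholder):

```sh
# --haproxy-protocol makes curl send the "PROXY TCP4 ..." preamble itself;
# without it, nginx sees the raw TLS ClientHello where it expects the
# preamble and logs a "broken header" error.
curl -v --haproxy-protocol https://kube.example.com/
```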
I’m a bit unclear whether this issue is the same one we just encountered, but perhaps the following is helpful:
We have a microservice architecture where services talk to each other via the axios library, all inside the same cluster. What we’d misconfigured was the URL by which the services talk to each other. We had one service talk to the other via the external DNS record by which the target service was known, e.g. foo.domain.com, causing traffic for it to go all the way out and back into the cluster again. When nginx tried to handle the request, the header looked broken because the request wasn’t preceded by the instruction `PROXY TCP4 10.8.0.18 [TARGET_IP] 55966 443` (which is what happens when you `curl --haproxy-protocol`, and is what happens to all inbound traffic handled by the ingress controller when you enable the “Proxy Protocol” setting). By changing the URL of the target service to the internal DNS record by which it was known, e.g. http://[container].[namespace].svc.cluster.local, traffic was sent directly to the target, not back through the ingress controller.

This is precisely the problem we are facing and the workaround was to add the hostname as described. I can confirm that this also works with wildcard DNS matches.
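As an illustration of that fix, the caller’s target URL can point at the in-cluster Service DNS name instead of the public hostname; the names below are placeholders, not the commenter’s actual services:

```yaml
# Deployment excerpt (sketch): the calling service reads its target URL from
# an env var. Using the cluster-internal DNS name keeps traffic inside the
# cluster, so it never re-enters through the PROXY-protocol load balancer.
env:
  - name: FOO_SERVICE_URL
    # was: "https://foo.domain.com" (left the cluster and came back via the LB)
    value: "http://foo.default.svc.cluster.local"
```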
Thank you, `service.beta.kubernetes.io/do-loadbalancer-hostname` seems to work. No more broken headers using proxy protocol.

Adding `service.beta.kubernetes.io/do-loadbalancer-hostname` works for DO, but it’s not clear what to do the first time when reading the DO docs (from here and here). Basically it comes down to the following steps:
1. Download the manifests for Digital Ocean (https://kubernetes.github.io/ingress-nginx/deploy/#digital-ocean); DON’T apply directly from the remote URL.
2. Apply them and get the External IP of the ingress service (`kubectl get svc --namespace=ingress-nginx`). Then go to your domain provider and point your domain to that External IP. After that, wait a few minutes for the changes to propagate and try to access the domain, making sure it reaches the Ingress controller (it usually shows the ingress status page, a 404, or something similar).
3. In the downloaded manifest, find the Service `ingress-nginx-controller` (it has a comment `Source: ...controller-service.yaml`), and add 1 more annotation with the hostname value (see the sketch below).

After this you can create your Ingress resource and use Cert Manager to issue an SSL certificate as usual. Hope this helps.
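A sketch of what that edited controller Service ends up containing; the hostname is a placeholder for your own DNS record pointing at the load balancer, and the rest mirrors the stock manifest:

```yaml
# Excerpt of the ingress-nginx controller Service from the downloaded
# manifest, with the extra annotation added. "kube.example.com" is a
# placeholder.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.example.com"
```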
The same issue happens in a Kubernetes cluster on top of an OpenStack cloud (using openstack-cloud-controller-manager).
The Ingress service can be accessed from outside the cluster, but not from a cluster node or from inside a pod.
Hello,
I am running EKS with the Kubernetes nginx ingress on an NLB with SSL termination, but the proxy protocol annotations were not working.
Once the NLB and ingress objects are created, the services won’t be reachable, because proxy protocol on the NLB target groups was not turned on automatically even with the provided annotations. I got the below error in the controller pod logs:
Once I turned them on manually through the console (NLB -> Listeners -> Target Group attributes), it works (I get a 200 response in the same controller logs).
But both of the annotations in the Helm values do not seem to work.
Can anyone help with this?
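For reference, the annotations typically used to turn on proxy protocol on the AWS side look roughly like the excerpt below. Which key applies depends on whether the legacy in-tree cloud provider or the AWS Load Balancer Controller manages the NLB; these are illustrative values, not the commenter’s original Helm values:

```yaml
# Illustrative annotations on the ingress-nginx controller Service.
metadata:
  annotations:
    # Legacy in-tree AWS cloud provider:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # AWS Load Balancer Controller (sets the NLB target group attribute):
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "proxy_protocol_v2.enabled=true"
```

On the nginx side, `use-proxy-protocol: "true"` still has to be set in the controller ConfigMap, as shown earlier in the issue.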
I ran into this and at one point, I deleted the ingress service and recreated it and it worked. I got the broken headers issue when I had the nginx configmap set, but not the annotations on the ingress service that created the DO LB. Manually configuring “Proxy Protocol” on the LB w/ the Web UI didn’t work for me.
Anyways, here is a config that worked for me:
mandatory.yaml
cloud-generic.yaml
ingress.yaml (I wanted a whitelist for only CloudFlare)
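As an illustration of the last item, restricting an Ingress to CloudFlare is typically done with the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation; the host, service name, and IP ranges below are placeholders (CloudFlare publishes its current ranges at https://www.cloudflare.com/ips/):

```yaml
# Sketch of an Ingress limited to CloudFlare edge IPs; the CIDRs shown are
# examples only, use the full published list.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: "173.245.48.0/20,103.21.244.0/22"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example
              servicePort: 80
```

Note that the whitelist only sees the real client address when the PROXY protocol (or X-Forwarded-For handling) is actually working, which is what the rest of this issue is about.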
Stupid check, but if you go to the load balancer’s settings in the DO admin panel, Proxy Protocol is enabled there, right? Because if it’s not while `use-proxy-protocol` is `"true"`, you end up with the same kind of encoding mismatch.
@w1ndy no. You need a custom template for that: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/