kubernetes-ingress: 413 Request Entity Too Large
We’ve deployed a Docker registry and created an Ingress rule to its Kubernetes service, using the nginx ingress controller. When pushing larger images we quickly hit the nginx limit and get the error below.
```
Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.9.14</center>\r\n</body>\r\n</html>\r\n"
```
I’ve forked the repo and patched the nginx config to add a `client_max_body_size` directive so we can push larger images. For a proper solution, though, it would be nice to set a value in the Kubernetes Ingress rule and have that applied when the nginx controller configuration is regenerated.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 3
- Comments: 24 (5 by maintainers)
FYI, the annotation has changed and is now:
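For context, a sketch of the rename (the old/new names assume the kubernetes/ingress-nginx controller; verify against your controller version):

```yaml
metadata:
  annotations:
    # Old annotation (pre-0.9 releases): ingress.kubernetes.io/proxy-body-size
    # New annotation:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
```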
Also, I had to restart the nginx pod for the effect to take place. It immediately started working after that.
Setting it to “0” makes the nginx post size unrestricted:
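A minimal Ingress sketch with the unrestricted setting (the annotation name assumes the kubernetes/ingress-nginx controller; the registry name, host, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry          # placeholder name
  annotations:
    # "0" disables the client_max_body_size check entirely
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: registry.example.com # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000
```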
I came here to configure the Docker registry helm chart and found:
I added it to the Ingress for the registry and it did the trick. 👌
In case it helps, 413 solved with the `ingress.kubernetes.io/proxy-body-size` annotation. Running `quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17`.

@pleshakov sorry guys, too many kubernetes ingresses over internet…

Hi,
I know this is a very old post, but we have the same issue. I’ve added `nginx.ingress.kubernetes.io/proxy-body-size: 50m` to the file, and now I have to restart the NGINX pod for the change to take effect. Can you please let me know how to restart the Kubernetes NGINX pod?
I know we can restart pods this way:

```
kubectl scale deployment name --replicas=0 -n service
kubectl scale deployment name --replicas=1 -n service
```

Is there any other way to restart the NGINX pods to reload the configuration?
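A couple of alternatives for bouncing the pods without scaling to zero (sketches; the deployment name, namespace, and label selector are placeholders to match to your deployment):

```shell
# On Kubernetes 1.15+: rolling restart, avoids downtime
kubectl rollout restart deployment/nginx-ingress-controller -n ingress-nginx

# Or delete the pods and let the Deployment recreate them
kubectl delete pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
```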
For `apiVersion: extensions/v1beta1`, the `nginx.ingress.kubernetes.io/proxy-body-size: 50m` annotation didn’t solve the 413 error for me. When I added one more annotation, `nginx.org/client-max-body-size: "50m"`, it did the job.

This already works with ConfigMaps such as:
A value of “0” lifts the restriction on client_max_body_size. Make sure to reference the ConfigMap in your ingress controller like so:
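A sketch of that ConfigMap setup (the ConfigMap name, namespace, and flag value below are assumptions; match them to your controller deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "0"   # "0" lifts the client_max_body_size limit globally
```

The controller Deployment then references it with an argument such as `--configmap=$(POD_NAMESPACE)/nginx-configuration`.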
For more information on the available config parameters, have a look here.
I have tried both `proxy-body-size` and `client-max-body-size` on the ConfigMap and did a rolling restart of the nginx controller pods, but when I grep the nginx.conf file in the pod it still returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS) and I’m working with someone from their support. They said it’s not on them, since it appears to be an nginx config issue.
The weird thing is we had other clusters in Azure where this wasn’t an issue, until we discovered it with some of the newer deployments. The initial fix they came up with is what is in this thread, but the value just refuses to change.
Below is my configmap:
After issuing a rolling restart: `kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx`
Grepping the nginx ingress controller pod to query the value now reveals:
It doesn’t matter where I try to change it, on the ConfigMap globally or on the Ingress route specifically; the value above never changes.
We plan to add the ability to configure the global NGINX parameters through a ConfigMap.
To allow customizing the configuration per Ingress resource, we can leverage annotations. As an example:
Will that work for your case?
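A sketch of such a per-Ingress annotation, assuming the nginxinc/kubernetes-ingress controller (which uses the `nginx.org/` annotation prefix; the resource name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry-ingress   # placeholder name
  annotations:
    # Overrides client_max_body_size for this Ingress only
    nginx.org/client-max-body-size: "50m"
```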