kubernetes-ingress: 413 Request Entity Too Large

We’ve deployed a Docker registry and created an Ingress rule for its Kubernetes service, using the nginx ingress controller. When pushing larger images we quickly hit the nginx limits, which gives us the error below.

    Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.9.14</center>\r\n</body>\r\n</html>\r\n"

I’ve forked the repo and hacked the nginx config, adding a client_max_body_size directive so we can push larger images. For a proper solution, though, it might be nice to set a value in the Kubernetes Ingress rule and have that used when the nginx controller is updated?
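
For context, the nginx directive in question looks like this; the 100m value below is just an illustration (nginx’s default is 1m):

    http {
        server {
            # Allow request bodies (e.g. image layers) up to 100 MB; nginx defaults to 1m
            client_max_body_size 100m;
        }
    }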

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 3
  • Comments: 24 (5 by maintainers)

Most upvoted comments

FYI, the annotation has changed and is now:

    nginx.ingress.kubernetes.io/proxy-body-size: 50m

Also, I had to restart the nginx pod for the change to take effect. It immediately started working after that.

Setting it to "0" makes the nginx request body size unrestricted:

    nginx.ingress.kubernetes.io/proxy-body-size: "0"
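
Putting it together, a minimal sketch of an Ingress using the newer annotation; the name, host, and backend below are hypothetical:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: registry
      annotations:
        kubernetes.io/ingress.class: "nginx"
        # "0" disables the body-size check entirely
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
    spec:
      rules:
      - host: registry.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: registry-svc
              servicePort: 5000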

I came here to configure the Docker registry helm chart and found:

    nginx.ingress.kubernetes.io/proxy-body-size: 50m

I added it to the Ingress for the registry and it did the trick. 👌
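
If you manage the registry’s Ingress through a Helm chart, the annotation typically goes under the chart’s ingress annotations value. A sketch, assuming the chart exposes an ingress.annotations field:

    # values.yaml (assumes the chart exposes ingress.enabled and ingress.annotations)
    ingress:
      enabled: true
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "50m"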

In case it helps, the 413 was solved with the ingress.kubernetes.io/proxy-body-size annotation:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: services
      annotations:
        kubernetes.io/ingress.class: "nginx"
        ingress.kubernetes.io/proxy-body-size: 50m

Running quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17

@pleshakov sorry guys, too many Kubernetes ingresses on the internet…

Hi,

I know this is a very old post, but we have the same issue. I’ve updated nginx.ingress.kubernetes.io/proxy-body-size: 50m in the file, and now I have to restart the nginx pod for the changes to take effect. Can you please let me know how to restart a Kubernetes nginx pod…?

I know pods can be restarted this way:

    kubectl scale deployment name --replicas=0 -n service
    kubectl scale deployment name --replicas=1 -n service

Is there any other way I can restart the nginx pods to reload the configuration?
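
A minimal alternative, assuming kubectl 1.15 or newer (the deployment name and namespace below are placeholders):

    # Performs a rolling restart without scaling the deployment to zero
    kubectl rollout restart deployment/name -n service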

For apiVersion: extensions/v1beta1, the nginx.ingress.kubernetes.io/proxy-body-size: 50m annotation didn’t solve the 413 error for me. When I added one more annotation, nginx.org/client-max-body-size: "50m", it did the job.
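
For reference, a sketch of the two annotations side by side on one Ingress (the metadata below is hypothetical). Note that nginx.org/* annotations are read by the NGINX Inc controller, while nginx.ingress.kubernetes.io/* annotations are read by the community ingress-nginx controller, so which one takes effect depends on the controller you run:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "50m"
        nginx.org/client-max-body-size: "50m"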

This already works with ConfigMaps such as:

    apiVersion: v1
    kind: ConfigMap
    data:
      body-size: "0"
    metadata:
      name: nginx-conf

A value of “0” lifts the restriction on client_max_body_size. Make sure to reference the ConfigMap in your ingress controller like so:

    args:
      - /nginx-ingress-controller
      - --default-backend-service=default/default-http-backend
      - --nginx-configmap=default/nginx-conf

For more information on the available config parameters, have a look here.
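
To verify that the new limit actually landed, you can grep the rendered nginx config inside the controller pod; the pod name and namespace below are placeholders:

    kubectl exec -n default <controller-pod> -- grep client_max_body_size /etc/nginx/nginx.conf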

I have tried both proxy-body-size and client-max-body-size on the ConfigMap and did a rolling restart of the nginx controller pods, but when I grep the nginx.conf file in the pod it still returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS), and I’m working with someone from their support; they said it’s not on them, since it appears to be an nginx config issue.

The weird thing is that we had other clusters in Azure where this wasn’t an issue, until we discovered it with some of the newer deployments. The initial fix they came up with is what’s in this thread, but the value just refuses to change.

Below is my configmap:

    apiVersion: v1
    data:
      client-max-body-size: 0m
      proxy-connect-timeout: 10s
      proxy-read-timeout: 10s
    kind: ConfigMap
    metadata:
      annotations:
        control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-nginx-ingress-controller-7b9bff87b8-vxv8q","leaseDurationSeconds":30,"acquireTime":"2020-03-10T20:52:06Z","renewTime":"2020-03-10T20:53:21Z","leaderTransitions":1}'
      creationTimestamp: "2020-03-10T18:34:01Z"
      name: ingress-controller-leader-nginx
      namespace: ingress-nginx
      resourceVersion: "23928"
      selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
      uid: b68a2143-62fd-11ea-ab45-d67902848a80

After issuing a rolling restart:

    kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx

Grepping the nginx ingress controller pod to query the value now reveals:

    kubectl exec -n ingress-nginx nginx-nginx-ingress-controller-7b9bff87b8-p4ppw cat nginx.conf | grep client_max_body_size
            client_max_body_size                    1m;
            client_max_body_size                    1m;
            client_max_body_size                    1m;
            client_max_body_size                    1m;
            client_max_body_size                    21m;

It doesn’t matter where I try to change it, on the ConfigMap globally or on the Ingress route specifically: the value above never changes.
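
One thing worth checking, as an assumption rather than a confirmed diagnosis: the ConfigMap pasted above (ingress-controller-leader-nginx) is the controller’s leader-election ConfigMap, not the configuration ConfigMap it reads settings from; the latter is the one named by the controller’s --configmap flag. A sketch to see which ConfigMap the deployment actually points at:

    kubectl get deployment nginx-nginx-ingress-controller -n ingress-nginx \
      -o jsonpath='{.spec.template.spec.containers[0].args}'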

We plan to add the ability to configure the global NGINX parameters through a ConfigMap.

To allow customizing the configuration per Ingress resource, we can leverage annotations. As an example:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: cafe-ingress
      annotations:
        nginx/client_max_body_size: 1m
    spec:
      rules:
      - host: cafe.example.com
        http:
          paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80

Will that work for your case?
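
For the global ConfigMap route described above, a sketch along the lines of what the controllers later shipped; the key name differs by controller (the community ingress-nginx controller uses proxy-body-size, while the NGINX Inc controller uses client-max-body-size), and the names below are placeholders:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
      namespace: ingress-nginx
    data:
      proxy-body-size: "50m"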