ingress-nginx: ingress controller becomes stale after being unable to decode events from watch stream
NGINX Ingress controller version: 0.17.0
Kubernetes version (use kubectl version): v1.10.3
Environment:
What happened: After a ConfigMap unrelated to ingress-nginx (in a different namespace, too) became too big to be updated, ingress-nginx repeatedly failed to decode events from the watch stream:
E0717 15:07:16.057272 8 streamwatcher.go:109] Unable to decode an event from the watch stream: object to decode was longer than maximum allowed size
W0717 15:07:16.057343 8 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:152: watch of *v1.ConfigMap ended with: very short watch: k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:152: Unexpected watch close - watch lasted less than a second and no items received
After some time it appears to have given up (without logging a final error, as far as I can tell), yet without failing its health check or exiting, causing the nginx config to become stale.
What you expected to happen: Ideally it would not run into the decode error at all. That underlying problem is already tracked in https://github.com/kubernetes/kubernetes/issues/57073 (unless fejta decides to close the issue eventually…), but even if ingress-nginx runs into a fatal issue, it should fail its health check or shut down to make the issue easier to isolate and alert on.
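For illustration only, here is a minimal sketch (not the actual ingress-nginx code) of what "fail the health check on persistent watch errors" could look like. It assumes client-go v0.19+, which added SetWatchErrorHandler; the ConfigMap informer, the 5-minute threshold, and port 10254 are placeholder choices:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	// Remember when a watch last failed. There is no success callback,
	// so "healthy" here just means "no watch error in the last few minutes".
	var lastWatchErr atomic.Value
	if err := cmInformer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
		lastWatchErr.Store(time.Now())
		cache.DefaultWatchErrorHandler(r, err) // keep the default logging/backoff
	}); err != nil {
		panic(err)
	}

	// Liveness endpoint: report unhealthy while watches keep failing, so
	// the kubelet restarts the pod instead of letting it serve stale config.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		if t, ok := lastWatchErr.Load().(time.Time); ok && time.Since(t) < 5*time.Minute {
			http.Error(w, "watch stream unhealthy", http.StatusServiceUnavailable)
			return
		}
		fmt.Fprintln(w, "ok")
	})

	stop := make(chan struct{})
	factory.Start(stop)
	panic(http.ListenAndServe(":10254", nil))
}
```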
How to reproduce it (as minimally and precisely as possible):
It should be reproducible by adding lines to a ConfigMap until the update fails with etcdserver: request is too large.
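A hedged repro sketch using client-go's context-style API (v0.18+); the namespace, ConfigMap name, and chunk size are arbitrary placeholders:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms := client.CoreV1().ConfigMaps("default")

	ctx := context.Background()
	cm, err := cms.Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "too-big"},
		Data:       map[string]string{},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Grow the ConfigMap ~100 KiB per iteration; etcd's default request
	// limit (--max-request-bytes) is about 1.5 MiB, so the update should
	// start failing after roughly 15 iterations.
	chunk := strings.Repeat("x", 100*1024)
	for i := 0; ; i++ {
		cm.Data[fmt.Sprintf("key-%d", i)] = chunk
		updated, err := cms.Update(ctx, cm, metav1.UpdateOptions{})
		if err != nil {
			fmt.Println("update failed:", err) // expect: etcdserver: request is too large
			return
		}
		cm = updated
	}
}
```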
Anything else we need to know:
I assume the etcdserver: request is too large error is expected, and that Kubernetes is not expected to validate the request size first. If not, that might be a separate k8s issue.
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 30 (10 by maintainers)
That or any other solution that avoids the creation of invalid objects
Only for the configmaps.
That’s right.