oauth2-proxy: Large authorization headers from GitLab raising 502s(?)
I have oauth2-proxy set up as described in "External OAUTH Authentication" in the NGINX Ingress Controller documentation. I added --set-authorization-header to the proxy options and am able to sign in with GitLab.
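For context, these are the two pieces that make the Authorization header flow from oauth2-proxy to the dashboard (both appear in the full manifests further down; repeated here only to make the mechanism clear):

# oauth2-proxy side: return the ID token in an Authorization header on /oauth2/auth responses
args:
  - '--set-authorization-header=true'

# Ingress side (on the dashboard Ingress): copy that response header onto the proxied request
nginx.ingress.kubernetes.io/auth-response-headers: Authorization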
However, if I stay on the dashboard for about two minutes, I will inevitably see an error message pop up: https://imgur.com/I2jpDwh
Checking the logs on my Ingress controller pod shows some messages about headers being too large:
$ kubectl logs -n ingress-nginx ingress-nginx-controller-75f84dfcd7-7zxfd | grep error
2020/07/11 15:54:31 [error] 1731#1731: *510305 upstream sent too big header while reading response header from upstream, client: 10.42.1.1, server: _, request: "GET /oauth2/auth HTTP/1.1", upstream: "http://10.42.1.24:4180/oauth2/auth", host: "192.168.2.2", referrer: "https://192.168.2.2/"
2020/07/11 15:54:31 [error] 1729#1729: *505912 auth request unexpected status: 502 while sending to client, client: 10.42.1.1, server: _, request: "GET /api/v1/daemonset/%20?itemsPerPage=10&page=1&sortBy=d,creationTimestamp HTTP/2.0", host: "192.168.2.2", referrer: "https://192.168.2.2/"
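The first error means NGINX could not fit the response headers from the /oauth2/auth subrequest (which now carry the GitLab token in Authorization, plus the session cookie) into its proxy buffer. To see which proxy_buffer_size values the controller actually rendered, something like the following works (the pod name is the one from my cluster):

kubectl exec -n ingress-nginx ingress-nginx-controller-75f84dfcd7-7zxfd -- \
  grep -n proxy_buffer_size /etc/nginx/nginx.conf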
Expected Behavior
I don’t expect GitLab tokens to be large enough to overrun a reasonable default buffer size…
Current Behavior
See the logs above.
Possible Solution
Maybe I could use "pass-through mode"? I've already cranked proxy-buffer-size on the dashboard's Ingress to 128k and I'm still seeing the error.
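A controller-wide alternative would be to raise the buffer in the ingress-nginx ConfigMap rather than via per-Ingress annotations. A sketch, assuming the default ConfigMap name and namespace from a standard ingress-nginx install:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-buffer-size: "16k"

This sets the default for every location the controller renders, so it would also cover the /oauth2/auth subrequest location.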
Steps to Reproduce (for bugs)
- Set up Kubernetes Dashboard with its Helm chart (ingress.enabled: true) and add the annotations to its Ingress as seen below (a sketch of the install command follows this list)
- Set up oauth2-proxy as below for authentication
- Set up RBAC as needed to allow a user to sign in with OIDC
- Navigate to the dashboard
- Wait
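A sketch of the dashboard install, assuming the upstream kubernetes-dashboard chart repository and the release name used in the manifests below (all other values omitted):

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard \
  --set ingress.enabled=true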
Your Environment
Long YAML files follow, but they're fairly standard, I think.
oauth2-proxy manifests
kind: Deployment
apiVersion: apps/v1
metadata:
  name: oauth2-proxy
  namespace: kubernetes-dashboard
  labels:
    k8s-app: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          image: 'quay.io/oauth2-proxy/oauth2-proxy:latest-arm64'
          args:
            - '--provider=gitlab'
            - '--email-domain=*'
            - '--upstream=file:///dev/null'
            - '--http-address=0.0.0.0:4180'
            - '--set-authorization-header=true'
          ports:
            - containerPort: 4180
              protocol: TCP
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              value: [snip]
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: [snip]
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: [snip]
          imagePullPolicy: Always
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
---
kind: Service
apiVersion: v1
metadata:
  name: oauth2-proxy
  namespace: kubernetes-dashboard
  labels:
    k8s-app: oauth2-proxy
spec:
  ports:
    - name: http
      protocol: TCP
      port: 4180
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
  type: ClusterIP
  sessionAffinity: None
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  rules:
    - http:
        paths:
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: http
dashboard Ingress manifest
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dashboard-kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    app.kubernetes.io/instance: dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.0.3
    helm.sh/chart: kubernetes-dashboard-2.2.0
  annotations:
    meta.helm.sh/release-name: dashboard
    meta.helm.sh/release-namespace: kubernetes-dashboard
    nginx.ingress.kubernetes.io/auth-response-headers: Authorization
    nginx.ingress.kubernetes.io/auth-signin: 'https://$host/oauth2/start?rd=$request_uri'
    nginx.ingress.kubernetes.io/auth-url: 'https://$host/oauth2/auth'
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k # how is this not enough?! We're using GitLab, not Azure!
    service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: dashboard-kubernetes-dashboard
              servicePort: 443
- Version used: 6.0.0
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 17 (7 by maintainers)
For others who end up here: the comment from @c-x-berger didn't work for me. What worked was adding nginx.ingress.kubernetes.io/proxy-buffer-size: "8k", but on both Ingresses: the actual app (e.g. kubernetes-dashboard) and the oauth2-proxy Ingress. Pass-through is not an option for me because I have multiple upstreams via subdomains.
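A minimal sketch of that fix, using the Ingress names from the manifests earlier in this issue (8k is the value that worked for this commenter; larger tokens may need more):

kubectl annotate ingress -n kubernetes-dashboard oauth2-proxy \
  nginx.ingress.kubernetes.io/proxy-buffer-size=8k --overwrite
kubectl annotate ingress -n kubernetes-dashboard dashboard-kubernetes-dashboard \
  nginx.ingress.kubernetes.io/proxy-buffer-size=8k --overwrite

If the dashboard Ingress is Helm-managed (as it is here), setting the annotation through the chart's Ingress annotation values instead will keep it from being dropped on the next upgrade.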