pomerium: proxy returns `stream terminated by RST_STREAM with error code: PROTOCOL_ERROR` while trying to authorize

What happened?

500 - Internal Server Error when trying to access a proxied service

What did you expect to happen?

The request to be forwarded to the upstream service

How’d it happen?

  1. Go to the proxied domain
  2. Log in with Google
  3. Receive a 500

What’s your environment like?

  • Pomerium version: 0.5.1

  • Environment: Kubernetes. Manifests:

apiVersion: v1
kind: Service
metadata:
  name: pomerium
spec:
  ports:
    - port: 80
      name: http
      targetPort: 443
    - name: metrics
      port: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pomerium
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: pomerium
        image: pomerium
        args:
        - --config=/etc/pomerium/config.yaml
        env:
        - {name: INSECURE_SERVER, value: "true"}
        - {name: POMERIUM_DEBUG, value: "true"}
        - {name: AUTHENTICATE_SERVICE_URL, value: https://pomerium-authn.$(DOMAIN)}
        - {name: FORWARD_AUTH_URL, value: https://pomerium-fwd.$(DOMAIN)}
        - {name: IDP_PROVIDER, value: google}
        - name: COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              name: pomerium
              key: cookie-secret
        - name: SHARED_SECRET
          valueFrom:
            secretKeyRef:
              name: pomerium
              key: shared-secret
        - name: IDP_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: pomerium
              key: idp-client-id
        - name: IDP_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: pomerium
              key: idp-client-secret
        - name: IDP_SERVICE_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: pomerium
              key: idp-service-account
        ports:
          - containerPort: 443
            name: http
          - containerPort: 9090
            name: metrics
        livenessProbe:
          httpGet:
            path: /ping
            port: 443
        readinessProbe:
          httpGet:
            path: /ping
            port: 443
        volumeMounts:
        - mountPath: /etc/pomerium/
          name: config
      volumes:
      - name: config
        configMap:
          name: pomerium
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pomerium
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - pomerium-authn.$(DOMAIN)
    - pomerium-fwd.$(DOMAIN)
    secretName: pomerium-tls
  rules:
    - host: pomerium-authn.$(DOMAIN)
      http:
        paths:
          - path: /
            backend:
              serviceName: pomerium
              servicePort: 80
    - host: pomerium-fwd.$(DOMAIN)
      http:
        paths:
          - path: /
            backend:
              serviceName: pomerium
              servicePort: 80

What’s your config.yaml?

policy: 
  - from: https://test-nginx.cs-eng-apps-europe-west3.gcp.infra.csol.cloud
    to: http://httpbin.pomerium/
    allowed_domains:
      - container-solutions.com

What did you see in the logs?

11:23AM ERR http-error error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] http-code=500 http-message="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
11:23AM DBG proxy: AuthorizeSession error="rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR" X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] ip=10.40.0.6 req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"
11:23AM DBG http-request X-Forwarded-For=["10.156.0.9"] X-Forwarded-Host=["test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud"] X-Forwarded-Port=["443"] X-Forwarded-Proto=["https"] X-Real-Ip=["10.156.0.9"] duration=5.869461 email=email@container-solutions.com group=<groups> host=test-nginx.cs-eng-ops-europe-west3.gcp.infra.csol.cloud ip=10.40.0.6 method=GET path=/ req_id=d7e56771-0e2c-a7de-4518-25e7267da9ed service=all size=8150 status=500 user_agent="Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0"

Additional context

I get this error both in all-in-one mode and when deploying the services separately. I had been messing with gRPC ports, etc., before opening this issue, but I could not find the problem. It seems the proxy wants to talk to the authorize service over secure gRPC, but that is not available in INSECURE_SERVER mode.
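
If the proxy really is dialing the authorize service over secure gRPC while the server side is plaintext, one direction to try (a sketch only; the option names come from the Pomerium reference docs, and their availability in 0.5.1 is an assumption) is to make the gRPC side explicitly insecure as well:

# sketch: assumed option names, verify against the 0.5.1 reference docs
insecure_server: true
grpc_insecure: true                            # plaintext gRPC between proxy and authorize
grpc_address: ":5443"                          # assumed gRPC listen address
authorize_service_url: http://localhost:5443   # all-in-one: reach authorize over loopback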

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 33 (17 by maintainers)

Most upvoted comments

In my k8s deployment I have configured the Service to expose both the gRPC and HTTP ports.
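
For comparison, a Service exposing both might look like the sketch below (the dedicated grpc port name and the 443 → 5443 mapping are assumptions about that setup, not details from this issue):

apiVersion: v1
kind: Service
metadata:
  name: pomerium
spec:
  ports:
    - name: http        # plaintext HTTP served by the proxy
      port: 80
      targetPort: 443
    - name: grpc        # assumed: must match the pod's grpc_address port
      port: 443
      targetPort: 5443
    - name: metrics
      port: 9090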

It should work via loopback as well. If you’re going through a Service, you might as well use a Deployment dedicated to the authorize service, like we have in the Helm chart. I’m not clear on your use case, but our recommended configuration is mirrored in the Helm chart.
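
As a rough illustration of that split-service layout (the pomerium-authorize name and namespace below are hypothetical, and the services / authorize_service_url option names are taken from the Pomerium docs rather than this thread):

# proxy-only instance pointing at a hypothetical dedicated authorize Service
services: proxy
authorize_service_url: https://pomerium-authorize.default.svc.cluster.local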

It appears something specific to GKE networking and/or Istio is putting another (probably gRPC) service, served over TLS on 5443, into your pod’s network namespace. Istio/Envoy certainly could be doing this.

I recall 5443 being a pseudo-default in the gRPC world, but I could be mistaken. I bet that if you set grpc_secure: true with the default port set, you’d get an error indicating the Authorize service isn’t at that endpoint or isn’t speaking HTTP/2.
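
A quick check of that theory might look like the following (a sketch; treating grpc_secure as a valid option for this version is an assumption):

grpc_secure: true   # per the suggestion above; leave grpc_address at its default
# expect an error indicating the Authorize service isn't at that endpoint
# or isn't speaking HTTP/2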

As an additional data point, I just deployed test manifests to a non-Istio GKE cluster using VPC-native networking, and Pomerium is functioning. @chauffer, if you aren’t running Istio, you must have something else going on that produces the same result.

Anyway, I’m fairly certain this is a deployment issue. Going to close it.