istio: [pilot 1.16.0] envoyExtAuthzHttp doesn't forward `x-envoy-peer-metadata` headers
Bug Description
TL;DR: `envoyExtAuthzHttp` subrequests are consistently showing `source_app=unknown`. To remedy this, we started forwarding `x-envoy-peer-metadata` to the authorization service. After 1.16.0, we no longer see these `x-envoy-peer-metadata` headers forwarded when enabling debug envoy logs.
Hello there,
We’ve been running pilot on 1.15.3, and all of our applications have the istio-proxy envoy sidecar injected ✅
We use Prometheus / Kiali to monitor, and for the most part a metric like `istio_requests_total` correctly shows `source_app` and `destination_app`. ✅
HOWEVER, there is a subset of cases where `source_app="unknown"`, `reporter="destination"` ❌:
- we currently use `envoyExtAuthzHttp` to "externally" authorize each application request (the authorizer is an in-cluster auth service) - all applications + the auth server have the istio-proxy sidecar - calls from ingress -> auth directly are correctly tagged in telemetry
- the ext_authz subrequests are showing "unknown" `source_app` values - in the subrequests specifically, we're seeing something like the counter below (a query that isolates these is sketched right after it):
```
# example prometheus counter for the ext_authz request, specifically
istio_requests_total{destination_app="auth", source_app="unknown", source_principal="<service account of destination app>"}
```
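A PromQL query along these lines surfaces the affected series (illustrative only; the label values mirror the counter above and the 5m window is arbitrary):

```promql
# rate of ext_authz-related requests hitting the auth service whose source
# workload could not be identified, grouped by the reported source principal
sum by (source_principal) (
  rate(istio_requests_total{destination_app="auth", reporter="destination", source_app="unknown"}[5m])
)
```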
To "remedy" these cases (for our observability + sanity), we configured our `envoyExtAuthzHttp` meshConfig settings to forward the peer-metadata headers - the intention was to tag these ext_authz subrequests as coming from the original, upstream requester (normally our ingress gateway):
- name: "authz-http-redirect"
envoyExtAuthzHttp:
port: "<auth service port>"
service: "auth.default.svc.cluster.local"
includeRequestHeadersInCheck:
- "x-envoy-peer-metadata"
- "x-envoy-peer-metadata-id"
However, since updating pilot to 1.16.0, we're noticing (via debug logs on the destination app + auth) that the `x-envoy-peer-metadata.*` headers are no longer being forwarded via `envoyExtAuthzHttp`, which breaks this "fix" for the unknown requester hitting our authorization service.
Is that expected - were we using these headers in an incorrect way pre-1.16? If so, is there a correct way to remedy this monitoring issue when using `envoyExtAuthzHttp`?
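For reference, the check was roughly along these lines on the sidecars involved (a sketch - pod references and namespaces are placeholders, and it assumes `istioctl` is available even though we don't use it for installs):

```sh
# raise the ext_authz / http logger levels on an involved sidecar
# (the protected app's sidecar issues the check; the auth service's sidecar receives it)
istioctl proxy-config log <pod>.<namespace> --level ext_authz:debug,http:debug

# follow that sidecar's logs while sending a request and look for the check call
kubectl logs <pod> -n <namespace> -c istio-proxy -f | grep -iE 'ext_authz|x-envoy-peer-metadata'
```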
Version
(not using `istioctl`, we are using the `istiod` helm chart for discovery)
discovery: 1.16.0 (distroless)
sidecar proxies: 1.16.0 (distroless)
ingress gateway: 1.16.0 (distroless)
```
$ kubectl version --short
Client Version: v1.25.1
Kustomize Version: v4.5.7
Server Version: v1.24.6-gke.1500
```
Additional Information
No response
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 3
- Comments: 20 (10 by maintainers)
can you share the config_dump?
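One way to collect the requested dump from the relevant sidecar (pod name and namespace are placeholders; either command below produces an equivalent JSON dump):

```sh
# dump the full Envoy configuration through the pilot-agent admin proxy
kubectl exec <pod> -n <namespace> -c istio-proxy -- pilot-agent request GET config_dump > config_dump.json

# or, with istioctl
istioctl proxy-config all <pod>.<namespace> -o json > config_dump.json
```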