istio: Added customized metric doesn't work!
Bug description
- I followed the instruction 'Customizing Istio Metrics' to add a new metric dimension `request_id`.
- Used the command

  ```
  kubectl edit envoyfilter -n istio-system stats-filter-1.7
  ```

  and modified the value of `spec.configPatches[match.context: GATEWAY][patch.value.typed_config.value.config.configuration]` to

  ```
  {"debug":true,"metrics":[{"dimensions":{"request_id":"request.headers['x-request-id']"},"name":"requests_total"}],"stat_prefix":"istio","disable_host_header_fallback": true}
  ```

- Used the command

  ```
  kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml
  ```

  to deploy Prometheus.
- Used the command

  ```
  istioctl dashboard prometheus --address=0.0.0.0 &
  ```

  to expose the service.
- Opened the Prometheus page, entered the metric `istio_requests_total`, and clicked the Execute button: none of the returned data contains a `request_id` tag.
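Pretty-printed, the configuration JSON from the EnvoyFilter edit above is:

```json
{
  "debug": true,
  "metrics": [
    {
      "name": "requests_total",
      "dimensions": {
        "request_id": "request.headers['x-request-id']"
      }
    }
  ],
  "stat_prefix": "istio",
  "disable_host_header_fallback": true
}
```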
- [ ] Docs
- [ ] Installation
- [ ] Networking
- [ ] Performance and Scalability
- [x] Extensions and Telemetry
- [ ] Security
- [ ] Test and Release
- [ ] User Experience
- [ ] Developer Infrastructure
Expected behavior
Get some metric data containing the tag `request_id`.

When I replace the tag `request_id` with a default tag such as `request_host`, it works! A sample result follows.
```
istio_requests_total{app="istio-ingressgateway",chart="gateways",connection_security_policy="unknown",destination_app="productpage",destination_canonical_revision="v1",destination_canonical_service="productpage",destination_principal="spiffe://cluster.local/ns/book-info/sa/bookinfo-productpage",destination_service="productpage.book-info.svc.cluster.local",destination_service_name="productpage",destination_service_namespace="book-info",destination_version="v1",destination_workload="productpage-v1",destination_workload_namespace="book-info",heritage="Tiller",instance="10.244.0.91:15090",istio="ingressgateway",job="kubernetes-pods",kubernetes_namespace="istio-system",kubernetes_pod_name="istio-ingressgateway-77bcdbf694-9lsg5",pod_template_hash="77bcdbf694",release="istio",reporter="source",request_host="37a0a958-1d68-4550-8695-67fe5ee888dd",request_protocol="http",response_code="200",response_flags="-",service_istio_io_canonical_name="istio-ingressgateway",service_istio_io_canonical_revision="latest",source_app="istio-ingressgateway",source_canonical_revision="latest",source_canonical_service="istio-ingressgateway",source_principal="spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account",source_version="unknown",source_workload="istio-ingressgateway",source_workload_namespace="istio-system"}
```
Version (include the output of `istioctl version --remote` and `kubectl version --short` and `helm version` if you used Helm)
1.7.3
How was Istio installed?
```
istioctl install -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
  values:
    global:
      proxy:
        privileged: true
      tracer:
        zipkin:
          address: zipkin:9411
EOF
```
Environment where bug was observed (cloud vendor, OS, etc)
```
[root@k18-2 istio]# uname -r
5.2.4-1.el7.elrepo.x86_64
[root@k18-2 istio]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k18-2 istio]# kubectl version --short
Client Version: v1.18.6
Server Version: v1.18.6
[root@k18-2 istio]# istioctl version
client version: 1.7.3
control plane version: 1.7.3
data plane version: 1.7.3 (13 proxies)
```
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 22 (13 by maintainers)
As a workaround, please use podAnnotations on ingress deployment during installation https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec.
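The workaround could look roughly like this (a sketch only; it assumes the `sidecar.istio.io/extraStatTags` annotation from the metric-customization task and the `request_id` tag from this report — verify the annotation name against your Istio 1.7 docs):

```yaml
# Sketch: install-time IstioOperator overlay that sets podAnnotations on the
# ingress gateway pods via KubernetesResourcesSpec, as the maintainer suggests.
# The annotation name/value below are assumptions for illustration.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        podAnnotations:
          sidecar.istio.io/extraStatTags: request_id
```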
The issue is that the annotation needs to be copied into the ISTIO_METAJSON_ANNOTATIONS environment variable. This is done automatically by the injector for sidecars, but not for gateways. Adding this to the ingress pod next to the other ISTIO_META variables fixes the issue:
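The snippet itself was not preserved in this copy of the issue. A hedged sketch of what such an env entry could look like on the gateway Deployment (the JSON value is an assumption mirroring the pod annotation used for custom stat tags):

```yaml
# Sketch: ISTIO_METAJSON_ANNOTATIONS added manually to the istio-proxy
# container of the istio-ingressgateway Deployment, next to the other
# ISTIO_META* variables. Annotation name/value are illustrative assumptions.
spec:
  template:
    spec:
      containers:
      - name: istio-proxy
        env:
        - name: ISTIO_METAJSON_ANNOTATIONS
          value: '{"sidecar.istio.io/extraStatTags":"request_id"}'
```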
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2020-11-16. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.
Created by the issue and PR lifecycle manager.