istio: Pods injected twice will fail to startup
Bug description
When restoring a backup of an istio-injected namespace, the istio-proxy container on the pods goes into CrashLoopBackOff with the error:
error invalid prometheus scrape configuration: application port is the same as agent port, which may lead to a recursive loop. Ensure pod does not have prometheus.io/port=15020 label, or that injection is not happening multiple times
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Expected behavior
Pods are recreated without crashes in istio-proxy.
Steps to reproduce the bug
1. Install Velero with restic support
2. Take a Velero backup of a namespace with istio injection enabled
3. Delete the namespace
4. Restore the namespace from the backup
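The reproduction steps above can be sketched as CLI commands against a cluster. The namespace and backup names (`demo`, `demo-backup`, `demo-restore`) are placeholders, not values from the original report:

```shell
# Enable istio sidecar auto-injection on the namespace (placeholder names)
kubectl label namespace demo istio-injection=enabled

# Back up the namespace with Velero, then delete and restore it
velero backup create demo-backup --include-namespaces demo
kubectl delete namespace demo
velero restore create demo-restore --from-backup demo-backup

# The restored pods carry the already-injected spec; the webhook injects
# them again, and istio-proxy enters CrashLoopBackOff
kubectl get pods -n demo
```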
Version (include the output of `istioctl version --remote`, `kubectl version --short`, and `helm version` if you used Helm)
client version: 1.7.0
control plane version: 1.7.0
data plane version: 1.7.0 (35 proxies)
Client Version: v1.18.6+k3s1 Server Version: v1.18.6+k3s1
How was Istio installed? istioctl install --set profile=default
Environment where bug was observed (cloud vendor, OS, etc) K3OS cluster
The error says "Ensure pod does not have prometheus.io/port=15020 label". The pods do have that label, as do all the other pods that are not crashing. The error also says to check that injection is not happening multiple times, but the namespace description is the same as for all other namespaces with istio-injection enabled.
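One way to confirm whether a pod was actually injected twice (a hypothetical check, not something suggested in the thread) is to count how many times `istio-proxy` appears in the pod's container list. The container list is hard-coded below as a stand-in for the output of `kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'`:

```shell
# Stand-in for: kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'
# (a doubly injected pod lists "istio-proxy" more than once)
containers="istio-proxy app istio-proxy"

# Count exact matches of "istio-proxy"; more than one means double injection
count=$(echo "$containers" | tr ' ' '\n' | grep -cx 'istio-proxy')
if [ "$count" -gt 1 ]; then
  echo "double injection detected ($count istio-proxy containers)"
fi
```

On a live cluster, replace the hard-coded `containers` variable with the actual `kubectl` query for the crashing pod.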
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 17 (7 by maintainers)
Running into this as well using Velero for pod backup and restores. We cannot exclude the pods from the restore because they have volumes that need to be restored.
Although restoring with double injection (from the istio webhook and Velero) is a misconfiguration that #25931 would allow, Velero does not currently have functionality to exclude the istio-injected resources from its backups. I believe that making injection idempotent and printing a warning message is still better behavior than failing outright.
Can you post the pod spec that is triggering this? Are you sure you are not running `istioctl kube-inject` into an auto-injected namespace? Can you run `kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io`?