istio: Internal Kubernetes API Calls Blocked by Istio
Describe the bug
I’m installing a monitoring service into my pod that needs to call the Kubernetes API server. The request is being blocked by the Istio sidecar. If I disable istio-injection
and redeploy, everything works as expected. Do I need to enable anything to make this work?
Expected behavior: My pods can access the internal Kubernetes API.
Steps to reproduce the bug
curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods
from inside my pod never returns a response.
Version Istio:
Version: 1.0.2
GitRevision: d639408fded355fb906ef2a1f9e8ffddc24c3d64
User: root@
Hub: gcr.io/istio-release
GolangVersion: go1.10.1
BuildStatus: Clean
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Installation
helm install install/kubernetes/helm/istio \
--name istio \
--namespace istio-system \
--set certmanager.enabled=true
Environment Microsoft Azure AKS
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 3
- Comments: 26 (4 by maintainers)
Commits related to this issue
- fix: Don't let istio to instrument the webhook Istio and the api-sever does not play pretty well together, like pizza and pineapple: * https://github.com/istio/istio/issues/8696 * https://github.com... — committed to sysdiglabs/charts by deleted user 3 years ago
- [admission-controller] Don't let istio to instrument the webhook (#222) * fix: Don't let istio to instrument the webhook Istio and the api-sever does not play pretty well together, like pizza and... — committed to sysdiglabs/charts by deleted user 3 years ago
OK, so I have another solution: try this mesh-wide Sidecar config. Basically it will be applied to every Istio-managed pod, and it allows each pod to connect only to other pods in the same namespace, to the istio-system namespace, and to the Kubernetes API service.
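The Sidecar config itself was not captured in this thread; the following is a minimal sketch of what such a mesh-wide Sidecar resource could look like, based on the description above (the resource name and egress hosts are assumptions, not taken from the original comment):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system   # placed in the root namespace so it applies mesh-wide
spec:
  egress:
  - hosts:
    - "./*"                                       # pods/services in the pod's own namespace
    - "istio-system/*"                            # the Istio control plane
    - "*/kubernetes.default.svc.cluster.local"    # the Kubernetes API service
```

A Sidecar named `default` in the Istio root namespace (`istio-system` by default) becomes the default sidecar configuration for every namespace without a more specific Sidecar resource.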
Credit goes not to me but to @WilliamNewshutz and @GregoryHanson.
We have this problem too. Because the apiserver is now accessed over a public URL, I made the following ServiceEntry and VirtualService:
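The actual YAML was not captured in this thread; below is a hedged sketch of what a ServiceEntry and VirtualService for a publicly addressed AKS apiserver could look like. The FQDN `mycluster.hcp.westus2.azmk8s.io` is a hypothetical placeholder, not the commenter's real hostname:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aks-apiserver
spec:
  hosts:
  - mycluster.hcp.westus2.azmk8s.io   # hypothetical AKS apiserver FQDN
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: aks-apiserver
spec:
  hosts:
  - mycluster.hcp.westus2.azmk8s.io
  tls:
  - match:
    - port: 443
      sniHosts:
      - mycluster.hcp.westus2.azmk8s.io
    route:
    - destination:
        host: mycluster.hcp.westus2.azmk8s.io
        port:
          number: 443
```

Using `protocol: TLS` with SNI-based routing lets the sidecar pass the apiserver's own TLS through untouched rather than trying to terminate it.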
This gets me to the point where I can access the apiserver, but after 5 minutes it stops working and calls to the apiserver hang. Calls to Go’s net.LookupIP(host) with the FQDN of the AKS apiserver also hang during this period.
If I wait 10–15 minutes, the problem seems to resolve itself, but it starts failing again after another 5 minutes. I also found that making a request to the apiserver while it is working seems to delay the point where it stops: I made a request, then another one a minute later, and it started failing 5 minutes after the second request, not the first.
I should mention that curl requests made directly to the API server when I kubectl exec into the pod DO succeed. That made me think I was using client-go incorrectly, but the fact that net.LookupIP(host) also hangs made me decide that probably isn’t the case.
We just switched from Contour to Istio (1.9.4) on our dev environments and are running into this issue a lot.
We’ve modified our IstioOperator settings with the excludeIPRanges mentioned in this issue, but we’re still seeing the issue.
It tends to happen regularly when our nightly builds are deployed to our dev clusters (~2 AM PT). After enough restarts the problem seems to go away, but we have yet to find a surefire workaround.
Any other things we should be looking at?
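For reference, the excludeIPRanges setting mentioned above is typically applied through the IstioOperator proxy values. This is a minimal sketch; the CIDR shown is a hypothetical apiserver service range, not a value from this thread:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        # Traffic to these CIDRs bypasses the sidecar's iptables redirection entirely
        excludeIPRanges: "172.21.0.1/32"   # hypothetical apiserver ClusterIP
```

Because excluded traffic never enters the proxy, it also loses Istio's mTLS and telemetry for those destinations, which is why some commenters prefer the Sidecar or ServiceEntry approaches instead.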
We managed to get around this issue with the following DestinationRule in our services’ namespace:
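The DestinationRule itself was not captured here. A commonly reported variant of this fix disables Istio's TLS origination toward the apiserver, since the apiserver terminates its own TLS; the following is a sketch under that assumption, not the commenter's actual rule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kube-apiserver
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # don't originate Istio mTLS; let the client's own TLS reach the apiserver
```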
How about this (let’s assume kubernetes.default.svc.cluster.local = 172.21.0.1, since most cloud providers use a static IP range for Kubernetes internals):
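The config that followed this comment was not captured. One plausible reading, given the static ClusterIP assumption, is a ServiceEntry that registers the apiserver's address with the mesh; the sketch below is an assumption built from the 172.21.0.1 example in the comment:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kube-apiserver
spec:
  hosts:
  - kubernetes.default.svc.cluster.local
  addresses:
  - 172.21.0.1/32        # the assumed static apiserver ClusterIP from the comment
  location: MESH_INTERNAL
  resolution: STATIC
  ports:
  - number: 443
    name: https
    protocol: TLS
  endpoints:
  - address: 172.21.0.1
```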
Or, if only one pod is affected, simply add a pod annotation:
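The annotation itself was omitted from the scrape. Istio's sidecar supports excluding outbound IP ranges per pod via the `traffic.sidecar.istio.io/excludeOutboundIPRanges` annotation; a sketch using the ClusterIP assumed earlier in the thread:

```yaml
# Set on the pod template of the affected workload
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.21.0.1/32"
```

Traffic to the listed CIDR bypasses the sidecar's iptables redirection for that pod only, leaving the rest of the mesh configuration untouched.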
But the thing that works best for me is: