istio: Istio not working with headless service
Hello,
We are using Istio (installed from istio-demo.yaml) and have enabled auth on our service using an Istio auth policy.
service.yaml:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: service-testing
  namespace: ns-testing
spec:
  selector:
    app: env-t1
  ports:
  - name: ht
    protocol: TCP
    port: 5000
    targetPort: 5000
  clusterIP: None
```
deployment.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-testing
  namespace: ns-testing
spec:
  replicas: 1
  template:
    metadata:
      name: frontend-test
      labels:
        app: env-t1
        version: v1
        istio: ingressgateway
    spec:
      containers:
      - name: server-testing
        image: xyz/final_servere
```
The issue is: when we try to access this service with curl from the pod itself, using the server-testing container, we get the following error:
```
root@deployment-testing-7cdc8db575-fc2lx:/app# curl http://service-testing.ns-testing:5000/
curl: (56) Recv failure: Connection reset by peer
```
But when we access the same service with curl from the istio-proxy container of the same pod, it works:
```
istio-proxy@deployment-testing-7cdc8db575-fc2lx:/$ curl https://service-testing.ns-testing:5000/ --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
Hello World - 200 OK
```
We use unique port names across all our services, as recommended in #5992.
After removing `clusterIP: None`, everything works fine. But we have to use headless services in some of our deployments.
Versions: Istio 0.8, Kubernetes 10.3, etcd v3
What are we missing here?
About this issue
- State: open
- Created 6 years ago
- Reactions: 10
- Comments: 29 (3 by maintainers)
@mbanikazemi just wondering what version of Istio supports headless services? When did a fix go in?
Here is a key quote from #7558:
So I have a similar issue: I am running Cassandra in k8s in a StatefulSet with a headless service. This means pods access the Cassandra service directly via pod IPs, which Istio doesn't know anything about, so it blocks the connection. If I remove Cassandra from the mesh and create a `MESH_EXTERNAL` ServiceEntry with the pod's IP address (you have to for TCP), it seems to work. BUT one would never know the pod's IP address in advance, and these Istio objects go in with all the other Helm charts. You know what does know the pod's IP address... DNS. But you are forced to use a static IP. 😞 An easy fix would be to allow hostnames for TCP services in ServiceEntries: resolve the destination IP and listen on that address for TCP connections on the given ports. There must be a good reason you can't do this, but I am not sure.
OR, allow adding StatefulSets to the service registry, manually or automatically. Adding all pod IPs doesn't scale, of course, but StatefulSets are special; maybe they should be treated like k8s native services.
The workaround is predetermining your pod/service IP CIDRs and using those values in an IP exclusion or a TCP ServiceEntry.
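A `MESH_EXTERNAL` ServiceEntry of the kind described in that quote might look like the following sketch. All names, the port, and the pod IP `10.244.1.5` are placeholders, not values from the original thread; the point is that the pod IP must be known statically in advance:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cassandra-external       # placeholder name
  namespace: ns-testing          # placeholder namespace
spec:
  hosts:
  - cassandra.internal           # placeholder host; for TCP the addresses below are what matter
  addresses:
  - 10.244.1.5/32                # placeholder pod IP; must be predetermined
  ports:
  - number: 9042                 # Cassandra CQL port
    name: tcp-cql
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 10.244.1.5          # placeholder static endpoint
```

This is exactly the limitation the commenter complains about: the static IP is baked into the manifest, so it cannot be shipped generically alongside Helm charts.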
@dstockton I ran through my testing to verify the changes I made to Istio last week. Without mutual TLS, a ServiceEntry allows the RMQ nodes to cluster. With mutual TLS, only Policy and DestinationRule entries are required.
Headless Service
ServiceEntry
For mutual TLS, I created a Policy and a DestinationRule for each node
Policy
DestinationRule (1 of 3)
Clearly scaling this is not automatic, though using something like Ansible to add settings for nodes before scaling the replicas would work.
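The original YAML under the "Policy" and "DestinationRule" headings above was not preserved in this thread. A plausible per-node shape, reconstructed from the description (all names, the namespace, and the per-node host are placeholders), might be:

```yaml
# Hypothetical reconstruction -- not the commenter's actual manifests.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: rmq-node-0               # placeholder, one per node
  namespace: ns-testing          # placeholder
spec:
  targets:
  - name: rabbitmq               # placeholder service name
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rmq-node-0               # placeholder, one per node ("1 of 3")
  namespace: ns-testing          # placeholder
spec:
  # Per-node DNS name of a StatefulSet pod behind the headless service (placeholder).
  host: rabbitmq-0.rabbitmq.ns-testing.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Because each DestinationRule names a single pod's DNS entry, one rule is needed per replica, which is why the commenter notes that scaling this is not automatic.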
What version of Istio supports headless services?
Headless services work in Istio as expected. I am going to close this issue.
Our solution for now was moving the headless service out of the mesh and adding the port-exclusion annotation to the services that need to speak to the headless service. This way we can non-intrusively remove one service from the mesh instead of changing too much for it.
Annotation: `traffic.sidecar.istio.io/excludeOutboundPorts: "<port>"`
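In a Deployment, this annotation goes on the pod template's metadata. A minimal sketch, reusing the port 5000 and namespace from the example earlier in the thread (the Deployment name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment        # placeholder name
  namespace: ns-testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client                # placeholder label
  template:
    metadata:
      labels:
        app: client
      annotations:
        # Outbound traffic to this port bypasses the sidecar, so the pod
        # can reach the headless service directly via pod IPs.
        traffic.sidecar.istio.io/excludeOutboundPorts: "5000"
    spec:
      containers:
      - name: client
        image: curlimages/curl   # placeholder image
```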