istio: Istio not working with headless service

Hello,

We are using Istio installed from istio-demo.yaml, and we enabled auth on our service using an Istio auth policy.
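For context, an auth policy of this kind looks roughly like the following (a minimal sketch; the resource name is illustrative, and omitting targets makes it apply namespace-wide):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default        # illustrative name
  namespace: ns-testing
spec:
  peers:
  - mtls: {}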

service.yaml:

kind: Service
apiVersion: v1
metadata:
  name: service-testing
  namespace: ns-testing
spec:
  selector:
    app: env-t1
  ports:
  - name: ht
    protocol: TCP
    port: 5000
    targetPort: 5000
  clusterIP: None

deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-testing
  namespace: ns-testing
spec:
  replicas: 1
  template:
    metadata:
      name: frontend-test
      labels:
        app: env-t1
        version: v1
        istio: ingressgateway
    spec:
      containers:
      - name: server-testing
        image: xyz/final_servere

The issue: when we try to curl this service from the server-testing container inside the pod itself, we get the following error:

root@deployment-testing-7cdc8db575-fc2lx:/app# curl http://service-testing.ns-testing:5000/
curl: (56) Recv failure: Connection reset by peer

But when we curl the service from the istio-proxy container in the same pod, it works:

istio-proxy@deployment-testing-7cdc8db575-fc2lx:/$ curl https://service-testing.ns-testing:5000/ --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
Hello World - 200 OK

We have unique port names across all of our services, as recommended in #5992.

If we remove clusterIP: None, everything works fine.

But we have to use headless services in some of our deployments.

Versions: Istio 0.8, Kubernetes 10.3, etcd v3

What are we missing here?

About this issue

  • State: open
  • Created 6 years ago
  • Reactions: 10
  • Comments: 29 (3 by maintainers)

Most upvoted comments

@mbanikazemi Just wondering: what version of Istio supports headless services? When did a fix go in?

Here is a key quote from #7558:

The problem with headless services is one of scale. Accessing pods directly instead of service VIPs means we need to construct Envoy listeners for all possible pod IPs, which is not a scalable option. So, we resort to using a special passthrough listener in Envoy listening on the headless service port (0.0.0.0:someport) that forwards traffic for these ports to the IP that the client-app resolved by itself (pod IP). This is as low as things go (linux listen() call with INADDR_ANY on some port).

I have a similar issue: I am running Cassandra in Kubernetes in a StatefulSet with a headless service. This means pods access the Cassandra service directly via pod IPs, which Istio doesn't know anything about, so it blocks the connection. If I remove Cassandra from the mesh and create a MESH_EXTERNAL ServiceEntry with the pod's IP address (you have to for TCP), it seems to work. But one would never know the pod's IP address in advance, and these Istio objects go in with all the other Helm charts.
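For illustration, such an entry would look roughly like this (a sketch only; the hostname, pod IP, and CQL port 9042 are hypothetical placeholders):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cassandra-external
spec:
  hosts:
  - cassandra.internal     # placeholder hostname; for TCP, routing keys off the address below
  addresses:
  - 10.32.0.15/32          # hypothetical pod IP, which must be known in advance
  ports:
  - number: 9042
    name: tcp-cql
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 10.32.0.15    # the same hypothetical pod IP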

You know what does know the pod's IP address… DNS. But you are forced to use a static IP. 😞 An easy fix would be to allow hostnames for TCP services in ServiceEntries: resolve the destination IP and listen on that address for TCP connections on said ports. There must be a good reason you can't do this, but I am not sure what it is.

Or, allow adding StatefulSets to the service registry, manually or automatically. Adding all pod IPs doesn't scale, of course, but StatefulSets are special: their pods get stable, predictable DNS names. Maybe they should be treated like native Kubernetes Services.

The workaround is to predetermine your pod/service IP CIDRs and use those values in an IP exclusion or a TCP ServiceEntry, as sketched below.
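For the IP-exclusion variant, the sidecar annotation goes on the pod template of the workload that calls the headless service (a sketch; the Deployment name, image, and CIDR are hypothetical):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra-client   # hypothetical consumer of the headless service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra-client
      annotations:
        # outbound traffic to this CIDR bypasses the sidecar entirely
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.32.0.0/14"
    spec:
      containers:
      - name: client
        image: example/cassandra-client   # hypothetical image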

@dstockton I ran through my testing to verify the changes I had made to Istio last week. Without mutual TLS, a ServiceEntry allows the RMQ nodes to cluster. With mutual TLS, just Policy and DestinationRule entries are required.

Headless Service

---
apiVersion: v1
kind: Service
metadata:
  name: rmq
  labels:
    app: rmq
    type: LoadBalancer
spec:
  type: ClusterIP
  clusterIP: None
  ports:
   - name: epmd
     protocol: TCP
     port: 4369
     targetPort: 4369
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
   - name: http-api
     protocol: TCP
     port: 15672
     targetPort: 15672
   - name: inter-node
     protocol: TCP
     port: 25672
     targetPort: 25672
  selector:
    app: rmq

ServiceEntry

---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: rmq
spec:
  hosts:
  - rmq-0.rabbitmq.default.svc.cluster.local
  - rmq-1.rabbitmq.default.svc.cluster.local
  - rmq-2.rabbitmq.default.svc.cluster.local
  ports:
  - number: 4369
    name: epmd
    protocol: TCP
  - number: 5672
    name: amqp
    protocol: TCP
  - number: 15672
    name: http
    protocol: HTTP
  - number: 25672
    name: inter-node
    protocol: TCP

For mutual TLS, I created a Policy and a DestinationRule for each node.

Policy

---
apiVersion: "authentication.istio.io/v1alpha1"
kind: Policy
metadata:
  name: rabbitmq
spec:
  targets:
  - name: rmq-0
  - name: rmq-1
  - name: rmq-2
  peers:
  - mtls: {}

DestinationRule (1 of 3)

---
apiVersion: "networking.istio.io/v1alpha3"
kind: DestinationRule
metadata:
  name: rmq-0
spec:
  host: rmq-0.rabbitmq.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
    portLevelSettings:
    - port:
        number: 5672
      tls:
        mode: ISTIO_MUTUAL
    - port:
        number: 15672
      tls:
        mode: ISTIO_MUTUAL
    - port:
        number: 25672
      tls:
        mode: ISTIO_MUTUAL

Clearly, scaling this is not automatic, though using something like Ansible to add settings for each node before scaling the replicas would work; see the sketch below.
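A rough sketch of that idea, assuming Ansible's k8s module and an illustrative replica_count variable (mirroring the DestinationRule above, trimmed to one port for brevity):

# Hypothetical Ansible task: render one DestinationRule per RMQ node.
- name: Create per-node DestinationRules
  k8s:
    state: present
    definition:
      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: "rmq-{{ item }}"
      spec:
        host: "rmq-{{ item }}.rabbitmq.default.svc.cluster.local"
        trafficPolicy:
          tls:
            mode: DISABLE
          portLevelSettings:
          - port:
              number: 5672
            tls:
              mode: ISTIO_MUTUAL
  loop: "{{ range(0, replica_count | int) | list }}"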

Headless services work with Istio as expected. I am going to close this issue.

What version of Istio supports headless services?


Our solution for now was to move the headless service out of the mesh and add the port-exclusion annotation to the services that need to talk to the headless service. This way we can non-intrusively remove one service from the mesh instead of changing too much for it. The annotation: traffic.sidecar.istio.io/excludeOutboundPorts: "<port>". A placement sketch follows.
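For reference, a minimal sketch of where that annotation goes, assuming a hypothetical consumer Deployment and the port 5000 from the original report:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consumer           # hypothetical workload that calls the headless service
  namespace: ns-testing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consumer
      annotations:
        # outbound traffic on this port bypasses the sidecar
        traffic.sidecar.istio.io/excludeOutboundPorts: "5000"
    spec:
      containers:
      - name: consumer
        image: example/consumer   # hypothetical image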