istio: Unable to install istio on EKS with IPv6

Bug Description

Tried my best to install Istio on EKS, and every time the istio-ingressgateway pod fails to become ready. The readiness probe fails with Readiness probe failed: Get "http://[IPv6 endpoint of the pod]:15021/healthz/ready": dial tcp [same IPv6 endpoint]:15021: connect: connection refused. istioctl keeps waiting for 5 minutes and then fails to bring up Istio. Is there any IPv6-specific configuration I have to include to make this work?
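One way to reproduce the probe failure by hand is to hit the health endpoint from inside the pod (a sketch; it assumes curl is available in the proxy image):

kubectl -n istio-system exec deploy/istio-ingressgateway -c istio-proxy -- \
  curl -v http://localhost:15021/healthz/ready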

Version

client version: 1.12.2
control plane version: 1.12.2
data plane version: 1.12.2 (1 proxies)

Client Version: v1.23.2
Server Version: v1.21.5-eks-bc4871b
WARNING: version difference between client (1.23) and server (1.21) exceeds the supported minor version skew of +/-1

Didn't use Helm.

Additional Information

Target cluster context: devops@CLUSTER_NAME.ap-south-1.eksctl.io

Running with the following config:

istio-namespace: istio-system
full-secrets: false
timeout (mins): 30
include: { }
exclude: { Namespaces: kube-system,kube-public,kube-node-lease,local-path-storage }
end-time: 2022-01-24 16:45:47.136144463 +0530 IST

Cluster endpoint: https://HIDDEN.gr7.ap-south-1.eks.amazonaws.com
CLI version: version.BuildInfo{Version:"1.12.2", GitRevision:"af0d66fd0aa363e9a7b0164f3a94ba36252fe60f", GolangVersion:"go1.17.5", BuildStatus:"Clean", GitTag:"1.12.2"}

The following Istio control plane revisions/versions were found in the cluster:
Revision default: &version.MeshInfo{ { Component: "pilot", Info: version.BuildInfo{Version:"1.12.2", GitRevision:"af0d66fd0aa363e9a7b0164f3a94ba36252fe60f-dirty", GolangVersion:"", BuildStatus:"Modified", GitTag:"1.12.2"}, }, }

The following proxy revisions/versions were found in the cluster: Revision default: Versions {1.12.2}

Fetching proxy logs for the following containers:

istio-system/istio-ingressgateway/istio-ingressgateway-66cc5dcb7d-tgfwf/istio-proxy
istio-system/istiod/istiod-577b69955f-nx62b/discovery

Fetching Istio control plane information from cluster.

Running istio analyze on all namespaces and report as below:

Analysis Report:
Info [IST0102] (Namespace cert-manager) The namespace is not enabled for Istio injection. Run 'kubectl label namespace cert-manager istio-injection=enabled' to enable it, or 'kubectl label namespace cert-manager istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.

Creating an archive at /home/baalajimaestro/bug-report.tar.gz.
Cleaning up temporary files in /tmp/bug-report.
Done.

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 15 (12 by maintainers)

Most upvoted comments

The custom build is required to get it working: essentially, I cherry-picked the changes from #37164 on top of the 1.12.2 tag and built the Docker images with make docker.pilot docker.proxyv2 docker.operator docker.install-cni. I pushed those to a private container registry and changed the hub in the global settings of my Istio installation (see the sketch below). If you run with the CNI, you will also need the changes from #37166.
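For reference, pointing the install at a private hub looks roughly like this in IstioOperator format (a sketch; the registry and tag below are hypothetical placeholders, not the commenter's actual values):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  hub: my-registry.example.com/istio   # hypothetical private registry
  tag: 1.12.2-ipv6                     # hypothetical tag for the custom build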

To get IPv4 working, I had to add the following config as well (Helm format):

global:
  proxy:
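    # keep all IPv4 traffic out of the sidecar's iptables redirection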
    excludeIPRanges: '0.0.0.0/0'
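Since the reporter installed with istioctl rather than Helm, the same value should map under spec.values in an IstioOperator overlay (a sketch of the equivalent):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        excludeIPRanges: '0.0.0.0/0'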

IPv6 in EKS with the aws-cni adds a host-local NAT for IPv4 traffic by default. This is done through a new plugin in the CNI, which maps a private IPv4 address to each pod (a link-local address starting with 169.254). That address is only unique on the underlying host and is not advertised to Kubernetes anywhere.

You can disable this behavior and make it a pure IPv6 cluster by disabling the plugin in the CNI config; however, this will cost you IPv4 access to resources in the private subnets. Depending on the regional availability of NAT64 and DNS64, you may lose IPv4 access to the Internet as well.
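To see whether the plugin is active on a node, you can inspect the chained CNI config there (path assumed from a default aws-vpc-cni install; the IPv4 egress plugin should show up as an extra entry in the plugins array):

cat /etc/cni/net.d/10-aws.conflist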

Could you get the logs from the istio ingress pod?
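For example, using the pod name captured in the report above:

kubectl -n istio-system logs istio-ingressgateway-66cc5dcb7d-tgfwf -c istio-proxy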