istio: Multi-Cluster/Multi-Network - Cannot use a hostname-based gateway for east-west traffic

Bug description

Following the guide Install Multi-Primary on different networks, everything seems to install as expected without errors and is running in the cluster. For secret/cacerts I am using the example certificate material from samples/certs/*.pem in both cluster1 and cluster2.

When I attempt to verify the installation using the guide Verify the installation the requests are not getting routed to the remote cluster as expected. I am only getting responses from the service on the local cluster:

# From Cluster 1 [where helloworld v1 is deployed]
$ while true; do kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -s helloworld.sample:5000/hello; done
Hello version: v1, instance: helloworld-v1-578dd69f69-r9lkz
Hello version: v1, instance: helloworld-v1-578dd69f69-r9lkz
Hello version: v1, instance: helloworld-v1-578dd69f69-r9lkz
Hello version: v1, instance: helloworld-v1-578dd69f69-r9lkz
Hello version: v1, instance: helloworld-v1-578dd69f69-r9lkz
...
# From Cluster 2 [where helloworld v2 is deployed]
$ while true; do kubectl exec --context="${CTX_CLUSTER2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -s helloworld.sample:5000/hello; done
Hello version: v2, instance: helloworld-v2-776f74c475-h5j2q
Hello version: v2, instance: helloworld-v2-776f74c475-h5j2q
Hello version: v2, instance: helloworld-v2-776f74c475-h5j2q
Hello version: v2, instance: helloworld-v2-776f74c475-h5j2q
Hello version: v2, instance: helloworld-v2-776f74c475-h5j2q
...

istioctl proxy-config endpoint output for the sleep pod in cluster1 and cluster2, filtered to the helloworld destination service:

# Cluster 1
$ istioctl -n sample --context=${CTX_CLUSTER1} proxy-config endpoint "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" | grep helloworld
10.100.1.12:5000                 HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local

# Cluster 2
$ istioctl -n sample --context=${CTX_CLUSTER2} proxy-config endpoint "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" | grep helloworld
10.100.2.188:5000                HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local

It seems like something may be missing from the docs or example configs. I understand that the docs/examples are covered by tests, which is why I've been troubleshooting my own cluster first, but it still feels like something small is missing.

[X] Docs [X] Installation [X] Networking [ ] Performance and Scalability [ ] Extensions and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure [ ] Upgrade

Expected behavior Following the guide should produce the behavior the guide describes: requests load-balanced across both clusters, with responses from both v1 and v2.

Steps to reproduce the bug Follow the multi-primary, multi-network install guide, then the verification guide.

Version (include the output of istioctl version --remote and kubectl version --short and helm version --short if you used Helm)

$ istioctl version
client version: 1.8.0
control plane version: 1.8.0
data plane version: 1.8.0 (4 proxies)

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:09:16Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-eks-7684af", GitCommit:"7684af4ac41370dd109ac13817023cb8063e3d45", GitTreeState:"clean", BuildDate:"2020-10-20T22:57:40Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

How was Istio installed? Istio Operator installed with istioctl operator init; the rest of the installation in istio-system follows the steps in the guide [using the example manifests and the scripts that compile them].

Environment where the bug was observed (cloud vendor, OS, etc) AWS EKS with Kubernetes v1.17 Istio 1.8.0 on Mac

$ uname -a
Darwin OAK-MAC-HLJHD2 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_6

Additionally, please consider running istioctl bug-report and attach the generated cluster-state tarball to this issue. Refer to the cluster state archive for more details.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 20
  • Comments: 83 (45 by maintainers)

Most upvoted comments

I fixed this problem by adding the (temporary) ELB IPs in this part of the ConfigMap:

kubectl -n istio-system edit configmap istio

    meshNetworks: |-
      networks:
        network1:
          endpoints:
            - fromRegistry: cluster1
          gateways:
            - address: 52.21.68.210 # EKS ELB IP
              port: 15443
            - address: 100.24.96.26 # EKS ELB IP
              port: 15443
        network2:
          endpoints:
            - fromRegistry: cluster2
          gateways:
            - address: 52.127.65.88 # AKS IP
              port: 15443

Afterwards, you need to kill the istiod pod so it reloads the config.

This is working for me, but I'm waiting for a better solution and for the feature request to support CNAMEs in the gateway address, because load balancers in AWS use Elastic IPs/dynamic IPs!
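For reference, a minimal sketch of that reload step (assuming istiod runs as deployment/istiod in istio-system); a rollout restart is gentler than deleting the pod by hand:

```shell
# Sketch: restart istiod so it re-reads the edited ConfigMap.
# Assumes istiod is deployed as deployment/istiod in istio-system.
restart_istiod() {
  kubectl -n istio-system rollout restart deployment/istiod &&
    kubectl -n istio-system rollout status deployment/istiod
}
# Usage (requires a live cluster): restart_istiod
```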


Feature Request: +1

@nmittler @sonnysideup @stevenctl … Got it working by manually defining meshNetworks, so hopefully that helps confirm what is needed. I am surprised nobody else using AWS EKS and Istio 1.8 has run into this issue. Let me know if there are any other details I can provide which would help. Thanks for your assistance in the meantime!

Here are my notes from the test [all these resources have been destroyed already but can be replicated easily]:

Get Cluster 1 eastwestgateway Host/IP:

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system get svc/istio-eastwestgateway
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                           AGE
istio-eastwestgateway   LoadBalancer   172.20.165.219   a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com   15021:30119/TCP,15443:32032/TCP,15012:30470/TCP,15017:30736/TCP   13m

$ host a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com
a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com has address 44.235.109.1
a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com has address 35.155.121.141

Get Cluster 2 eastwestgateway Host/IP:

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system get svc/istio-eastwestgateway
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                           AGE
istio-eastwestgateway   LoadBalancer   172.20.252.110   a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com   15021:31407/TCP,15443:30253/TCP,15012:32116/TCP,15017:32546/TCP   12m

$ host a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com
a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com has address 44.232.122.66
a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com has address 35.155.125.147

Desired change to data.meshNetworks:

  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443

Define Cluster 1’s configmap/istio data.meshNetworks manually

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system edit configmap/istio

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system get configmap/istio -o yaml
apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        DNS_AGENT: ""
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mesh":"accessLogFile: /dev/stdout\ndefaultConfig:\n  discoveryAddress: istiod.istio-system.svc:15012\n  meshId: mesh1\n  proxyMetadata:\n    DNS_AGENT: \"\"\n  tracing:\n    zipkin:\n      address: zipkin.istio-system:9411\nenablePrometheusMerge: true\nrootNamespace: istio-system\ntrustDomain: cluster.local","meshNetworks":"networks: {}"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.8.0","release":"istio"},"name":"istio","namespace":"istio-system"}}
  creationTimestamp: "2020-12-04T18:53:32Z"
  labels:
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.0
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "8054"
  selfLink: /api/v1/namespaces/istio-system/configmaps/istio
  uid: 134aeebe-19e1-4ed3-8347-d1069c7468ee

Define Cluster 2’s configmap/istio data.meshNetworks manually

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system edit configmap/istio

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system get configmap/istio -o yaml
apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        DNS_AGENT: ""
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mesh":"accessLogFile: /dev/stdout\ndefaultConfig:\n  discoveryAddress: istiod.istio-system.svc:15012\n  meshId: mesh1\n  proxyMetadata:\n    DNS_AGENT: \"\"\n  tracing:\n    zipkin:\n      address: zipkin.istio-system:9411\nenablePrometheusMerge: true\nrootNamespace: istio-system\ntrustDomain: cluster.local","meshNetworks":"networks: {}"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.8.0","release":"istio"},"name":"istio","namespace":"istio-system"}}
  creationTimestamp: "2020-12-04T18:55:04Z"
  labels:
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.0
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "8089"
  selfLink: /api/v1/namespaces/istio-system/configmaps/istio
  uid: 06c181d7-5dd7-477e-9591-f51a680a409

istiod logs confirming configmap changes are picked up:

2020-12-04T19:11:21.877673Z    info    mesh networks configuration updated to: {
    "networks": {
        "network1": {
            "endpoints": [
                {
                    "fromRegistry": "cluster1"
                }
            ],
            "gateways": [
                {
                    "address": "44.235.109.1",
                    "port": 15443
                },
                {
                    "address": "35.155.121.141",
                    "port": 15443
                }
            ]
        },
        "network2": {
            "endpoints": [
                {
                    "fromRegistry": "cluster2"
                }
            ],
            "gateways": [
                {
                    "address": "44.232.122.66",
                    "port": 15443
                },
                {
                    "address": "35.155.125.147",
                    "port": 15443
                }
            ]
        }
    }
}

It works - we see responses from v1 and v2!

$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep     "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')"     -- sh -c "while true; do curl -s helloworld.sample:5000/hello; done"
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
EXTERNAL-IP
a5e21e07fd1a64a518ab6c02b4dfb9f5-826145575.us-west-2.elb.amazonaws.com

This may be the issue. If you run kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system -o yaml, is there an IP address at either status.loadBalancer.ingress or under spec.externalIPs? Those are the only two address types we allow for auto-gateway discovery (via the topology.istio.io/network label on the Service).
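A quick way to check which address type the Service exposes, sketched as a helper (the function name is illustrative; jsonpath fields are the standard Service status fields):

```shell
# Sketch: print ip/hostname pairs from the east-west gateway Service.
# EKS NLBs typically publish only a hostname, which Istio's
# auto-gateway discovery cannot use.
gateway_addresses() {
  ctx="$1"
  kubectl --context="$ctx" -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{range .status.loadBalancer.ingress[*]}{.ip}{" "}{.hostname}{"\n"}{end}'
}
# Usage (requires a live cluster): gateway_addresses "${CTX_CLUSTER1}"
```

If the output shows only a hostname and no IP, auto-discovery will not pick the gateway up.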

If you know the IP, you may be able to use a legacy type of configuration, meshNetworks, to manually specify the addresses to use for the gateways:

values:
  global:
    meshNetworks:
      network1:
        endpoints:
        - fromRegistry: cluster1
        gateways:
        - address: 1.2.3.4
          port: 15443
      network2:
        endpoints:
        - fromRegistry: cluster2
        gateways:
        - address: 5.6.7.8
          port: 15443

This would be included in the install operator for all clusters and would need to be identical in every cluster in the mesh.

More info: https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshNetworks
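As a sketch, the values block above sits under spec.values of an IstioOperator overlay; the file name and the 1.2.3.4/5.6.7.8 addresses below are placeholders, not real gateway IPs:

```shell
# Sketch: write the legacy meshNetworks values as an IstioOperator overlay.
# The file name and gateway addresses are placeholders.
cat > meshnetworks-overlay.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshNetworks:
        network1:
          endpoints:
          - fromRegistry: cluster1
          gateways:
          - address: 1.2.3.4
            port: 15443
        network2:
          endpoints:
          - fromRegistry: cluster2
          gateways:
          - address: 5.6.7.8
            port: 15443
EOF
# Apply the identical overlay to every cluster (requires a live cluster):
#   istioctl install --context="${CTX_CLUSTER1}" -f meshnetworks-overlay.yaml -y
#   istioctl install --context="${CTX_CLUSTER2}" -f meshnetworks-overlay.yaml -y
```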

Internal, and yes, they won't change. We manage creation of those NLBs via the ALB ingress controller. We just run this job on a cron so that if we ever need to delete/recreate this infra, we aren't in the business of manually updating these ConfigMaps and service entries. The job only writes if any of the IPs have changed.

It can be a simple k8s Job, as opposed to a CronJob, as well.
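Such a job could be sketched roughly like this (function names and flow are assumptions, not the commenter's actual script):

```shell
# Resolve a load balancer hostname to its current IPs (needs DNS).
# `getent hosts` prints "IP hostname" lines; keep the IP column.
resolve_ips() {
  getent hosts "$1" | awk '{print $1}' | sort -u
}

# Render a meshNetworks gateways stanza from a list of IPs.
render_gateways() {
  for ip in "$@"; do
    printf '          - address: %s\n            port: 15443\n' "$ip"
  done
}

# Example rendering with the IPs from the issue above:
render_gateways 44.235.109.1 35.155.121.141
```

A real job would feed the output of resolve_ips into render_gateways, rebuild data.meshNetworks in the istio ConfigMap, and only write (e.g. via kubectl patch) when the IPs differ from what is already there.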

@bryankaraffa I’m glad you got this working. I think we definitely need to do:

  1. more investigation into AWS, getting stable IPs for ingress, etc.
  2. decide whether or not we can support hostnames for the gateway.
  • I’m not very confident, due to the way we implement multicluster by including the gateway as one of the lb_endpoints in an EDS-type cluster

https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#Network-IstioNetworkGateway

The address field should support an externally resolvable hostname. I think we should be able to support auto-discovering the LoadBalancer.hostname field as well, I’ll open a separate issue for adding that and hopefully it will be available in a future release.

For the time being, we should add a section to the doc explaining this alternative config when there isn’t an IP.

@tr-srij can you share your script? I’ve written a script in Golang, but I need more time for testing.

@nmittler @sonnysideup @stevenctl … Got it working by manually defining meshNetworks so hopefully that helps confirm what is needed. I am surprised nobody else using AWS EKS and Istio 1.8 have run into this issue. Let me know if there’s any other details I can provide which would help. Thanks for your assistance in the meantime!

Here’s my notes from the test [all these resources have been destroy already but can be replicated easily]:

Get Cluster 1 eastwestgateway Host/IP:

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system get svc/istio-eastwestgateway
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                           AGE
istio-eastwestgateway   LoadBalancer   172.20.165.219   a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com   15021:30119/TCP,15443:32032/TCP,15012:30470/TCP,15017:30736/TCP   13m

$ host a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com
a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com has address 44.235.109.1
a5700a42e952e42568fab49239b17071-2044969611.us-west-2.elb.amazonaws.com has address 35.155.121.141

Get Cluster 2 eastwestgateway Host/IP:

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system get svc/istio-eastwestgateway
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                           AGE
istio-eastwestgateway   LoadBalancer   172.20.252.110   a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com   15021:31407/TCP,15443:30253/TCP,15012:32116/TCP,15017:32546/TCP   12m

$ host a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com
a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com has address 44.232.122.66
a333ff3671278406f804e20da78c800d-1702858926.us-west-2.elb.amazonaws.com has address 35.155.125.147

Desired change to data.meshNetworks:

  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443
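
Since the ELB only publishes a hostname, the addresses above have to be resolved by hand. A minimal sketch of automating that step (the `emit_network` helper is mine, not an Istio tool; it assumes the network/registry names from the guide and that `dig` is available to resolve each gateway hostname):

```shell
# Emit one network entry of a meshNetworks document for a set of gateway IPs.
emit_network() {
  local network="$1" registry="$2"; shift 2
  printf '  %s:\n    endpoints:\n      - fromRegistry: %s\n    gateways:\n' "$network" "$registry"
  for ip in "$@"; do
    printf '      - address: %s\n        port: 15443\n' "$ip"
  done
}

# In practice the IPs would come from DNS, e.g.:
#   IPS1=$(dig +short "$EW_HOST1")
# Using the IPs from this comment for illustration:
{
  echo 'networks:'
  emit_network network1 cluster1 44.235.109.1 35.155.121.141
  emit_network network2 cluster2 44.232.122.66 35.155.125.147
} > meshNetworks.yaml
```

The resulting file matches the `data.meshNetworks` block above and can be merged into the `istio` ConfigMap. Note the caveat discussed later in this thread: ELB IPs can change, so resolving once is only safe with static (EIP-backed) addresses.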

Define Cluster 1’s configmap/istio data.meshNetworks manually

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system edit configmap/istio

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system get configmap/istio -o yaml
apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        DNS_AGENT: ""
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mesh":"accessLogFile: /dev/stdout\ndefaultConfig:\n  discoveryAddress: istiod.istio-system.svc:15012\n  meshId: mesh1\n  proxyMetadata:\n    DNS_AGENT: \"\"\n  tracing:\n    zipkin:\n      address: zipkin.istio-system:9411\nenablePrometheusMerge: true\nrootNamespace: istio-system\ntrustDomain: cluster.local","meshNetworks":"networks: {}"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.8.0","release":"istio"},"name":"istio","namespace":"istio-system"}}
  creationTimestamp: "2020-12-04T18:53:32Z"
  labels:
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.0
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "8054"
  selfLink: /api/v1/namespaces/istio-system/configmaps/istio
  uid: 134aeebe-19e1-4ed3-8347-d1069c7468ee

Define Cluster 2’s configmap/istio data.meshNetworks manually

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system edit configmap/istio

$ kubectl --context="${CTX_CLUSTER2}" -n istio-system get configmap/istio -o yaml
apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        DNS_AGENT: ""
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: |-
    networks:
      network1:
        endpoints:
          - fromRegistry: cluster1
        gateways:
          - address: 44.235.109.1
            port: 15443
          - address: 35.155.121.141
            port: 15443
      network2:
        endpoints:
          - fromRegistry: cluster2
        gateways:
          - address: 44.232.122.66
            port: 15443
          - address: 35.155.125.147
            port: 15443
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mesh":"accessLogFile: /dev/stdout\ndefaultConfig:\n  discoveryAddress: istiod.istio-system.svc:15012\n  meshId: mesh1\n  proxyMetadata:\n    DNS_AGENT: \"\"\n  tracing:\n    zipkin:\n      address: zipkin.istio-system:9411\nenablePrometheusMerge: true\nrootNamespace: istio-system\ntrustDomain: cluster.local","meshNetworks":"networks: {}"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.8.0","release":"istio"},"name":"istio","namespace":"istio-system"}}
  creationTimestamp: "2020-12-04T18:55:04Z"
  labels:
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.0
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "8089"
  selfLink: /api/v1/namespaces/istio-system/configmaps/istio
  uid: 06c181d7-5dd7-477e-9591-f51a680a409

istiod logs confirming configmap changes are picked up:

2020-12-04T19:11:21.877673Z    info    mesh networks configuration updated to: {
    "networks": {
        "network1": {
            "endpoints": [
                {
                    "fromRegistry": "cluster1"
                }
            ],
            "gateways": [
                {
                    "address": "44.235.109.1",
                    "port": 15443
                },
                {
                    "address": "35.155.121.141",
                    "port": 15443
                }
            ]
        },
        "network2": {
            "endpoints": [
                {
                    "fromRegistry": "cluster2"
                }
            ],
            "gateways": [
                {
                    "address": "44.232.122.66",
                    "port": 15443
                },
                {
                    "address": "35.155.125.147",
                    "port": 15443
                }
            ]
        }
    }
}

It works — we now see responses from both v1 and v2:

$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep     "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')"     -- sh -c "while true; do curl -s helloworld.sample:5000/hello; done"
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v1, instance: helloworld-v1-578dd69f69-x2nv9
Hello version: v2, instance: helloworld-v2-776f74c475-skrtm

I don’t know the capability in AWS, hence this question: on Azure AKS, a LoadBalancer-type Kubernetes service supports statically assigning an IP (both private and public) via the loadBalancerIP property. Does AWS support a static IP on the LoadBalancer-type service used for the east-west gateway?

EXTERNAL-IP
a5e21e07fd1a64a518ab6c02b4dfb9f5-826145575.us-west-2.elb.amazonaws.com

This may be the issue. If you run kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system -o yaml, is there an IP address at either status.loadBalancer.ingress or under spec.externalIPs? Those are the only two address types we allow for automatic gateway discovery (via the topology.istio.io/network label on the Service).

@nmittler – confirming there are no IPs under status.loadBalancer.ingress or spec.externalIPs:

$ kubectl get --context=$CTX_CLUSTER1 svc istio-eastwestgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-eastwestgateway","install.operator.istio.io/owning-resource":"eastwest","install.operator.istio.io/owning-resource-namespace":"istio-system","istio":"eastwestgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.8.0","release":"istio","topology.istio.io/network":"network1"},"name":"istio-eastwestgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"mtls","port":15443,"protocol":"TCP","targetPort":15443},{"name":"tcp-istiod","port":15012,"protocol":"TCP","targetPort":15012},{"name":"tcp-webhook","port":15017,"protocol":"TCP","targetPort":15017}],"selector":{"app":"istio-eastwestgateway","istio":"eastwestgateway","topology.istio.io/network":"network1"},"type":"LoadBalancer"}}
  creationTimestamp: "2020-12-03T19:19:35Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: istio-eastwestgateway
    install.operator.istio.io/owning-resource: eastwest
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: eastwestgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.0
    release: istio
    topology.istio.io/network: network1
  name: istio-eastwestgateway
  namespace: istio-system
  resourceVersion: "2493"
  selfLink: /api/v1/namespaces/istio-system/services/istio-eastwestgateway
  uid: af17e671-8853-4235-a758-66e8a5b8617d
spec:
  clusterIP: 172.20.81.177
  externalTrafficPolicy: Cluster
  ports:
  - name: status-port
    nodePort: 30079
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: mtls
    nodePort: 30893
    port: 15443
    protocol: TCP
    targetPort: 15443
  - name: tcp-istiod
    nodePort: 32378
    port: 15012
    protocol: TCP
    targetPort: 15012
  - name: tcp-webhook
    nodePort: 31536
    port: 15017
    protocol: TCP
    targetPort: 15017
  selector:
    app: istio-eastwestgateway
    istio: eastwestgateway
    topology.istio.io/network: network1
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: aaf17e67188534235a75866e8a5b8617-938382465.us-west-2.elb.amazonaws.com

This seems similar/related to what I ran into with a multi-cluster setup on Istio 1.7 on AWS EKS, on the step where you get the remote cluster ingress hostname. The docs suggested using -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}', but I had to use -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}' to get a valid value for the ServiceEntry in the next step.

Same on my current cluster as well:

$ kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-eastwestgateway     -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}'

$ kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-eastwestgateway     -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
aaf17e67188534235a75866e8a5b8617-938382465.us-west-2.elb.amazonaws.com
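
A small sketch of the fallback described above (the `gw_address` wrapper name is mine): try the `.ip` field first and fall back to `.hostname`, since AWS ELBs publish only a hostname in the ingress status.

```shell
# Return the east-west gateway's load-balancer address for a given context,
# preferring an IP and falling back to the ELB hostname.
gw_address() {
  local ctx="$1" ip
  ip=$(kubectl get --context="$ctx" svc --selector=app=istio-eastwestgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
  if [ -n "$ip" ]; then
    echo "$ip"
    return
  fi
  kubectl get --context="$ctx" svc --selector=app=istio-eastwestgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
}
```

Note that a hostname obtained this way still hits the limitation this issue is about: istiod will not accept it as a gateway address for east-west traffic.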

@rinormaloku FYI the reason the Service must exist in both clusters is so that the Service’s hostname resolves to some IP just to get the request out of the client workload and to its sidecar proxy.

@sonnysideup @bryankaraffa

The cluster name in the context field of the remote secret is not what istiod will use for the cluster name. Rather, verify the key under the secret’s data (or stringData). These are likely the same, but it’s worth checking. Do the cluster IDs in your logs match what you have in your IstioOperator config, or are they generated names inferred from your local kubeconfig file?

Also curious what happens if you restart istiod.

Here is my workaround for the problem as a shell script running in a CronJob: https://github.com/markszabo/istio-crosscluster-workaround-for-eks Any feedback is welcome!

I’m experiencing this same issue

2021-09-28T10:13:34.603924Z	warn	model	Failed parsing gateway address a9bb0aca6a026447b953d4b5c02d49ef-a670fe8f0737be62.elb.eu-central-1.amazonaws.com from Service Registry. Hostnames are not supported for gateways
2021-09-28T10:13:34.603947Z	warn	model	Failed parsing gateway address a9bb0aca6a026447b953d4b5c02d49ef-a670fe8f0737be62.elb.eu-central-1.amazonaws.com from Service Registry. Hostnames are not supported for gateways

@markszabo not sure what’s up, but your cert still seems borked and Chrome is rejecting it. I downloaded the cert with openssl (visible here) and it looks pretty borked.

Back to the matter at hand: I’m not sure if anyone else has hit this when applying the workaround to a multi-primary deployment, but I allocated EIPs to two NLBs here and updated the associated config, and now the proxy endpoints for the sample workloads are broken; I see the envoy error LbEndpointValidationError.LoadBalancingWeight: value must be greater than or equal to 1 in the istiod logs (here, from west):

2021-09-23T13:08:01.396294Z	warn	ads	ADS:EDS: ACK ERROR sleep-557747455f-8mtxg.sample-105 Internal:Proto constraint validation failed (ClusterLoadAssignmentValidationError.Endpoints[0]: embedded message failed validation | caused by LocalityLbEndpointsValidationError.LbEndpoints[1]: embedded message failed validation | caused by LbEndpointValidationError.LoadBalancingWeight: value must be greater than or equal to 1): cluster_name: "outbound|80||sleep.sample.svc.cluster.local"
endpoints {
  locality {
    region: "us-east-1"
    zone: "us-east-1a"
  }
  lb_endpoints {
    endpoint {
      address {
        socket_address {
          address: "US-EAST-1A-IP"
          port_value: 15443
        }
      }
    }
    metadata {
      filter_metadata {
        key: "envoy.transport_socket_match"
        value {
          fields {
            key: "tlsMode"
            value {
              string_value: "istio"
            }
          }
        }
      }
      filter_metadata {
        key: "istio"
        value {
          fields {
            key: "workload"
            value {
              string_value: ";;;;us-west-2-cluster"
            }
          }
        }
      }
    }
    load_balancing_weight {
      value: 1
    }
  }
  lb_endpoints {
    endpoint {
      address {
        socket_address {
          address: "US-EAST-1B-IP"
          port_value: 15443
        }
      }
    }
    metadata {
      filter_metadata {
        key: "envoy.transport_socket_match"
        value {
          fields {
            key: "tlsMode"
            value {
              string_value: "istio"
            }
          }
        }
      }
      filter_metadata {
        key: "istio"
        value {
          fields {
            key: "workload"
            value {
              string_value: ";;;;us-west-2-cluster"
            }
          }
        }
      }
    }
    load_balancing_weight {
    }
  }
...

Fwiw, kubectl get cm -n istio-system istio -ojsonpath='{.data.meshNetworks}':

networks:
  us-east-1-vpc-id:
    endpoints:
    - fromRegistry: us-east-1-cluster
    gateways:
    - address: US-EAST-1A-IP
      port: 15443
    - address: US-EAST-1B-IP
      port: 15443
    - address: US-EAST-1C-IP
      port: 15443
  us-west-2-vpc-id:
    endpoints:
    - fromRegistry: us-west-2-cluster
    gateways:
    - address: US-WEST-2A-IP
      port: 15443
    - address: US-WEST-2B-IP
      port: 15443
    - address: US-WEST-2C-IP
      port: 15443

I run into this issue too, and ended up writing down the workaround described above: https://szabo.jp/2021/09/22/multicluster-istio-on-eks/

@carnei-ro that’s a great suggestion – I’ve put it in this doc and hopefully we can prioritize the impl soon

https://docs.google.com/document/d/1Sbg6hyO9NAOagtHxsg6H-OlQKoTpy8zaXVEphlj5R-M/edit?usp=sharing&resourcekey=0-Aqzm_-tOzxlN46Qijpf7cw

The current implementation just resolves each time EDS is pushed, but using the TTL and pushing endpoints when it’s about to expire could make it a bit more robust.

By using the TTL, it would also be possible to use Classic ELBs (whose CNAMEs point to ephemeral IPs).

I’m running into the same issue. I followed the installation guide, but unfortunately it did not work. I’m using kind (kubernetes-in-docker). Is it possible to achieve communication between clusters with kind and the Istio east-west gateway?

Although if it’s static, users could also just resolve the IP themselves, and set externalIPs on their gateway Service, which doesn’t hide the underlying issue.
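
A sketch of that externalIPs alternative (the `set_external_ips` function is illustrative, not an official tool; it assumes static, EIP-backed NLB addresses and that `dig` is available, since resolving once is only safe when the IPs never change):

```shell
# Resolve the east-west gateway's ELB hostname once and write the resulting
# IPs into spec.externalIPs on the gateway Service, so istiod's automatic
# gateway discovery can find addresses it accepts.
set_external_ips() {
  local ctx="$1" host ips
  host=$(kubectl --context="$ctx" -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  # Quote each resolved IP and join with commas to build a JSON array body.
  ips=$(dig +short "$host" | sed 's/.*/"&"/' | tr '\n' ',')
  ips=${ips%,}
  kubectl --context="$ctx" -n istio-system patch svc istio-eastwestgateway \
    --type=merge -p "{\"spec\":{\"externalIPs\":[${ips}]}}"
}
```

As noted above, this doesn’t hide the underlying issue — it only works while the resolved IPs stay valid.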

After doing a bit of research in envoy issues/docs, I confirmed there isn’t a way to get DNS resolution with EDS clusters in envoy. The two options I’ve been able to find are:

  1. Resolve DNS in the control plane, which has obvious issues when the IP is dynamic.
  2. Use STRICT_DNS/LOGICAL_DNS clusters, which as @howardjohn pointed out would have pretty bad performance: each time in-cluster Pod endpoints change, we’d have to push all clusters to envoy and close connections.

It might make sense to have a feature-flag to re-enable the eager DNS resolution in the short term, for users who are confident that the IP is static.

@bryankaraffa I got this working somewhat in AWS but let me re-describe my full setup for context:

  • Using Primary-Remote on different networks setup
  • Running two AWS EKS clusters inside separate VPCs
  • Each EKS cluster spans 2 AZs with 2 public and 2 private subnets
  • Currently creating LBs for K8s API / Istio ingress-gateway / Istio eastwest-gateway inside public subnets (i.e. open access)
  • Pre-allocated EIPs for eastwest gateway NLBs in each VPC.

1. Allocate NLBs for Gateways

I configured the serviceAnnotations section of my eastwest-gateway manifest to create an NLB and associate my pre-allocated EIPs. After a brief period, the NLBs became active and the target groups registered as “healthy”.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: cdps
spec:
  revision: ""
  profile: empty
  components:
    ingressGateways:
      - name: istio-cdps-gateway
        label:
          istio: cdps-gateway
          app: istio-cdps-gateway
          topology.istio.io/network: <cluster-network-name>
        enabled: true
        k8s:
          hpaSpec:
            # this number should be the same as the number of AZs your NLB spans, otherwise 
            minReplicas: 2
          affinity:
            # spread gateway pods across hosts (it's probably better to use the AZ as the term)
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values:
                            - istio-cdps-gateway
                    topologyKey: kubernetes.io/hostname
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-type: nlb
            # allocate an EIP for each AZ your NLB spans
            service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-XXXX,eipalloc-YYYY
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          env:
            # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            # traffic through this gateway should be routed inside the network
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: dp-network
          service:
            # without this, istiod will report "TLS handshake error from" gateway when connecting
            externalTrafficPolicy: Local
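Once this manifest is applied (e.g. with `istioctl install -f`), you can check that the Service picked up the NLB address; the mTLS auto-passthrough port for east-west traffic is 15443. A sketch, assuming the component/Service name above:

```shell
# External address assigned to the gateway Service. On AWS the NLB may
# surface under .hostname instead of .ip; with pre-allocated EIPs the
# addresses themselves are stable either way.
kubectl -n istio-system get svc istio-cdps-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
```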

2. Update istio.istio-system ConfigMap

Manually updating the meshNetworks section (https://github.com/istio/istio/issues/29359#issuecomment-738970767) of these configmaps is still required. What’s different here is that the EIPs are stable and will NOT change over time. This appears to be a reasonable setup for a production deployment and should work until support for ELB hostnames becomes available.
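A sketch of what that meshNetworks section can look like with the static EIPs (network names follow this thread; fromRegistry values and addresses are placeholders, and 15443 is the gateway's mTLS auto-passthrough port):

```yaml
# Sketch of the meshNetworks block inside the istio ConfigMap.
meshNetworks:
  cp-network:
    endpoints:
      - fromRegistry: cluster1        # registry/cluster name (placeholder)
    gateways:
      - address: 203.0.113.10         # EIP of cluster1's eastwest NLB (placeholder)
        port: 15443
  dp-network:
    endpoints:
      - fromRegistry: cluster2        # registry/cluster name (placeholder)
    gateways:
      - address: 203.0.113.20         # EIP of cluster2's eastwest NLB (placeholder)
        port: 15443
```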

@bryankaraffa Looks like that validation added in https://github.com/istio/istio/pull/23311 conflicts with other logic we have in pilot. It does still seem risky to use a hostname here the way hostname support is currently implemented (eagerly resolving DNS rather than resolving it at the proxy).

I followed the procedure exactly as described in the 1.8 multi-primary, multi-network guide and it worked without any issues. I was able to successfully complete the multicluster verification as described here --> https://istio.io/latest/docs/setup/install/multicluster/verify/ In fact, the whole procedure worked perfectly the very first time, and within an hour I was up and running with multi-cluster.

A few things I found out during the procedure (they are well documented in the instructions):

You have to create a stub namespace and stub service in cluster1 (same namespace and same service name) to access the service in cluster2. This is well documented in the procedure, but I'm pointing it out because it is an important step. So is the remote secret: cluster1 should be able to access the kube API on cluster2 and vice versa. If you are running hosted Kubernetes, make sure your kube API is accessible from both clusters; if you have IP ACLs on the kube API, make sure you allow access from the other cluster. Also, cluster1 should be able to reach the east-west gateway on cluster2 and vice versa; once again, IP ACLs, network security groups, etc. need to allow this.
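For the record, the remote-secret step from the guide looks like this (1.8-era `istioctl x create-remote-secret`; contexts and cluster names are placeholders):

```shell
# Give each cluster's istiod API access to the other cluster.
istioctl x create-remote-secret --context="${CTX_CLUSTER2}" --name=cluster2 |
  kubectl apply -f - --context="${CTX_CLUSTER1}"
istioctl x create-remote-secret --context="${CTX_CLUSTER1}" --name=cluster1 |
  kubectl apply -f - --context="${CTX_CLUSTER2}"
```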

Yeah, I created the service in both the primary and remote clusters, and I deployed the actual workload to the remote cluster. I’m not able to address the desired service without creating it inside the primary, so I guess service mirroring is not supported. When you do that, do you see endpoints created inside your primary cluster?

After I create the helloworld app inside the remote cluster, I see the following logs from istiod inside the primary cluster:

istiod-655559c95b-q7425 discovery 2020-12-03T13:35:37.473614Z   info    ads     Incremental push, service helloworld.example.svc.cluster.local has no endpoints
istiod-655559c95b-q7425 discovery 2020-12-03T13:35:37.573743Z   info    ads     Push debounce stable[93] 1: 100.081671ms since last change, 100.08147ms since last push, full=false
istiod-655559c95b-q7425 discovery 2020-12-03T13:35:37.573812Z   info    ads     XDS: Incremental Pushing:2020-12-03T13:30:33Z/50 ConnectedEndpoints:3

This looks suspicious and makes me think something is awry; otherwise I’d end up with endpoints being created inside the primary cluster, no?
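One way to answer that directly is to dump the endpoints Envoy actually holds for the helloworld cluster; if cross-network routing is working, the remote eastwest-gateway's address should appear here (pod and context names taken from the outputs below; adjust to your setup):

```shell
# Endpoints Envoy has for the helloworld service (port 5000).
istioctl --context control-plane -n platform proxy-config endpoint \
  sleep-8f795f47d-4xjwf \
  --cluster "outbound|5000||helloworld.example.svc.cluster.local"
```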

Unfortunately, that step has already been performed. 😢

covid19:istio-files sonny$ kubectl --context control-plane get ns istio-system --show-labels
NAME           STATUS   AGE     LABELS
istio-system   Active   3h36m   topology.istio.io/network=cp-network
covid19:istio-files sonny$ kubectl --context data-plane get ns istio-system --show-labels
NAME           STATUS   AGE     LABELS
istio-system   Active   3h34m   topology.istio.io/network=dp-network

Nothing else readily comes to mind regarding setup steps that I may have missed. I can see that the remote helloworld deployment has been synced inside the primary cluster:

covid19:istio-files sonny$ istioctl --context control-plane proxy-status
NAME                                                   CDS        LDS        EDS          RDS          ISTIOD                      VERSION
helloworld-v2-7855866d4f-6lg7h.compute                 SYNCED     SYNCED     SYNCED       SYNCED       istiod-655559c95b-q7425     1.8.0
istio-cdps-gateway-579665f6b6-chl8p.istio-system       SYNCED     SYNCED     SYNCED       NOT SENT     istiod-655559c95b-q7425     1.8.0
istio-cdps-gateway-78df8697fb-htfn6.istio-system       SYNCED     SYNCED     SYNCED       NOT SENT     istiod-655559c95b-q7425     1.8.0
istio-ingressgateway-5779d8cd47-th7cm.istio-system     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-655559c95b-q7425     1.8.0
istio-ingressgateway-9469967d9-6hfr9.istio-system      SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-655559c95b-q7425     1.8.0
sleep-8f795f47d-4xjwf.platform                         SYNCED     SYNCED     SYNCED       SYNCED       istiod-655559c95b-q7425     1.8.0

and the sleep pod inside the primary cluster where I’m performing verification is also up to date:

pieper:istio-files sonny$ istioctl --context control-plane proxy-status sleep-8f795f47d-4xjwf.platform
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Thu, 03 Dec 2020 03:05:43 MST)

The only option I can think of right now is to enable DEBUG logging for istiod / istio-proxy and see if anything pops up there.
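A sketch of how to do that, assuming the pod names above (for the sidecar, `istioctl proxy-config log`; for istiod, the ControlZ dashboard exposes per-scope log levels):

```shell
# Turn up the Envoy log level on the sleep sidecar.
istioctl --context control-plane -n platform proxy-config log \
  sleep-8f795f47d-4xjwf --level debug

# Open istiod's ControlZ UI to adjust scopes such as "ads" interactively.
istioctl --context control-plane dashboard controlz deployment/istiod.istio-system
```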