istio: Envoy proxy is NOT ready: config not received from Pilot when deploying shared control plane

Bug description Hi, I’m installing Istio across two Kubernetes clusters following the shared control plane model. The istio-ingressgateway on the remote cluster prints this error log:

Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

I see that the proxy’s --discoveryAddress argument is istiod-remote.istio-system, and there is a Service named istiod-remote in the istio-system namespace.

Is there anything I need to do?
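A quick way to narrow this down is to check whether the gateway proxy has ever connected to the control plane. A sketch, assuming the remote cluster’s kubecontext is in `${REMOTE_CLUSTER_CTX}` (that variable name is mine, not from the issue):

```shell
# List proxies and their xDS sync state; a proxy that never connected
# to istiod will be missing or show no CDS/LDS sync.
istioctl --context="${REMOTE_CLUSTER_CTX}" proxy-status

# Inspect the gateway's own logs for connection errors toward the
# discovery address (istiod-remote.istio-system in this setup).
kubectl --context="${REMOTE_CLUSTER_CTX}" -n istio-system \
  logs deploy/istio-ingressgateway | grep -i -E 'pilot|xds|connect'
```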

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)

  • istio version: 1.5.2
  • kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

How was Istio installed? I used istioctl --context=<cluster-context> manifest apply -f <modified-profiles>. For the main cluster, the profile is as follows.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
spec:
  hub: docker.io/istio
  tag: 1.5.2
...
    multiCluster:
      enabled: true
      clusterName: "main-cluster"
    omitSidecarInjectorConfigMap: false
    network: "main-cluster-network"
    meshNetworks:
      main-cluster-network:
        endpoints:
        - fromRegistry: Kubernetes
        gateways:
        - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
          port: 443
      remote-cluster-network:
        endpoints:
        - fromRegistry: remote-cluster
        gateways:
        - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
          port: 443
...

And the following is the profile for the remote cluster:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
spec:
  hub: docker.io/istio
  tag: 1.5.2
...
    multiCluster:
      enabled: true
      clusterName: "remote-cluster"
    network: "remote-cluster-network"
    remotePilotAddress: 10.7.12.185
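Before suspecting certificates or config, it is worth confirming that pods in the remote cluster can actually reach that remotePilotAddress. A minimal sketch, assuming `${REMOTE_CLUSTER_CTX}` holds the remote kubecontext and that istiod is exposed on its standard xDS port 15012 (both assumptions, not stated in the issue):

```shell
# Launch a throwaway busybox pod in the remote cluster and test raw TCP
# reachability to the configured Pilot address and port. A timeout here
# points at routing/firewall problems rather than Istio itself.
kubectl --context="${REMOTE_CLUSTER_CTX}" run netcheck --rm -it \
  --image=busybox --restart=Never -- \
  nc -zv -w 3 10.7.12.185 15012
```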

Environment where bug was observed (cloud vendor, OS, etc)

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 18 (10 by maintainers)

Most upvoted comments

@packyzbq did you verify that the two clusters have the same certs configured?

$ kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system get configmap istio-ca-root-cert -o yaml
apiVersion: v1
data:
  root-cert.pem: |
    -----BEGIN CERTIFICATE-----
    ABCDEF.....
    ...
    ...
    XYZ
    -----END CERTIFICATE-----

It should have the same value in the REMOTE cluster:

$ kubectl --context=${REMOTE_CLUSTER_CTX} -n istio-system get configmap istio-ca-root-cert -o yaml
apiVersion: v1
data:
  root-cert.pem: |
    -----BEGIN CERTIFICATE-----
    ABCDEF.....
    ...
    ...
    XYZ
    -----END CERTIFICATE-----
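Rather than eyeballing the PEM blocks, you can compare a hash of the root cert from each cluster; the two hashes must be identical. A sketch using the same context variables as above:

```shell
# Print a SHA-256 digest of the root cert from each cluster.
# The jsonpath key escapes the dot in "root-cert.pem".
for ctx in "${MAIN_CLUSTER_CTX}" "${REMOTE_CLUSTER_CTX}"; do
  kubectl --context="$ctx" -n istio-system get configmap istio-ca-root-cert \
    -o jsonpath='{.data.root-cert\.pem}' | sha256sum
done
```

If the digests differ, the clusters are not sharing a CA and mutual TLS to istiod will fail.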

When you create the cacerts secret, you need to make sure the same secret is created in both clusters.
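One way to keep the clusters in sync is to create the secret in both from the same set of plug-in CA files. A sketch, assuming the standard Istio plug-in CA file layout (ca-cert.pem, ca-key.pem, root-cert.pem, cert-chain.pem in the current directory — paths are assumed):

```shell
# Create the identical cacerts secret in both clusters from one shared CA.
for ctx in "${MAIN_CLUSTER_CTX}" "${REMOTE_CLUSTER_CTX}"; do
  kubectl --context="$ctx" -n istio-system create secret generic cacerts \
    --from-file=ca-cert.pem \
    --from-file=ca-key.pem \
    --from-file=root-cert.pem \
    --from-file=cert-chain.pem
done
```

Note that cacerts must exist before Istio is installed; if it is added afterwards, istiod and the workloads need to be restarted to pick it up.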

The remotePilotAddress of 10.7.12.185 must be reachable by pods in the remote cluster. Are the proper firewall policies in place? Usually the ExternalIP of a LoadBalancer isn’t a 10.x address (see RFC 1918). How did you determine that value for remotePilotAddress?
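In the shared control plane setup, remotePilotAddress is typically set to the externally reachable address of the main cluster’s control plane, often the ingress gateway’s LoadBalancer IP. A sketch for looking that up (assuming a cloud LoadBalancer that reports an IP rather than a hostname):

```shell
# Fetch the external IP assigned to the main cluster's ingress gateway.
# On providers that hand out DNS names instead, use
# '{.status.loadBalancer.ingress[0].hostname}'.
kubectl --context="${MAIN_CLUSTER_CTX}" -n istio-system \
  get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```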

Also, if you want direct pod-to-pod communication without going through a gateway, both clusters should use the same value for network.