istio: istio-proxy on app pod reported "envoy.config.listener.v3.Listener rejected, has duplicate address"
Bug Description
- Create an Istio primary cluster on a dual-stack Kubernetes cluster (the goal is an Istio multi-cluster environment).
- Deploy a test application on this primary cluster (using the sample app included in the Istio release package):

  ```shell
  kubectl create namespace sample
  kubectl label namespace sample istio-injection=enabled
  kubectl apply -f samples/helloworld/helloworld.yaml -l service=helloworld -n sample
  kubectl apply -f samples/helloworld/helloworld.yaml -l version=v2 -n sample
  ```
- The istio-proxy in the helloworld-v2 pod reported the errors below ('192.168.2.127_9100' has duplicate address '192.168.2.127:9100,[2001:2:127:1::ab06]:9100'):

  ```
  2023-02-02T06:48:37.340830Z  info  JWT policy is third-party-jwt
  2023-02-02T06:48:37.340834Z  info  using credential fetcher of JWT type in cluster.local trust domain
  2023-02-02T06:48:37.441084Z  info  Workload SDS socket not found. Starting Istio SDS Server
  2023-02-02T06:48:37.441093Z  info  Opening status port 15020
  2023-02-02T06:48:37.441115Z  info  CA Endpoint istiod.istio-system.svc:15012, provider Citadel
  2023-02-02T06:48:37.441148Z  info  Using CA istiod.istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
  2023-02-02T06:48:37.441258Z  info  citadelclient  Citadel client using custom root cert: var/run/secrets/istio/root-cert.pem
  2023-02-02T06:48:37.451079Z  info  ads  All caches have been synced up in 117.214492ms, marking server ready
  2023-02-02T06:48:37.451662Z  info  xdsproxy  Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "app2"
  2023-02-02T06:48:37.452937Z  info  starting Http service at 127.0.0.1:15004
  2023-02-02T06:48:37.453006Z  info  sds  Starting SDS grpc server
  2023-02-02T06:48:37.453799Z  info  Pilot SAN: [istiod.istio-system.svc]
  2023-02-02T06:48:37.455270Z  info  Starting proxy agent
  2023-02-02T06:48:37.455290Z  info  starting
  2023-02-02T06:48:37.455305Z  info  Envoy command: [-c etc/istio/proxy/envoy-rev.json --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ %l envoy %n %v -l warning --component-log-level misc:error --concurrency 2]
  2023-02-02T06:48:37.755043Z  info  cache  generated new workload certificate  latency=302.039964ms ttl=23h59m59.244965151s
  2023-02-02T06:48:37.755070Z  info  cache  Root cert has changed, start rotating root cert
  2023-02-02T06:48:37.755085Z  info  ads  XDS: Incremental Pushing:0 ConnectedEndpoints:0 Version:
  2023-02-02T06:48:37.755128Z  info  cache  returned workload trust anchor from cache  ttl=23h59m59.244875283s
  2023-02-02T06:48:37.828272Z  info  xdsproxy  connected to upstream XDS server: istiod.istio-system.svc:15012
  2023-02-02T06:48:37.848718Z  info  ads  ADS: new connection for node:helloworld-v2-79bf565586-f44h7.sample-1
  2023-02-02T06:48:37.848768Z  info  cache  returned workload certificate from cache  ttl=23h59m59.151236546s
  2023-02-02T06:48:37.848791Z  info  ads  ADS: new connection for node:helloworld-v2-79bf565586-f44h7.sample-2
  2023-02-02T06:48:37.848882Z  info  cache  returned workload trust anchor from cache  ttl=23h59m59.1511224s
  2023-02-02T06:48:37.848964Z  info  ads  SDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:1 size:4.0kB resource:default
  2023-02-02T06:48:37.849096Z  info  ads  SDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:1 size:1.1kB resource:ROOTCA
  2023-02-02T06:48:37.875942Z  warning  envoy config  error adding listener: '192.168.2.127_9100' has duplicate address '192.168.2.127:9100,[2001:2:127:1::ab06]:9100' as existing listener
  2023-02-02T06:48:37.910821Z  warning  envoy config  gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) 192.168.2.127_9100: error adding listener: '192.168.2.127_9100' has duplicate address '192.168.2.127:9100,[2001:2:127:1::ab06]:9100' as existing listener
  2023-02-02T06:48:38.033260Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  2023-02-02T06:48:39.042307Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  2023-02-02T06:48:39.052383Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  2023-02-02T06:48:41.052128Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  2023-02-02T06:48:43.052502Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  2023-02-02T06:48:45.052379Z  warn  Envoy proxy is NOT ready: config received from XDS server, but was rejected: cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected
  ```
- Yes, I deployed a Prometheus node-exporter service that uses port 9100. Service dump below:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/scrape: "true"
    creationTimestamp: "2023-02-02T06:18:37Z"
    labels:
      kubernetes.io/name: NodeExporter
    name: node-exporter
    namespace: prometheus-node-exporter
    resourceVersion: "822"
    uid: 5fe250ab-6896-4416-bf1f-796251d2aad5
  spec:
    clusterIP: None
    clusterIPs:
    - None
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    - IPv6
    ipFamilyPolicy: PreferDualStack
    ports:
    - name: metrics
      port: 9100
      protocol: TCP
      targetPort: 9100
    selector:
      app: prometheus
      module: node-exporter
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
  ```
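For context on why this Service triggers the error: a headless (`clusterIP: None`) dual-stack Service gets one EndpointSlice per IP family, so the control plane sees both an IPv4 and an IPv6 address for the same backend pod on port 9100. The sketch below is illustrative only (the EndpointSlice names are hypothetical; the addresses are taken from the error message above), showing the shape of what Kubernetes would publish:

```yaml
# Hypothetical EndpointSlices for the headless dual-stack node-exporter Service.
# One slice per address family; both point at the same pod on port 9100.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: node-exporter-ipv4    # hypothetical name
  namespace: prometheus-node-exporter
  labels:
    kubernetes.io/service-name: node-exporter
addressType: IPv4
endpoints:
- addresses:
  - 192.168.2.127
ports:
- name: metrics
  port: 9100
  protocol: TCP
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: node-exporter-ipv6    # hypothetical name
  namespace: prometheus-node-exporter
  labels:
    kubernetes.io/service-name: node-exporter
addressType: IPv6
endpoints:
- addresses:
  - 2001:2:127:1::ab06
ports:
- name: metrics
  port: 9100
  protocol: TCP
```

Because the Service is headless, Istio generates a per-endpoint listener for each family, which is what collides in the `192.168.2.127_9100` listener reported by Envoy.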
- istiod also reported the error below:

  ```
  2023-02-02T06:48:26.853336Z  info  ads  RDS: PUSH for node:helloworld-v1-77cb56d4b4-l9lzr.sample resources:18 size:10.1kB cached:16/18
  2023-02-02T06:48:35.024844Z  info  Sidecar injection request for sample/helloworld-v2-79bf565586-***** (actual name not yet known)
  2023-02-02T06:48:35.058683Z  info  ads  Incremental push, service helloworld.sample.svc.cluster.local at shard Kubernetes/app2 has no endpoints
  2023-02-02T06:48:35.159740Z  info  ads  Push debounce stable[19] 1 for config ServiceEntry/sample/helloworld.sample.svc.cluster.local: 101.008667ms since last change, 101.008634ms since last push, full=false
  2023-02-02T06:48:35.159775Z  info  ads  XDS: Incremental Pushing:2023-02-02T06:48:26Z/9 ConnectedEndpoints:3 Version:2023-02-02T06:48:26Z/9
  2023-02-02T06:48:37.035419Z  info  ads  Incremental push, service helloworld.sample.svc.cluster.local at shard Kubernetes/app2 has no endpoints
  2023-02-02T06:48:37.135702Z  info  ads  Push debounce stable[20] 1 for config ServiceEntry/sample/helloworld.sample.svc.cluster.local: 100.239815ms since last change, 100.23966ms since last push, full=false
  2023-02-02T06:48:37.135786Z  info  ads  XDS: Incremental Pushing:2023-02-02T06:48:26Z/9 ConnectedEndpoints:3 Version:2023-02-02T06:48:26Z/9
  2023-02-02T06:48:37.830452Z  info  ads  ADS: new connection for node:helloworld-v2-79bf565586-f44h7.sample-4
  2023-02-02T06:48:37.831354Z  info  ads  CDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:38 size:38.1kB cached:0/33
  2023-02-02T06:48:37.846193Z  info  ads  EDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:32 size:8.7kB empty:0 cached:32/32
  2023-02-02T06:48:37.859465Z  info  ads  LDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:36 size:99.0kB
  2023-02-02T06:48:37.911556Z  info  ads  RDS: PUSH request for node:helloworld-v2-79bf565586-f44h7.sample resources:18 size:10.1kB cached:0/18
  2023-02-02T06:48:37.911578Z  warn  ads  ADS:LDS: ACK ERROR helloworld-v2-79bf565586-f44h7.sample-4 Internal:Error adding/updating listener(s) 192.168.2.127_9100: error adding listener: '192.168.2.127_9100' has duplicate address '192.168.2.127:9100,[2001:2:127:1::ab06]:9100' as existing listener
  ```
Version
```
istioctl version:
client version: 1.16.0
control plane version: 1.16.2
kubernetes version: v1.24.0
```
Additional Information
No response
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 16 (9 by maintainers)
@zhlsunshine Unfortunately, it does not fix my issue. Below is the similar (truncated) output for `get endpoints`:

```yaml
apiVersion: v1
items:
```
BTW: if the `clusterIP` of the node-exporter Service is not set to "None" (i.e. the Service is not headless), it's OK: no duplicate-address error.
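Based on that observation, a possible workaround (an assumption on my part, not a confirmed fix from the maintainers) is to avoid exposing dual-family endpoints for this Service: either let Kubernetes allocate a cluster IP instead of making it headless, or pin the Service to a single IP family. A minimal sketch:

```yaml
# Sketch of a workaround Service (assumption: avoiding dual-family headless
# endpoints prevents the duplicate listener). Note there is no `clusterIP: None`,
# and the Service is pinned to IPv4 via SingleStack.
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: prometheus-node-exporter
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv4
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
    targetPort: 9100
  selector:
    app: prometheus
    module: node-exporter
```

The trade-off is losing either headless DNS records per pod or IPv6 reachability for the metrics port, so this only masks the dual-stack listener bug rather than fixing it.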
I tried another scenario that produces a similar issue. This time the conflicting IPv6 address comes from another service.
```
2023-02-06T09:58:45.514528Z  info  ads  EDS: PUSH request for node:istio-eastwestgateway-85b6d4f75b-g2h9k.istio-system resources:2 size:386B empty:0 cached:0/2 filtered:32
```