istio: Pod in mesh to VM: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

Bug Description

Mesh -> VM:

Output from the shell in the mesh pod:

bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

1 size:1.1kB resource:ROOTCA
2021-11-03T13:13:26.229375Z	info	cache	returned workload trust anchor from cache	ttl=23h59m59.770628192s
2021-11-03T13:13:26.229406Z	info	ads	SDS: PUSH for node:network-multitool-5cb859cc-5d9nb.vm-test resources:1 size:1.1kB resource:ROOTCA
2021-11-03T13:13:27.119064Z	info	Initialization took 1.227850313s
2021-11-03T13:13:27.119104Z	info	Envoy proxy is ready
[2021-11-03T13:14:44.157Z] "GET /hello HTTP/1.1" 503 UF,URX upstream_reset_before_response_started{connection_failure,TLS_error:_268435703:SSL_routines:OPENSSL_internal:WRONG_VERSION_NUMBER} - "TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER" 0 190 39 - "-" "curl/7.79.1" "b257df75-408b-9715-a253-b4c62e9ab0a4" "helloworld.vm-test.svc.cluster.local:8500" "192.168.0.6:8500" outbound|8500||helloworld.vm-test.svc.cluster.local - 10.254.123.40:8500 172.16.0.58:53886 - default

Traffic in this direction fails. Envoy debug/trace logs for the failing request:

2021-11-03T11:49:47.337593Z	debug	envoy pool	creating a new connection
2021-11-03T11:49:47.337661Z	debug	envoy client	[C3756] connecting
2021-11-03T11:49:47.337670Z	debug	envoy connection	[C3756] connecting to 192.168.0.6:8500
2021-11-03T11:49:47.337739Z	debug	envoy connection	[C3756] connection in progress
2021-11-03T11:49:47.337751Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:47.338220Z	trace	envoy connection	[C3756] socket event: 2
2021-11-03T11:49:47.338233Z	trace	envoy connection	[C3756] write ready
2021-11-03T11:49:47.338238Z	debug	envoy connection	[C3756] connected
2021-11-03T11:49:47.338298Z	trace	envoy connection	[C3756] ssl error occurred while read: WANT_READ
2021-11-03T11:49:47.339134Z	trace	envoy connection	[C3756] socket event: 3
2021-11-03T11:49:47.339154Z	trace	envoy connection	[C3756] write ready
2021-11-03T11:49:47.339208Z	trace	envoy connection	[C3756] ssl error occurred while read: SSL
2021-11-03T11:49:47.339217Z	debug	envoy connection	[C3756] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339220Z	debug	envoy connection	[C3756] closing socket: 0
2021-11-03T11:49:47.339237Z	debug	envoy connection	[C3756] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339272Z	trace	envoy connection	[C3756] raising connection event 0
2021-11-03T11:49:47.339282Z	debug	envoy client	[C3756] disconnect. resetting 0 pending requests
2021-11-03T11:49:47.339288Z	debug	envoy pool	[C3756] client disconnected, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339301Z	debug	envoy router	[C3753][S10717251153181087726] upstream reset: reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
2021-11-03T11:49:47.339366Z	debug	envoy http	[C3753][S10717251153181087726] Sending local reply with details upstream_reset_before_response_started{connection failure,TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER}
2021-11-03T11:49:47.339397Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b737ce0 status=0
2021-11-03T11:49:47.339405Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b533490 status=0
2021-11-03T11:49:47.339408Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b97d650 status=0
2021-11-03T11:49:47.339431Z	trace	envoy http	[C3753][S10717251153181087726] encode headers called: filter=0x55678b585ce0 status=0
2021-11-03T11:49:47.339460Z	debug	envoy http	[C3753][S10717251153181087726] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '190'
'content-type', 'text/plain'
'date', 'Wed, 03 Nov 2021 11:49:47 GMT'
'server', 'envoy'

2021-11-03T11:49:47.339476Z	trace	envoy connection	[C3753] writing 135 bytes, end_stream false
2021-11-03T11:49:47.339488Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b737ce0 status=0
2021-11-03T11:49:47.339491Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b533490 status=0
2021-11-03T11:49:47.339548Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b97d650 status=0
2021-11-03T11:49:47.339552Z	trace	envoy http	[C3753][S10717251153181087726] encode data called: filter=0x55678b585ce0 status=0
2021-11-03T11:49:47.339556Z	trace	envoy http	[C3753][S10717251153181087726] encoding data via codec (size=190 end_stream=true)
2021-11-03T11:49:47.339564Z	trace	envoy connection	[C3753] writing 190 bytes, end_stream false
2021-11-03T11:49:47.339571Z	trace	envoy connection	[C3753] readDisable: disable=false disable_count=1 state=0 buffer_length=0
2021-11-03T11:49:47.339802Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=32
2021-11-03T11:49:47.339821Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=66
2021-11-03T11:49:47.339824Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=70
2021-11-03T11:49:47.339828Z	debug	envoy wasm	wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:622]::report() metricKey cache hit , stat=74
2021-11-03T11:49:47.339836Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:47.339842Z	trace	envoy misc	enableTimer called on 0x55678b9d7080 for 3600000ms, min is 3600000ms
2021-11-03T11:49:47.339852Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:47.339864Z	trace	envoy main	item added to deferred deletion list (size=2)
2021-11-03T11:49:47.339869Z	trace	envoy main	item added to deferred deletion list (size=3)
2021-11-03T11:49:47.339874Z	trace	envoy main	clearing deferred deletion list (size=3)
2021-11-03T11:49:47.339990Z	trace	envoy connection	[C3753] socket event: 2
2021-11-03T11:49:47.340000Z	trace	envoy connection	[C3753] write ready
2021-11-03T11:49:47.340046Z	trace	envoy connection	[C3753] write returns: 325
2021-11-03T11:49:47.408932Z	trace	envoy connection	[C3753] socket event: 3
2021-11-03T11:49:47.408966Z	trace	envoy connection	[C3753] write ready
2021-11-03T11:49:47.408976Z	trace	envoy connection	[C3753] read ready. dispatch_buffered_data=false
2021-11-03T11:49:47.409029Z	trace	envoy connection	[C3753] read returns: 0
2021-11-03T11:49:47.409034Z	debug	envoy connection	[C3753] remote close
2021-11-03T11:49:47.409036Z	debug	envoy connection	[C3753] closing socket: 0
2021-11-03T11:49:47.409100Z	trace	envoy connection	[C3753] raising connection event 0
2021-11-03T11:49:47.409122Z	debug	envoy conn_handler	[C3753] adding to cleanup list
2021-11-03T11:49:47.409128Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:47.409131Z	trace	envoy main	item added to deferred deletion list (size=2)
2021-11-03T11:49:47.409136Z	trace	envoy main	clearing deferred deletion list (size=2)
[2021-11-03T11:49:47.309Z] "GET /hello HTTP/1.1" 503 UF,URX upstream_reset_before_response_started{connection_failure,TLS_error:_268435703:SSL_routines:OPENSSL_internal:WRONG_VERSION_NUMBER} - "TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER" 0 190 30 - "-" "curl/7.79.1" "0723f47d-22f0-9914-ab3f-df51cc68910b" "helloworld.vm-test.svc.cluster.local:8500" "192.168.0.6:8500" outbound|8500||helloworld.vm-test.svc.cluster.local - 10.254.123.40:8500 172.16.1.80:38634 - default
2021-11-03T11:49:48.055553Z	debug	envoy main	flushing stats
2021-11-03T11:49:48.089899Z	trace	envoy misc	enableTimer called on 0x55678b99b700 for 3600000ms, min is 3600000ms
2021-11-03T11:49:48.089937Z	debug	envoy conn_handler	[C3757] new connection from 192.168.0.5:49458
2021-11-03T11:49:48.089959Z	trace	envoy connection	[C3757] socket event: 3
2021-11-03T11:49:48.089964Z	trace	envoy connection	[C3757] write ready
2021-11-03T11:49:48.089969Z	trace	envoy connection	[C3757] read ready. dispatch_buffered_data=false
2021-11-03T11:49:48.089986Z	trace	envoy connection	[C3757] read returns: 117
2021-11-03T11:49:48.090000Z	trace	envoy connection	[C3757] read error: Resource temporarily unavailable
2021-11-03T11:49:48.090023Z	trace	envoy http	[C3757] parsing 117 bytes
2021-11-03T11:49:48.090035Z	trace	envoy http	[C3757] message begin
2021-11-03T11:49:48.090045Z	debug	envoy http	[C3757] new stream
2021-11-03T11:49:48.090064Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.090085Z	trace	envoy http	[C3757] completed header: key=Host value=172.16.1.80:15021
2021-11-03T11:49:48.090101Z	trace	envoy http	[C3757] completed header: key=User-Agent value=kube-probe/1.20
2021-11-03T11:49:48.090113Z	trace	envoy http	[C3757] completed header: key=Accept value=*/*
2021-11-03T11:49:48.090127Z	trace	envoy http	[C3757] onHeadersCompleteBase
2021-11-03T11:49:48.090131Z	trace	envoy http	[C3757] completed header: key=Connection value=close
2021-11-03T11:49:48.090142Z	trace	envoy http	[C3757] Server: onHeadersComplete size=4
2021-11-03T11:49:48.090159Z	trace	envoy http	[C3757] message complete
2021-11-03T11:49:48.090170Z	trace	envoy connection	[C3757] readDisable: disable=true disable_count=0 state=0 buffer_length=117
2021-11-03T11:49:48.090197Z	debug	envoy http	[C3757][S4433421122818485152] request headers complete (end_stream=true):
':authority', '172.16.1.80:15021'
':path', '/healthz/ready'
':method', 'GET'
'user-agent', 'kube-probe/1.20'
'accept', '*/*'
'connection', 'close'

2021-11-03T11:49:48.090206Z	debug	envoy http	[C3757][S4433421122818485152] request end stream
2021-11-03T11:49:48.090263Z	debug	envoy router	[C3757][S4433421122818485152] cluster 'agent' match for URL '/healthz/ready'
2021-11-03T11:49:48.090322Z	debug	envoy router	[C3757][S4433421122818485152] router decoding headers:
':authority', '172.16.1.80:15021'
':path', '/healthz/ready'
':method', 'GET'
':scheme', 'http'
'user-agent', 'kube-probe/1.20'
'accept', '*/*'
'x-forwarded-proto', 'http'
'x-request-id', 'e2589795-9504-42a0-b219-c04bb3ab1dd1'
'x-envoy-expected-rq-timeout-ms', '15000'

2021-11-03T11:49:48.090400Z	debug	envoy pool	[C3] using existing connection
2021-11-03T11:49:48.090406Z	debug	envoy pool	[C3] creating stream
2021-11-03T11:49:48.090421Z	debug	envoy router	[C3757][S4433421122818485152] pool ready
2021-11-03T11:49:48.090451Z	trace	envoy connection	[C3] writing 214 bytes, end_stream false
2021-11-03T11:49:48.090465Z	trace	envoy pool	not creating a new connection, shouldCreateNewConnection returned false.
2021-11-03T11:49:48.090474Z	trace	envoy http	[C3757][S4433421122818485152] decode headers called: filter=0x55678b990bd0 status=1
2021-11-03T11:49:48.090482Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.090495Z	trace	envoy http	[C3757] parsed 117 bytes
2021-11-03T11:49:48.090551Z	trace	envoy connection	[C3757] socket event: 2
2021-11-03T11:49:48.090555Z	trace	envoy connection	[C3757] write ready
2021-11-03T11:49:48.090561Z	trace	envoy connection	[C3] socket event: 2
2021-11-03T11:49:48.090563Z	trace	envoy connection	[C3] write ready
2021-11-03T11:49:48.090615Z	trace	envoy connection	[C3] write returns: 214
2021-11-03T11:49:48.090826Z	trace	envoy connection	[C3] socket event: 3
2021-11-03T11:49:48.090836Z	trace	envoy connection	[C3] write ready
2021-11-03T11:49:48.090841Z	trace	envoy connection	[C3] read ready. dispatch_buffered_data=false
2021-11-03T11:49:48.090855Z	trace	envoy connection	[C3] read returns: 75
2021-11-03T11:49:48.090866Z	trace	envoy connection	[C3] read error: Resource temporarily unavailable
2021-11-03T11:49:48.090875Z	trace	envoy http	[C3] parsing 75 bytes
2021-11-03T11:49:48.090881Z	trace	envoy http	[C3] message begin
2021-11-03T11:49:48.090902Z	trace	envoy http	[C3] completed header: key=Date value=Wed, 03 Nov 2021 11:49:48 GMT
2021-11-03T11:49:48.090915Z	trace	envoy http	[C3] onHeadersCompleteBase
2021-11-03T11:49:48.090918Z	trace	envoy http	[C3] completed header: key=Content-Length value=0
2021-11-03T11:49:48.090928Z	trace	envoy http	[C3] status_code 200
2021-11-03T11:49:48.090934Z	trace	envoy http	[C3] Client: onHeadersComplete size=2
2021-11-03T11:49:48.090941Z	trace	envoy http	[C3] message complete
2021-11-03T11:49:48.090949Z	trace	envoy http	[C3] message complete
2021-11-03T11:49:48.090953Z	debug	envoy client	[C3] response complete
2021-11-03T11:49:48.090958Z	trace	envoy main	item added to deferred deletion list (size=1)
2021-11-03T11:49:48.090972Z	debug	envoy router	[C3757][S4433421122818485152] upstream headers complete: end_stream=true
2021-11-03T11:49:48.091027Z	trace	envoy misc	enableTimer called on 0x55678b669100 for 300000ms, min is 300000ms
2021-11-03T11:49:48.091049Z	debug	envoy http	[C3757][S4433421122818485152] closing connection due to connection close header
2021-11-03T11:49:48.091069Z	debug	envoy http	[C3757][S4433421122818485152] encoding headers via codec (end_stream=true):
':status', '200'
'date', 'Wed, 03 Nov 2021 11:49:48 GMT'
'content-length', '0'
'x-envoy-upstream-service-time', '0'
'server', 'envoy'
'connection', 'close'

VM -> Mesh: this direction works correctly.

[root@instance-6dcpbbai vm]#  curl httpbin.vm-test:8000/headers
{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.vm-test:8000",
    "User-Agent": "curl/7.61.1",
    "X-B3-Parentspanid": "fbf55b25584edfae",
    "X-B3-Sampled": "0",
    "X-B3-Spanid": "eb8bd54b7d469867",
    "X-B3-Traceid": "c348400f3dd4a671fbf55b25584edfae",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/vm-test/sa/httpbin;Hash=17423b675afa7ed8951d4e76371739d61d3520a2ce29942b179919bbfec3fdd5;Subject=\"\";URI=spiffe://cluster.local/ns/vm-test/sa/vm"
  }
}

Version

➜  debug-istio-vm istioctl version
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (9 proxies), 1.11.0 (1 proxies)

➜  debug-istio-vm kubectl version --short
Client Version: v1.21.2
Server Version: v1.20.8

Additional Information

The YAML of the ServiceEntry and WorkloadEntry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-helloworld
  labels:
    app: vm-test
spec:
  hosts:
  - helloworld.vm-test.svc.cluster.local
  location: MESH_INTERNAL
  addresses:
  - 10.254.123.40
  ports:
  - name: http-8500
    number: 8500
    protocol: HTTP
    targetPort: 8500
  resolution: DNS
  workloadSelector:
    labels:
      app: vm-test
---
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  labels:
    app: vm-test
  name: vm-test-192.168.0.6
spec:
  address: 192.168.0.6
  labels:
    app: vm-test
  serviceAccount: vm
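
(For context: the workaround suggested in the comments below is a DestinationRule that disables client-side TLS for the affected host. Applied to the service above, it would look roughly like the following sketch, which is not part of the original report.)

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vm-helloworld-plaintext   # hypothetical name
  namespace: vm-test              # assumed namespace of the ServiceEntry
spec:
  host: helloworld.vm-test.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE               # send plaintext instead of mTLS, as in the comments below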
[root@instance-6dcpbbai vm]# curl localhost:15000/clusters | grep helloworld
outbound|8500||helloworld.vm-test.svc.cluster.local::observability_name::outbound|8500||helloworld.vm-test.svc.cluster.local
outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_connections::4294967295
outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_pending_requests::4294967295
outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_requests::4294967295
outbound|8500||helloworld.vm-test.svc.cluster.local::default_priority::max_retries::4294967295
outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_connections::1024
outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_requests::1024
outbound|8500||helloworld.vm-test.svc.cluster.local::high_priority::max_retries::3
outbound|8500||helloworld.vm-test.svc.cluster.local::added_via_api::true
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_active::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_connect_fail::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::cx_total::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_active::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_error::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_success::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_timeout::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::rq_total::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::hostname::192.168.0.6
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::health_flags::healthy
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::weight::1
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::region::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::zone::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::sub_zone::
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::canary::false
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::priority::0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::success_rate::-1.0
outbound|8500||helloworld.vm-test.svc.cluster.local::192.168.0.6:8500::local_origin_success_rate::-1.0

I can access it directly by IP + port:

➜  debug-istio-vm k -n vm-test exec -it network-multitool-5cb859cc-5d9nb bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBERbash-5.1#
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBERbash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello^C
bash-5.1# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486
bash-5.1# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486
[root@instance-6dcpbbai vm]# docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID  IMAGE                                          COMMAND               CREATED      STATUS          PORTS                   NAMES
07c4c3d7b486  docker.io/istio/examples-helloworld-v2:latest  /bin/sh -c python...  3 hours ago  Up 3 hours ago  0.0.0.0:8500->5000/tcp  beautiful_hypatia
[root@instance-6dcpbbai vm]# curl 192.168.0.6:8500/hello
Hello version: v2, instance: 07c4c3d7b486

The config on the VM: cluster.env

BOOTSTRAP_XDS_AGENT='true'
CANONICAL_REVISION='latest'
CANONICAL_SERVICE='vm-helloworld-v2'
ISTIO_INBOUND_PORTS='*'
ISTIO_LOCAL_EXCLUDE_PORTS='15090,15021,15020,8022,22'
ISTIO_METAJSON_LABELS='{"app":"vm-helloworld-v2","service.istio.io/canonical-name":"vm-helloworld-v2","service.istio.io/canonical-version":"latest"}'
ISTIO_META_CLUSTER_ID='Kubernetes'
ISTIO_META_DNS_CAPTURE='true'
ISTIO_META_MESH_ID='mesh1'
ISTIO_META_NETWORK=''
ISTIO_META_WORKLOAD_NAME='vm-helloworld-v2'
ISTIO_NAMESPACE='vm-test'
ISTIO_SERVICE='vm-helloworld-v2.vm-test'
ISTIO_SERVICE_CIDR='*'
POD_NAMESPACE='vm-test'
SERVICE_ACCOUNT='vm'
TRUST_DOMAIN='cluster.local'

istio-token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IkFXQWEyN3AzNmRkN2R5R2pxNUFVZVVHWGJPdkpEclk4QnNrdWRScjN2T0kifQ.eyJhdWQiOlsiaXN0aW8tY2EiXSwiZXhwIjoxNjY3NDY3NzYwLCJpYXQiOjE2MzU5MzE3NjAsImlzcyI6Imt1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InZtLXRlc3QiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidm0iLCJ1aWQiOiJjMWU0ZTYyNC0zN2E5LTQwMmQtODNjZi1kMTI3MDgzNWU3NTIifX0sIm5iZiI6MTYzNTkzMTc2MCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnZtLXRlc3Q6dm0ifQ.RT3A2hqsnlkXvfrEMldKuEQab70AtS2ELez2DTTcU87fcgyTm0F9RenJoBGk2PwaZ4E2YbSSkuVuie6eQaRohcmILDKzGJ6U0GxQbea0pzghDVjzw8VMwoVi7pn3xXsucrHJhZJGceGELeCYEyWeAJe7JlkS43P7QYJOv535PbT9Vb5sPrvN_pnRruSDWTi_aY2WCMrz-0EDh2430TCV3tClwsWRAfe8CyoC6u--NJ9yYfhKdS-eU9-gIfpSR9LQWg6J0U1GEyCAMDJ2299uxJcJKOtGo9RCzvjQK2JT3CwbmdnNxC6EBrQJc6Iw6wZFQhBxm3MgesCk5Bw7xOg1qQ

mesh.yaml:

defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  meshId: mesh1
  proxyMetadata:
    BOOTSTRAP_XDS_AGENT: "true"
    CANONICAL_REVISION: latest
    CANONICAL_SERVICE: vm-helloworld-v2
    ISTIO_META_CLUSTER_ID: Kubernetes
    ISTIO_META_DNS_CAPTURE: "true"
    ISTIO_META_MESH_ID: mesh1
    ISTIO_META_NETWORK: ""
    ISTIO_META_WORKLOAD_NAME: vm-helloworld-v2
    ISTIO_METAJSON_LABELS: '{"app":"vm-helloworld-v2","service.istio.io/canonical-name":"vm-helloworld-v2","service.istio.io/canonical-version":"latest"}'
    POD_NAMESPACE: vm-test
    SERVICE_ACCOUNT: vm
    TRUST_DOMAIN: cluster.local
  tracing:
    zipkin:
      address: zipkin.istio-system:9411

root-cert.pem:

-----BEGIN CERTIFICATE-----
MIIC/TCCAeWgAwIBAgIRAJYeRT1UAkIe9yZlsr5ruvIwDQYJKoZIhvcNAQELBQAw
GDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDAeFw0yMTExMDMwOTI1NTNaFw0zMTEx
MDEwOTI1NTNaMBgxFjAUBgNVBAoTDWNsdXN0ZXIubG9jYWwwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDFhg29rwZG0hJ+QqokaPVZtkZdrZuPlSXqS7yN
4/49C7Yz/SvnudYeWOLNYapfnFH1IDS+0He4WSaNjaBa754sdmVfhcMReaiJ+kTX
rQhGRSmaPkx83Fga9eVP+I/X6Rn1Y3CbABXBDS80O/d3o7kSwKu+WUoGYhfxjTpJ
tGOy15bJP2PgDP8mUZMnSy20vCqJ8f7McXBrrAS+Hr5RvFKVwaO31ziN3yeXImJg
Jk+iy+o4IE3O6b7mec0WJsNrypjjUvsJVQCdVtar1CcMi3F84yt5XM2CG0FONIb4
ALl2xlhSjYFQOEyvG7TQj6hJHRoJAGtsc1kygTVKaHIBIQWFAgMBAAGjQjBAMA4G
A1UdDwEB/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRMbhEXuUwz
g3yYrZPB4CiPsiBjYzANBgkqhkiG9w0BAQsFAAOCAQEATHmRhyP3GG23riOIpLMm
IEYxPKzigYIwYEPAvERfJomywLAqvllDNGuIoNaXVaWxKKygPOowHgmZvJkTo80I
LxBH7gbSaleWsF8Tzj4SgDZLjhTO+R1d8L4g55mqeVf01etvqbnGAjbNJF/+GOox
pdE+nU7qg6Z97DuRAzKwnaBUOA6Opaz6/PwEkcNKJHHrr/zcVXYMH9PqUIiYG/cE
+gSMNCfP53Z5/9vEncHYfPlQndAEJdfYnwEMc7lGAz6E7PZ01EOw0XEdFQT1P6/D
I7DDcajxmieadtM36GXpriEuHKA83jNcdrB1QtjOaLMxVP1N/tEaUTyb9YCCza0x
lw==
-----END CERTIFICATE-----

The Istio ConfigMap:

apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      meshId: mesh1
      proxyMetadata:
        BOOTSTRAP_XDS_AGENT: "true"
        ISTIO_META_DNS_CAPTURE: "true"
      tracing:
        sampling: 100
        zipkin:
          address: zipkin.istio-system:9411
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
  labels:
    install.operator.istio.io/owning-resource: istio
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.4
    release: istio
  name: istio
  namespace: istio-system

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 40 (19 by maintainers)

Most upvoted comments

We had the same issue, and setting the tls block with mode DISABLE fixed it on our side. I have a question for @lxv458 and @Noksa about your Istio configuration: are you in STRICT or PERMISSIVE mode? On our side we are in PERMISSIVE mode, so maybe it's a bug with PERMISSIVE mode when the tls block is not set.
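
For reference, a mesh-wide PERMISSIVE policy is a PeerAuthentication in the root namespace; a minimal sketch (illustrative, not taken from this issue):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: PERMISSIVE        # sidecars accept both mTLS and plaintext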

I receive the same error when two pods are in the same mesh but one of them has the following annotations (so all traffic bypasses Istio).

Error:

upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

Annotations on one of pods:

traffic.sidecar.istio.io/includeInboundPorts: ""
traffic.sidecar.istio.io/includeOutboundPorts: ""

To work around this it is currently necessary to create a DestinationRule as described above; in my case I added one for a specific service:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: sbc
spec:
  host: sbc
  trafficPolicy:
    tls:
      mode: DISABLE

Is this the correct behavior?

Looks like the mTLS request is going directly to your app. Maybe iptables isn't set up properly?

(In reply to: "same question, istio 1.11.4, cni: disable. http://10.98.41.167:31614/productpage returns: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER")

  trafficPolicy:
    tls:
      mode: DISABLE

It works when I add this snippet to samples/bookinfo/networking/destination-rule-all.yaml.

@howardjohn Maybe I know why. I found that my cluster (Istio version 1.8.4-r2) has a global PeerAuthentication whose mode is DISABLE:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: DISABLE

Meanwhile, the tls.mode set in the DestinationRule for myapp is ISTIO_MUTUAL. The two configurations are inconsistent, and that is what produces the 'OPENSSL_internal:WRONG_VERSION_NUMBER' error.

DR:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
  namespace: mynamespace
spec:
  host: myapp.mynamespace.svc.cluster.local
  subsets:
  - labels:
      version: v51777
    name: v51777
  - labels:
      version: 20220208091536-v26641
    name: 20220208091536-v26641
  trafficPolicy:
    connectionPool:
      http:
        idleTimeout: 15s
      tcp:
        maxConnections: 2048
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL

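One way to make the two consistent (a sketch under the configuration above, not a config taken from the issue) is to drop client-side mTLS for this host so that it matches the DISABLE policy; alternatively, the mesh-wide PeerAuthentication could be switched to PERMISSIVE or STRICT instead:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
  namespace: mynamespace
spec:
  host: myapp.mynamespace.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # align with the mesh-wide PeerAuthentication mode DISABLE
  # subsets and connection-pool settings from the original rule omitted for brevity
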
From all your info, I think --net host is a requirement.

This also affects the instructions for running prometheus in a (mostly) strict mesh: https://istio.io/latest/docs/ops/integrations/prometheus/

Grafana and Prometheus are running with sidecars using the above configuration, and Grafana is unable to talk to Prometheus. Additionally, we're unable to route to it from a gateway using a VirtualService. Both scenarios get the SSL wrong-version error.