istio: upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST

(NOTE: This is used to report product bugs. To report a security vulnerability, please visit https://istio.io/about/security-vulnerabilities; to ask questions about how to use Istio, please visit https://discuss.istio.io.)

Bug description

[ ] Docs [ ] Installation [X] Networking [ ] Performance and Scalability [ ] Extensions and Telemetry [ ] Security [X] Test and Release [ ] User Experience [ ] Developer Infrastructure [ ] Upgrade

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version --short and helm version --short if you used Helm)

[root@master01 ~]# istioctl version --remote
client version: 1.9.3
control plane version: 1.9.3
data plane version: 1.9.3 (3 proxies)


[root@master01 ~]# kubectl version --short
Client Version: v1.20.4
Server Version: v1.20.4

How was Istio installed?

Environment where the bug was observed (cloud vendor, OS, etc)

Additionally, please consider running istioctl bug-report and attaching the generated cluster-state tarball to this issue. Refer to the cluster state archive documentation for more details.




Deploy the Istio operator

istioctl operator init

istioctl --context="CTX_CLUSTER1" operator init

istioctl --context="CTX_CLUSTER2" operator init




Configure a common root of trust using the sample certificates shipped with Istio
kubectl --context="CTX_CLUSTER1" create ns istio-system

kubectl --context="CTX_CLUSTER1" create secret generic cacerts -n istio-system \
        --from-file=istio-1.9.3/samples/certs/ca-cert.pem \
        --from-file=istio-1.9.3/samples/certs/ca-key.pem \
        --from-file=istio-1.9.3/samples/certs/root-cert.pem \
        --from-file=istio-1.9.3/samples/certs/cert-chain.pem


kubectl --context="CTX_CLUSTER2" create ns istio-system

kubectl --context="CTX_CLUSTER2" create secret generic cacerts -n istio-system \
        --from-file=istio-1.9.3/samples/certs/ca-cert.pem \
        --from-file=istio-1.9.3/samples/certs/ca-key.pem \
        --from-file=istio-1.9.3/samples/certs/root-cert.pem \
        --from-file=istio-1.9.3/samples/certs/cert-chain.pem
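
As a sanity check (not part of the original steps), you can confirm that the cacerts secret exists in both clusters and that both carry an identical root certificate, since the two meshes can only authenticate each other if they share the same root of trust:

```shell
# Sanity check, assuming the CTX_CLUSTER1/CTX_CLUSTER2 contexts used above.
# The subject and fingerprint printed for both clusters must be identical.
for ctx in CTX_CLUSTER1 CTX_CLUSTER2; do
  echo "== $ctx =="
  kubectl --context="$ctx" -n istio-system get secret cacerts \
    -o jsonpath='{.data.root-cert\.pem}' | base64 -d | \
    openssl x509 -noout -subject -fingerprint
done
```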




Set cluster1 as the primary cluster
Create an Istio configuration file for cluster1
cat > cluster1.yaml << ERIC
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
ERIC

Apply the configuration file to cluster1
istioctl --context="CTX_CLUSTER1" install -f cluster1.yaml

kubectl --context="CTX_CLUSTER1" get svc,po -n istio-system

By watching the operator controller log, you can see the changes the controller makes in the cluster in response to the IstioOperator CR update:
kubectl logs -f -n istio-operator $(kubectl get pods -n istio-operator -lname=istio-operator -o jsonpath='{.items[0].metadata.name}')




Following the official multi-cluster installation documentation

Set cluster2 as the primary cluster
Create an Istio configuration file for cluster2
cat > cluster2.yaml << ERIC
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
ERIC

Apply the configuration file to cluster2
istioctl --context="CTX_CLUSTER2" install -f cluster2.yaml

kubectl --context="CTX_CLUSTER2" get svc,po -n istio-system



Enable endpoint discovery. There is no need to switch the kubectl context; the commands below pass the context explicitly.
## Install the secret of the remote cluster in cluster2, which provides access to the API server of the cluster1 cluster.
[root@master01 ~]# istioctl x create-remote-secret \
    --context="CTX_CLUSTER1" \
    --name=cluster1 | \
    kubectl apply -f - --context="CTX_CLUSTER2"

## View
[root@master01 ~]# kubectl get secret -n istio-system --context="CTX_CLUSTER2" | grep istio-remote
istio-remote-secret-cluster1 Opaque 1 7s


## -------------------------------------------------------------------------------------------------


## Install the secret of the remote cluster in cluster1, which provides access to the API server of cluster2.
[root@master01 ~]# istioctl x create-remote-secret \
    --context="CTX_CLUSTER2" \
    --name=cluster2 | \
    kubectl apply -f - --context="CTX_CLUSTER1"

## View
[root@master01 ~]# kubectl get secret -n istio-system --context="CTX_CLUSTER1" | grep istio-remote
istio-remote-secret-cluster2 Opaque 1 5s
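
Optionally (an extra check, not in the original steps), you can inspect the remote secret itself. The create-remote-secret command embeds a kubeconfig under a data key named after the cluster; istiod in the local cluster must be able to reach the remote API server with it:

```shell
# Assumption: the data key matches the --name passed to create-remote-secret.
# Print the first few lines of the embedded kubeconfig to confirm it points
# at cluster1's API server address, reachable from cluster2's istiod.
kubectl --context="CTX_CLUSTER2" -n istio-system \
  get secret istio-remote-secret-cluster1 \
  -o jsonpath='{.data.cluster1}' | base64 -d | head -n 10
```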





Verification
First, create a namespace sample in each cluster
kubectl create --context="CTX_CLUSTER1" namespace sample
kubectl create --context="CTX_CLUSTER2" namespace sample


Enable automatic sidecar injection for the namespace sample
kubectl label --context="CTX_CLUSTER1" namespace sample istio-injection=enabled
kubectl label --context="CTX_CLUSTER2" namespace sample istio-injection=enabled


Create HelloWorld Service in each cluster
kubectl apply --context="CTX_CLUSTER1" \
    -f istio-1.9.3/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample

kubectl apply --context="CTX_CLUSTER2" \
    -f istio-1.9.3/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample




Deploy the V1 version of HelloWorld
kubectl apply --context="CTX_CLUSTER1" \
    -f istio-1.9.3/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample


Deploy the V2 version of HelloWorld
kubectl apply --context="CTX_CLUSTER2" \
    -f istio-1.9.3/samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample




Deploy Sleep
kubectl apply --context="CTX_CLUSTER1" \
    -f istio-1.9.3/samples/sleep/sleep.yaml -n sample

kubectl apply --context="CTX_CLUSTER2" \
    -f istio-1.9.3/samples/sleep/sleep.yaml -n sample
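
Before sending traffic, it can help to confirm (a diagnostic sketch, not part of the original report) that the workloads are running and that each sleep sidecar actually sees helloworld endpoints from both clusters; if only one endpoint shows up, endpoint discovery is broken:

```shell
# Verify the pods are up in both clusters.
kubectl get pod --context="CTX_CLUSTER1" -n sample
kubectl get pod --context="CTX_CLUSTER2" -n sample

# The sleep proxy in cluster1 should list helloworld endpoints from BOTH
# clusters (istioctl 1.9 syntax assumed).
istioctl --context="CTX_CLUSTER1" proxy-config endpoint \
  "$(kubectl get pod --context="CTX_CLUSTER1" -n sample -l app=sleep \
     -o jsonpath='{.items[0].metadata.name}')" -n sample \
  --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
```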




Verify cross-cluster traffic

Send a request from the Sleep pod in cluster1 to the service HelloWorld

kubectl exec --context="CTX_CLUSTER1" -n sample -c sleep \
    "$(kubectl get pod --context="CTX_CLUSTER1" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl helloworld.sample:5000/hello

Repeat this process with the Sleep pod in cluster2
kubectl exec --context="CTX_CLUSTER2" -n sample -c sleep \
     "$(kubectl get pod --context="CTX_CLUSTER2" -n sample -l \
     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
     -- curl helloworld.sample:5000/hello




Test
[root@master01 ~]# kubectl exec --context="CTX_CLUSTER1" -n sample -c sleep \
>     "$(kubectl get pod --context="CTX_CLUSTER1" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    60  100    60    0     0    431      0 --:--:-- --:--:-- --:--:--   434
Hello version: v1, instance: helloworld-v1-776f57d5f6-zgbmz
[root@master01 ~]#
[root@master01 ~]#
[root@master01 ~]# kubectl exec --context="CTX_CLUSTER1" -n sample -c sleep \
>     "$(kubectl get pod --context="CTX_CLUSTER1" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS100   175  100   175    0     0     17      0  0:00:10  0:00:10 --:--:--    46
[root@master01 ~]#

[root@master01 ~]# kubectl exec --context="CTX_CLUSTER2" -n sample -c sleep \
>     "$(kubectl get pod --context="CTX_CLUSTER2" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Hello version: v2, instance: helloworld-v2-54df5f84b-zg8vg
100    59  100    59    0     0    310      0 --:--:-- --:--:-- --:--:--   310
[root@master01 ~]#
[root@master01 ~]#
[root@master01 ~]# kubectl exec --context="CTX_CLUSTER2" -n sample -c sleep \
>     "$(kubectl get pod --context="CTX_CLUSTER2" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   175  100   175    0     0     17      0  0:00:10  0:00:10 --:--:--    45
upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST[root@master01 ~]#
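
A diagnostic sketch (an assumption about the likely cause, not from the original report): the HTTP_REQUEST TLS error generally means one side sent plaintext HTTP where the other expected a TLS handshake, which across clusters often comes down to mismatched trust roots or a missing sidecar. Inspecting the certificates each sleep sidecar holds, and comparing the ROOTCA entries between clusters, can narrow this down:

```shell
# Dump the workload cert and ROOTCA loaded into each sleep sidecar.
# The ROOTCA serial/validity printed for the two clusters must match;
# if they differ, the clusters do not share a root of trust.
for ctx in CTX_CLUSTER1 CTX_CLUSTER2; do
  echo "== $ctx =="
  pod=$(kubectl get pod --context="$ctx" -n sample -l app=sleep \
        -o jsonpath='{.items[0].metadata.name}')
  istioctl --context="$ctx" proxy-config secret "$pod" -n sample
done
```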
[root@master01 ~]#




About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 14
  • Comments: 33 (11 by maintainers)

Most upvoted comments

I am using a VM sidecar and have the same error: upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST