istio: PeerAuthentication not working with EFK stack in Istio 1.5

Bug description I have an EFK stack (Elasticsearch, Fluentd, Kibana) installed with a namespace-wide PeerAuthentication to enable mTLS. When I use the old namespace-wide Policy, mTLS seems to work fine, but using PeerAuthentication causes connections in the namespace to be rejected.

[ ] Configuration Infrastructure
[ ] Docs
[x] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior mTLS should be enabled equally with either Policy or PeerAuthentication in a namespace.

Steps to reproduce the bug I installed the EFK stack using https://github.com/bitnami/kube-prod-runtime , and the stack boots up perfectly. When I apply the following:

cat << EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: kubeprod
spec:
  mtls:
    mode: STRICT
EOF

the communication between Elasticsearch pods drops. If I replace it with:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  labels:
    name: default
  name: default
  namespace: kubeprod
spec:
  host: '*.kubeprod.svc.cluster.local'
  subsets: []
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  labels:
    name: default
  name: default
  namespace: kubeprod
spec:
  origins: []
  peers:
  - mtls: {}
  targets: []

mTLS works fine.
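As a debugging step (not part of the original report), a PERMISSIVE-mode PeerAuthentication can help isolate the problem: PERMISSIVE accepts both mTLS and plaintext, so if the Elasticsearch pods recover under it, the traffic STRICT mode rejects is plaintext or pod-IP passthrough traffic (Elasticsearch nodes talk to each other over pod IPs on the transport port):

```yaml
# Hypothetical debugging variant of the policy above: only the mode
# differs. If inter-pod communication works under PERMISSIVE but not
# STRICT, the sidecar is not applying mTLS to that traffic path.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: kubeprod
spec:
  mtls:
    mode: PERMISSIVE
```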

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)

istioctl version --remote

client version: 1.5.0
control plane version: 1.5.0
data plane version: 1.5.0 (5 proxies)

kubectl version

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

helm version

version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}

How was Istio installed?

apiVersion: install.istio.io/v1alpha2
kind: IstioOperator
spec:
  profile: demo
  values:
    sidecarInjectorWebhook:
      rewriteAppHTTPProbe: true
    global:
      controlPlaneSecurityEnabled: true
      mtls:
        enabled: false
      sds:
        enabled: true
    gateways:
      istio-ingressgateway:
        type: LoadBalancer
        sds:
          enabled: true
    kiali:
      enabled: true
      dashboard:
        jaegerURL: "http://jaeger-query:16686"
        grafanaURL: "http://grafana:3000"
    grafana:
      enabled: true

Environment where bug was observed (cloud vendor, OS, etc.): minikube v1.8.1 (commit cbda04cf6bbe65e987ae52bb393c10099ab62014)

Cluster state archive attached to this issue: istio-dump.tar.gz

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 32 (9 by maintainers)

Most upvoted comments

HTTP works differently. @irizzant Istio is able to see the HTTP Host header and route to the correct outbound cluster (which carries the client-side TLS config), even if you are using the pod IP for communication. For TCP we can't inspect the payload to do that kind of protocol-specific routing.

I have been talking with @hzxuzhonghu: we might be able to add another headless service definition on port 9300. Combined with the PILOT_ENABLE_HEADLESS_LISTENER feature, we can then route pod-IP traffic on 9300 to the correct cluster rather than passing it through. You would then need to configure a DestinationRule, since auto-mTLS does not work for ORIGINAL_DST-typed clusters (which is what the cluster corresponding to a headless service is in Envoy).
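The suggested workaround might look like the following sketch. The Service name and selector labels here are assumptions (they depend on how the kube-prod-runtime chart labels the Elasticsearch pods), and PILOT_ENABLE_HEADLESS_LISTENER would additionally need to be enabled on the control plane:

```yaml
# Hypothetical headless Service exposing the Elasticsearch transport
# port, so Istio builds a listener for pod-IP traffic on 9300 instead
# of treating it as passthrough.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-transport   # assumed name
  namespace: kubeprod
spec:
  clusterIP: None                 # headless
  selector:
    app: elasticsearch            # assumed label
  ports:
  - name: tcp-transport           # "tcp-" prefix marks the protocol
    port: 9300
---
# Explicit DestinationRule, needed because auto-mTLS does not apply
# to the ORIGINAL_DST cluster Envoy uses for a headless service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: elasticsearch-transport
  namespace: kubeprod
spec:
  host: elasticsearch-transport.kubeprod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```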

I haven't had a chance to play with it yet.