deployKF: Degraded status of `deploykf-auth` and `deploykf-dashboard` on Windows WSL2

Checks

  • I have searched the existing issues.
  • This issue is NOT specific to the CLI. (If so, please open an issue on the CLI repo)

deployKF Version

0.1.2

Kubernetes Version

Client Version: v1.27.2
Server Version: v1.27.2

Description

I followed https://www.deploykf.org/guides/getting-started/ to set up a deployKF instance with the provided app-of-apps.yaml. After syncing the applications with the sync_argocd_apps.sh script, the apps deploykf-auth and deploykf-dashboard end up in a degraded state. See also the attached screenshot:

(screenshot: ArgoCD UI showing the deploykf-auth and deploykf-dashboard applications in Degraded status)

Relevant Logs

$ kubectl logs dex-57997d4c7b-86rb8 -n deploykf-auth
Error from server (BadRequest): container "dex" in pod "dex-57997d4c7b-86rb8" is waiting to start: PodInitializing
$ kubectl describe pods dex-57997d4c7b-86rb8 -n deploykf-auth
Name:             dex-57997d4c7b-86rb8
Namespace:        deploykf-auth
Priority:         0
Service Account:  dex
Node:             docker-desktop/192.168.65.3
Start Time:       Sun, 08 Oct 2023 17:39:19 +0200
Labels:           app.kubernetes.io/component=dex
                  app.kubernetes.io/instance=dkf-core--deploykf-auth
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=deploykf-auth
                  pod-template-hash=57997d4c7b
                  security.istio.io/tlsMode=istio
                  service.istio.io/canonical-name=deploykf-auth
                  service.istio.io/canonical-revision=latest
Annotations:      checksum/secret-dex: b1483a4b8cfee3940155d84a35fa3d3a2e19347a3f65012bbffb8be6b2b55348
                  checksum/secret-oauth2-proxy: c39ba2d928ff5c8c4c48e1735ec04e9319de638d6a45f77731de8f218c0e6f9d
                  kubectl.kubernetes.io/default-container: dex
                  kubectl.kubernetes.io/default-logs-container: dex
                  prometheus.io/path: /stats/prometheus
                  prometheus.io/port: 15020
                  prometheus.io/scrape: true
                  sidecar.istio.io/status:
                    {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
Status:           Pending
IP:               10.1.1.89
IPs:
  IP:           10.1.1.89
Controlled By:  ReplicaSet/dex-57997d4c7b
Init Containers:
  istio-init:
    Container ID:  docker://d3b44ccacb60dca53fcd8c164842465d0a759c5b86a1a0b822a2e6eb117e1ee0
    Image:         docker.io/istio/proxyv2:1.17.3-distroless
    Image ID:      docker-pullable://istio/proxyv2@sha256:da91c6038778ac78a177f9ab5b136f160ec5d08401302f2ec022bc480baa39b4
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --log_output_level=default:info
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 08 Oct 2023 18:00:22 +0200
      Finished:     Sun, 08 Oct 2023 18:00:22 +0200
    Ready:          False
    Restart Count:  9
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      ISTIO_META_DNS_AUTO_ALLOCATE:  true
      ISTIO_META_DNS_CAPTURE:        true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7kcmm (ro)
Containers:
  istio-proxy:
    Container ID:
    Image:         docker.io/istio/proxyv2:1.17.3-distroless
    Image ID:
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      dex-57997d4c7b-86rb8 (v1:metadata.name)
      POD_NAMESPACE:                 deploykf-auth (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {"proxyMetadata":{"ISTIO_META_DNS_AUTO_ALLOCATE":"true","ISTIO_META_DNS_CAPTURE":"true"},"holdApplicationUntilProxyStarts":true}

      ISTIO_META_POD_PORTS:          [
                                         {"name":"http","containerPort":5556,"protocol":"TCP"}
                                         ,{"name":"telemetry","containerPort":5558,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     dex
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_NODE_NAME:           (v1:spec.nodeName)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      dex
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/deploykf-auth/deployments/dex
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
      ISTIO_META_DNS_AUTO_ALLOCATE:  true
      ISTIO_META_DNS_CAPTURE:        true
      ISTIO_KUBE_APP_PROBERS:        {"/app-health/dex/livez":{"httpGet":{"path":"/healthz/live","port":5558,"scheme":"HTTP"},"timeoutSeconds":1},"/app-health/dex/readyz":{"httpGet":{"path":"/healthz/ready","port":5558,"scheme":"HTTP"},"timeoutSeconds":1}}
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/credential-uds from credential-socket (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7kcmm (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
      /var/run/secrets/workload-spiffe-uds from workload-socket (rw)
  dex:
    Container ID:
    Image:         ghcr.io/dexidp/dex:v2.37.0
    Image ID:
    Ports:         5556/TCP, 5558/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      dex
      serve
      /etc/dex/config.yaml
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:15020/app-health/dex/livez delay=15s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://:15020/app-health/dex/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      GOMPLATE_LEFT_DELIM:                   <<
      GOMPLATE_RIGHT_DELIM:                  >>
      CONFIG__CLIENT_SECRET__OAUTH2_PROXY:   <set to the key 'client_secret' in secret 'generated--dex-oauth2-proxy-client'>   Optional: false
      CONFIG__CLIENT_SECRET__MINIO_CONSOLE:  <set to the key 'client_secret' in secret 'generated--dex-minio-console-client'>  Optional: false
      CONFIG__CLIENT_SECRET__ARGO_SERVER:    <set to the key 'client_secret' in secret 'generated--dex-argo-server-client'>    Optional: false
    Mounts:
      /etc/dex/config.yaml from dex-config (ro,path="config.yaml")
      /srv/dex/web/themes/light/favicon.png from dex-theme (ro,path="favicon.svg")
      /srv/dex/web/themes/light/logo.svg from dex-theme (ro,path="logo.svg")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7kcmm (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  workload-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  credential-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  workload-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  dex-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dex-config
    Optional:    false
  dex-theme:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dex-theme
    Optional:  false
  kube-api-access-7kcmm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  24m                   default-scheduler  Successfully assigned deploykf-auth/dex-57997d4c7b-86rb8 to docker-desktop
  Normal   Pulled     23m (x5 over 24m)     kubelet            Container image "docker.io/istio/proxyv2:1.17.3-distroless" already present on machine
  Normal   Created    22m (x5 over 24m)     kubelet            Created container istio-init
  Normal   Started    22m (x5 over 24m)     kubelet            Started container istio-init
  Warning  BackOff    4m25s (x93 over 24m)  kubelet            Back-off restarting failed container istio-init in pod dex-57997d4c7b-86rb8_deploykf-auth(65a16450-3504-4458-a81e-ee939b69daff)
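The events above show istio-init crash-looping with exit code 255 before dex ever starts, so the useful logs are the init container's own: `kubectl logs dex-57997d4c7b-86rb8 -n deploykf-auth -c istio-init --previous`. As a small sketch, the relevant exit code can also be pulled straight out of the describe output; the snippet below runs on the text captured above rather than a live cluster:

```shell
# Extract the init container's last exit code from a captured
# `kubectl describe pod` snippet (taken verbatim from the output above).
describe='Init Containers:
  istio-init:
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255'

exit_code="$(printf '%s\n' "$describe" | awk '/Exit Code:/ {print $3}')"
echo "$exit_code"
# prints: 255
```

Against the real cluster, `kubectl describe pod dex-57997d4c7b-86rb8 -n deploykf-auth | awk '/Exit Code:/ {print $3}'` does the same thing in one step.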


deployKF Values (Optional)

_No response_

About this issue

  • Original URL
  • State: closed
  • Created 9 months ago
  • Comments: 16 (7 by maintainers)

Most upvoted comments

@mircomarahrens Sorry for the delay, was a bit bogged down getting the Kubeflow 1.8 release ready upstream!

But I have published a new “local quickstart” for deployKF, which explains how to deploy on Windows, within WSL 2 (which currently requires using a custom Linux Kernel).

@mircomarahrens yea, if it’s possible to get Podman working with k3d on Windows, we should have a pop-out that explains how.

I have already put a pop-out for macOS, see “Can I use Podman instead of Docker Desktop?” in the requirements for macOS, but I am not sure if Podman on Windows is as mature, or compatible in the same way.

If you are familiar you could try it out and report back?

@mircomarahrens yep, just give me some time to get it compiled and update the instructions https://github.com/deployKF/WSL2-Linux-Kernel
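For reference, pointing WSL 2 at a custom-built kernel (such as one from the linked repo) is done through the Windows-side `.wslconfig` file. The sketch below is an assumption about the setup, not part of the repo's instructions, and the kernel path is a placeholder; the fragment is written to /tmp here purely so it can be inspected:

```shell
# Hedged sketch: after building a custom kernel image (bzImage),
# reference it from %UserProfile%\.wslconfig on the Windows side.
# The real file lives at C:\Users\<you>\.wslconfig; backslashes in
# the path must be doubled in this file format.
cat <<'EOF' > /tmp/.wslconfig
[wsl2]
kernel=C:\\wsl\\bzImage
EOF
grep -c '^kernel=' /tmp/.wslconfig
# prints: 1
```

After editing the real file, the distribution has to be restarted (`wsl --shutdown` from Windows) for the kernel setting to take effect.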

I believe you are encountering this upstream issue https://github.com/istio/istio/issues/37885, which seems to be an incompatibility with Windows WSL2 and the Istio “DNS proxy” feature (which deployKF uses).

I will take a look at what needs to be done to get it working, as I am about to publish a “local quickstart” for Windows, so I want to get it working if I can.
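The `ISTIO_META_DNS_CAPTURE: true` entries in the pod description above are the DNS-proxy feature in question. As a purely diagnostic sketch, not an official deployKF setting (deployKF relies on DNS capture, and ArgoCD will likely revert the change on the next sync), one could try turning it off for a single Deployment via Istio's standard `proxy.istio.io/config` annotation to confirm the diagnosis:

```shell
# Hedged workaround sketch: a strategic-merge patch that overrides the
# sidecar's proxy metadata to disable DNS capture for the dex pods.
# Written to /tmp so it can be reviewed first; apply (if at all) with:
#   kubectl -n deploykf-auth patch deployment dex \
#     --type merge --patch-file /tmp/dex-dns-capture-patch.yaml
cat <<'EOF' > /tmp/dex-dns-capture-patch.yaml
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "false"
EOF
grep -c 'ISTIO_META_DNS_CAPTURE' /tmp/dex-dns-capture-patch.yaml
# prints: 1
```

If the dex pod then starts cleanly, that points strongly at the DNS-proxy/WSL2 incompatibility from the linked Istio issue.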