vsphere-csi-driver: failed to get CsiNodeTopology for the node: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1", restarting registration container.

/kind bug

What happened: I am trying to install the vSphere CSI driver v2.7.0 on an RKE2 cluster v1.24.10+rke2r1.

$ cat /etc/rancher/rke2/config.yaml
cloud-provider-name: external

$ cat csi-vsphere.conf
[Global]
cluster-id = "${CLUSTER_NAME}"
cluster-distribution = "Kubernetes"

[VirtualCenter "172.16.16.110"]
insecure-flag = "true"
user = "user1@vsphere.local"
password = "password12345"
port = "443"
datacenters = "datacenter1"

root@urnpk8sm60:~# kubectl --namespace=vmware-system-csi get all
NAME                                          READY   STATUS             RESTARTS      AGE
pod/vsphere-csi-controller-7589ccbcf8-6w7pw   0/7     Pending            0             3m2s
pod/vsphere-csi-controller-7589ccbcf8-phl5c   0/7     Pending            0             3m2s
pod/vsphere-csi-controller-7589ccbcf8-wwwfc   0/7     Pending            0             3m2s
pod/vsphere-csi-node-6vljg                    2/3     CrashLoopBackOff   4 (79s ago)   3m2s
pod/vsphere-csi-node-dpnh9                    2/3     CrashLoopBackOff   5 (7s ago)    3m2s
pod/vsphere-csi-node-jd4wt                    2/3     CrashLoopBackOff   4 (78s ago)   3m2s
pod/vsphere-csi-node-wtlp7                    2/3     CrashLoopBackOff   4 (72s ago)   3m2s

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/vsphere-csi-controller   ClusterIP   10.43.162.210   <none>        2112/TCP,2113/TCP   3m2s

NAME                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
daemonset.apps/vsphere-csi-node            4         4         0       4            0           kubernetes.io/os=linux     3m2s
daemonset.apps/vsphere-csi-node-windows    0         0         0       0            0           kubernetes.io/os=windows   3m2s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vsphere-csi-controller   0/3     3            0           3m2s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/vsphere-csi-controller-7589ccbcf8   3         3         0       3m2s

root@urnpk8sm60:~# kubectl --namespace=vmware-system-csi logs pod/vsphere-csi-node-wtlp7
Defaulted container "node-driver-registrar" out of: node-driver-registrar, vsphere-csi-node, liveness-probe
I0315 11:27:48.852737       1 main.go:166] Version: v2.5.1
I0315 11:27:48.852835       1 main.go:167] Running node-driver-registrar in mode=registration
I0315 11:27:48.854993       1 main.go:191] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0315 11:27:48.855119       1 connection.go:154] Connecting to unix:///csi/csi.sock
I0315 11:27:48.859495       1 main.go:198] Calling CSI driver to discover driver name
I0315 11:27:48.859554       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0315 11:27:48.859566       1 connection.go:184] GRPC request: {}
I0315 11:27:48.875719       1 connection.go:186] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v2.7.0"}
I0315 11:27:48.876170       1 connection.go:187] GRPC error: <nil>
I0315 11:27:48.876774       1 main.go:208] CSI driver name: "csi.vsphere.vmware.com"
I0315 11:27:48.877323       1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0315 11:27:48.878695       1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0315 11:27:48.879412       1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0315 11:27:49.996391       1 main.go:102] Received GetInfo call: &InfoRequest{}
I0315 11:27:49.998477       1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0315 11:27:50.069958       1 main.go:120] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "urnpk8sm60". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1",}
E0315 11:27:50.070058       1 main.go:122] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "urnpk8sm60". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1", restarting registration container.
root@urnpk8sm60:~#

After changing improved-volume-topology: "true" to "false" in vsphere-csi-driver.yaml, the vsphere-csi-node pods are running, but the vsphere-csi-controller pods are still in Pending state due to node affinity/selector:

Warning FailedScheduling 26s default-scheduler 0/4 nodes are available: 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
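The "no matches for kind" error means the API server does not know the CSINodeTopology CRD that the node registrar queries. A quick diagnostic sketch (assumes kubectl access to the cluster; the ConfigMap name comes from the driver's --fss-name argument in the vanilla manifest):

```shell
# Is the CSINodeTopology CRD installed at all?
kubectl get crd csinodetopologies.cns.vmware.com

# Which kinds does the cns.vmware.com API group actually serve?
kubectl api-resources --api-group=cns.vmware.com

# Inspect the feature-state ConfigMap the driver reads;
# improved-volume-topology controls whether CSINodeTopology is used
kubectl -n vmware-system-csi get configmap \
  internal-feature-states.csi.vsphere.vmware.com -o yaml
```

If the first command returns NotFound while improved-volume-topology is "true", the node registrar will keep failing exactly as in the log above.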

What you expected to happen: The same steps work fine with vanilla Kubernetes, but they do not work with RKE2.

Environment:

  • csi-vsphere version: v2.7.0
  • vsphere-cloud-controller-manager version: 1.24
  • Kubernetes version: v1.24.10+rke2r1
  • vSphere version: 7.0.3
  • OS (e.g. from /etc/os-release): Ubuntu 22.04
  • Kernel (e.g. uname -a): 5.15.0-60-generic
  • Install tools: NA
  • Others: NA

About this issue

  • State: open
  • Created a year ago
  • Reactions: 1
  • Comments: 35 (15 by maintainers)

Most upvoted comments

I also tested with CSI v3.1.0 and got the same result.

Before installing the CSI driver, I tainted the master nodes to match the pod tolerations:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.1.0/manifests/vanilla/namespace.yaml

mkdir -p /root/vsan
cat <<'EOXF' > /root/vsan/csi-vsphere.conf
[Global]
cluster-id = "my-k8s-vmw"
cluster-distribution = "native"

[VirtualCenter "172.16.1.1"]
insecure-flag = "true"
user = "k8s-vsphere-csi@local"
password = "password"
port = "443"
datacenters = "dc01"
EOXF

kubectl create secret generic vsphere-config-secret --from-file=/root/vsan/csi-vsphere.conf --namespace=vmware-system-csi

kubectl taint nodes so-m001 node-role.kubernetes.io/control-plane=:NoSchedule
kubectl taint nodes so-m002 node-role.kubernetes.io/control-plane=:NoSchedule
kubectl taint nodes so-m003 node-role.kubernetes.io/control-plane=:NoSchedule

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.1.0/manifests/vanilla/vsphere-csi-driver.yaml

controller pod: didn't match Pod's node affinity/selector

kubectl get pods,sc,pvc,pv -n vmware-system-csi
NAME                                          READY   STATUS             RESTARTS      AGE
pod/vsphere-csi-controller-699f9799f8-7pq89   0/7     Pending            0             4m25s
pod/vsphere-csi-controller-699f9799f8-9vbcd   0/7     Pending            0             4m25s
pod/vsphere-csi-controller-699f9799f8-pdcc2   0/7     Pending            0             4m25s
pod/vsphere-csi-node-flwvd                    2/3     CrashLoopBackOff   5 (68s ago)   4m25s
pod/vsphere-csi-node-g6g5b                    2/3     CrashLoopBackOff   5 (63s ago)   4m25s
pod/vsphere-csi-node-hmgzf                    2/3     CrashLoopBackOff   5 (76s ago)   4m25s
pod/vsphere-csi-node-wt7sp                    2/3     CrashLoopBackOff   5 (58s ago)   4m25s
pod/vsphere-csi-node-wtgwj                    2/3     CrashLoopBackOff   5 (71s ago)   4m25s
pod/vsphere-csi-node-zvdl7                    2/3     CrashLoopBackOff   5 (80s ago)   4m25s


kubectl describe pod vsphere-csi-controller-699f9799f8-7pq89 -n vmware-system-csi
Name:                 vsphere-csi-controller-699f9799f8-7pq89
Namespace:            vmware-system-csi
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      vsphere-csi-controller
Node:                 <none>
Labels:               app=vsphere-csi-controller
                      pod-template-hash=699f9799f8
                      role=vsphere-csi
Annotations:          <none>
Status:               Pending
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/vsphere-csi-controller-699f9799f8
Containers:
  csi-attacher:
    Image:      registry.k8s.io/sig-storage/csi-attacher:v4.3.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=4
      --timeout=300s
      --csi-address=$(ADDRESS)
      --leader-election
      --leader-election-lease-duration=120s
      --leader-election-renew-deadline=60s
      --leader-election-retry-period=30s
      --kube-api-qps=100
      --kube-api-burst=100
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  csi-resizer:
    Image:      registry.k8s.io/sig-storage/csi-resizer:v1.8.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=4
      --timeout=300s
      --handle-volume-inuse-error=false
      --csi-address=$(ADDRESS)
      --kube-api-qps=100
      --kube-api-burst=100
      --leader-election
      --leader-election-lease-duration=120s
      --leader-election-renew-deadline=60s
      --leader-election-retry-period=30s
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  vsphere-csi-controller:
    Image:       gcr.io/cloud-provider-vsphere/csi/release/driver:v3.1.0
    Ports:       9808/TCP, 2112/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --fss-name=internal-feature-states.csi.vsphere.vmware.com
      --fss-namespace=$(CSI_NAMESPACE)
    Liveness:  http-get http://:healthz/healthz delay=30s timeout=10s period=180s #success=1 #failure=3
    Environment:
      CSI_ENDPOINT:                     unix:///csi/csi.sock
      X_CSI_MODE:                       controller
      X_CSI_SPEC_DISABLE_LEN_CHECK:     true
      X_CSI_SERIAL_VOL_ACCESS_TIMEOUT:  3m
      VSPHERE_CSI_CONFIG:               /etc/cloud/csi-vsphere.conf
      LOGGER_LEVEL:                     PRODUCTION
      INCLUSTER_CLIENT_QPS:             100
      INCLUSTER_CLIENT_BURST:           100
      CSI_NAMESPACE:                    vmware-system-csi (v1:metadata.namespace)
    Mounts:
      /csi from socket-dir (rw)
      /etc/cloud from vsphere-config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  liveness-probe:
    Image:      registry.k8s.io/sig-storage/livenessprobe:v2.10.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=4
      --csi-address=/csi/csi.sock
    Environment:  <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  vsphere-syncer:
    Image:      gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.1.0
    Port:       2113/TCP
    Host Port:  0/TCP
    Args:
      --leader-election
      --leader-election-lease-duration=30s
      --leader-election-renew-deadline=20s
      --leader-election-retry-period=10s
      --fss-name=internal-feature-states.csi.vsphere.vmware.com
      --fss-namespace=$(CSI_NAMESPACE)
    Environment:
      FULL_SYNC_INTERVAL_MINUTES:  30
      VSPHERE_CSI_CONFIG:          /etc/cloud/csi-vsphere.conf
      LOGGER_LEVEL:                PRODUCTION
      INCLUSTER_CLIENT_QPS:        100
      INCLUSTER_CLIENT_BURST:      100
      GODEBUG:                     x509sha1=1
      CSI_NAMESPACE:               vmware-system-csi (v1:metadata.namespace)
    Mounts:
      /etc/cloud from vsphere-config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  csi-provisioner:
    Image:      registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=4
      --timeout=300s
      --csi-address=$(ADDRESS)
      --kube-api-qps=100
      --kube-api-burst=100
      --leader-election
      --leader-election-lease-duration=120s
      --leader-election-renew-deadline=60s
      --leader-election-retry-period=30s
      --default-fstype=ext4
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
  csi-snapshotter:
    Image:      registry.k8s.io/sig-storage/csi-snapshotter:v6.2.2
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=4
      --kube-api-qps=100
      --kube-api-burst=100
      --timeout=300s
      --csi-address=$(ADDRESS)
      --leader-election
      --leader-election-lease-duration=120s
      --leader-election-renew-deadline=60s
      --leader-election-retry-period=30s
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm6k4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  vsphere-config-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vsphere-config-secret
    Optional:    false
  socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-xm6k4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              node-role.kubernetes.io/control-plane=
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                             node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  5m3s  default-scheduler  0/6 nodes are available: 6 node(s) didn't match Pod's node affinity/selector. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..

Labels & Taints I have on the cluster

kubectl describe nodes | egrep "Taints:|Name:"
Name:               so-m001
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Name:               so-m002
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Name:               so-m003
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Name:               so-w001
Taints:             <none>
Name:               so-w002
Taints:             <none>
Name:               so-w003
Taints:             <none>

kubectl get nodes --show-labels
NAME      STATUS   ROLES                       AGE   VERSION          LABELS
so-m001   Ready    control-plane,etcd,master   13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-m001,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=rke2
so-m002   Ready    control-plane,etcd,master   13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-m002,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=rke2
so-m003   Ready    control-plane,etcd,master   13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-m003,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=rke2
so-w001   Ready    <none>                      13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-w001,kubernetes.io/os=linux,node.kubernetes.io/instance-type=rke2
so-w002   Ready    <none>                      13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-w002,kubernetes.io/os=linux,node.kubernetes.io/instance-type=rke2
so-w003   Ready    <none>                      13d   v1.26.9+rke2r1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=rke2,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=so-w003,kubernetes.io/os=linux,node.kubernetes.io/instance-type=rke2

failed to get CsiNodeTopology from node

kubectl logs -n vmware-system-csi vsphere-csi-node-flwvd
Defaulted container "node-driver-registrar" out of: node-driver-registrar, vsphere-csi-node, liveness-probe
I1009 21:08:03.930857       1 main.go:167] Version: v2.8.0
I1009 21:08:03.930911       1 main.go:168] Running node-driver-registrar in mode=registration
I1009 21:08:03.931414       1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I1009 21:08:03.931470       1 connection.go:164] Connecting to unix:///csi/csi.sock
I1009 21:08:03.932081       1 main.go:199] Calling CSI driver to discover driver name
I1009 21:08:03.932093       1 connection.go:193] GRPC call: /csi.v1.Identity/GetPluginInfo
I1009 21:08:03.932096       1 connection.go:194] GRPC request: {}
I1009 21:08:03.933865       1 connection.go:200] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.1.0"}
I1009 21:08:03.933873       1 connection.go:201] GRPC error: <nil>
I1009 21:08:03.933879       1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I1009 21:08:03.933919       1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I1009 21:08:03.934026       1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I1009 21:08:03.934616       1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I1009 21:08:05.325659       1 main.go:102] Received GetInfo call: &InfoRequest{}
I1009 21:08:05.326700       1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I1009 21:08:05.344075       1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "so-m003". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1",}
E1009 21:08:05.344095       1 main.go:123] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "so-m003". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1", restarting registration container.

Controller csi-attacher out of…

kubectl logs -n vmware-system-csi vsphere-csi-controller-699f9799f8-7pq89
Defaulted container "csi-attacher" out of: csi-attacher, csi-resizer, vsphere-csi-controller, liveness-probe, vsphere-syncer, csi-provisioner, csi-snapshotter

And in the manifest file https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v3.1.0/manifests/vanilla/vsphere-csi-driver.yaml

I see only this podAntiAffinity, which is supposed to ensure only one controller instance runs per node, and these tolerations, which are supposed to let the pods run on the control-plane (master) nodes:

spec:
      priorityClassName: system-cluster-critical # Guarantees scheduling for critical system pods
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - vsphere-csi-controller
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: vsphere-csi-controller
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule

How do I fix Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1", restarting registration container. and FailedScheduling 5m3s default-scheduler 0/6 nodes are available: 6 node(s) didn't match Pod's node affinity/selector. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..?

@qdrddr After looking at labels on your nodes, I think you need to change nodeSelector in the deployment yaml file. Current nodeSelector is:

      nodeSelector:
        node-role.kubernetes.io/control-plane: ""

Try to change it as mentioned below and see if it works:

      nodeSelector:
        node-role.kubernetes.io/control-plane: "true"

As per my observation, on an RKE k8s cluster the master nodes get the label "node-role.kubernetes.io/control-plane=true", whereas on a normal on-prem k8s cluster the master nodes have the label "node-role.kubernetes.io/control-plane=" (empty value).
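You can confirm which variant your cluster uses by printing the label value per node (a quick check; -L adds a column with the value of the given label):

```shell
# RKE/RKE2 nodes typically show "true" in the CONTROL-PLANE column,
# kubeadm-built clusters show an empty value
kubectl get nodes -L node-role.kubernetes.io/control-plane
```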

Please let us know if pod scheduling works after making this change. I recommend deleting the old deployment first and then re-deploying the CSI driver after the change.
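Alternatively, instead of editing the manifest and re-applying, the nodeSelector can be patched in place (a sketch; verify the deployment name and namespace match your install before running it):

```shell
# Point the controller nodeSelector at the RKE2-style label value;
# the scheduler will then consider the control-plane nodes again
kubectl -n vmware-system-csi patch deployment vsphere-csi-controller \
  --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/control-plane":"true"}}}}}'
```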

Guys, I have the same problem. I am, however, using version 3.0.0, and my pods are Pending / CrashLoopBackOff:

vsphere-csi-controller-68c65dbdd5-cb9jb   0/7     Pending            0             19m
vsphere-csi-controller-68c65dbdd5-whswk   0/7     Pending            0             19m
vsphere-csi-node-9qlc6                    2/3     CrashLoopBackOff   5 (28s ago)   3m40s
vsphere-csi-node-h9hkq                    2/3     CrashLoopBackOff   5 (30s ago)   3m40s
vsphere-csi-node-nbvfp                    2/3     CrashLoopBackOff   5 (45s ago)   3m40s

and in the logs of one of the node pods I get this:

Defaulted container "node-driver-registrar" out of: node-driver-registrar, vsphere-csi-node, liveness-probe
I0403 22:57:51.418542       1 main.go:167] Version: v2.7.0
I0403 22:57:51.418588       1 main.go:168] Running node-driver-registrar in mode=registration
I0403 22:57:51.419473       1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0403 22:57:51.419515       1 connection.go:154] Connecting to unix:///csi/csi.sock
I0403 22:57:51.420762       1 main.go:199] Calling CSI driver to discover driver name
I0403 22:57:51.420772       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0403 22:57:51.420776       1 connection.go:184] GRPC request: {}
I0403 22:57:51.424195       1 connection.go:186] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.0.0"}
I0403 22:57:51.424239       1 connection.go:187] GRPC error: <nil>
I0403 22:57:51.424247       1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I0403 22:57:51.424312       1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0403 22:57:51.424466       1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0403 22:57:51.424537       1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0403 22:57:52.522333       1 main.go:102] Received GetInfo call: &InfoRequest{}
I0403 22:57:52.522670       1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0403 22:57:52.533985       1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "k8s-worker02". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1",}
E0403 22:57:52.534009       1 main.go:123] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = failed to get CsiNodeTopology for the node: "k8s-worker02". Error: no matches for kind "CSINodeTopology" in version "cns.vmware.com/v1alpha1", restarting registration container.

I'm following this guide step by step: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-54BB79D2-B13F-4673-8CC2-63A772D17B3C.html

My environment consists of: k8s cluster 1.26.3 (1 master node, 2 worker nodes), ESXi 7.0.3, vCenter 7.0.3.