kubescape: unable to use kubescape with host scanner on GKE

Description

Unable to use the host scanner flag "--enable-host-scan" in GKE.

Environment

GKE version: 1.24.8-gke.2000
Kubescape version: v2.0.183

Steps To Reproduce

Steps to reproduce the behavior:

  1. Create a GKE private cluster on GCP
  2. Install the most recent version of Kubescape
  3. Start a scan with "kubescape scan --enable-host-scan"
  4. Observe the error

Expected behavior

All information gathered by kubescape with the --enable-host-scan flag should be available to validate framework controls.

Actual Behavior

Errors are displayed:

[info] Requesting Host scanner data
[warning] the server is currently unable to handle the request (get pods http:host-scanner-4tgcc:7888)
[info] Unknown host scanner version
[error] failed to get data. path: /kubeletConfigurations; podName: host-scanner-4tgcc; error: the server is currently unable to handle the request (get pods http:host-scanner-4tgcc:7888)
[error] failed to get data. path: /kubeletConfigurations; podName: host-scanner-btmnv; error: the server is currently unable to handle the request (get pods http:host-scanner-btmnv:7888)
[error] failed to get data. path: /kubeletCommandLine; podName: host-scanner-btmnv; error: the server is currently unable to handle the request (get pods http:host-scanner-btmnv:7888)
…

Additional context

During tool execution, a dedicated namespace and pods are created for the host-scanner. These pods appear to gather the required information, since I can query them directly with curl:

$ curl 10.0.64.58:7888/cloudProviderInfo
{"providerMetaDataAPIAccess":true}
$ curl 10.0.64.58:7888/osRelease
NAME="Container-Optimized OS"
ID=cos
PRETTY_NAME="Container-Optimized OS from Google"
…
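Since the pods respond on their pod IPs while the scan fails, the broken hop is likely the API-server-to-pod proxy path that the error messages reference ("get pods http:host-scanner-4tgcc:7888"). A quick way to exercise that path directly is sketched below; the namespace is an assumption (check where kubescape deployed the host-scanner), and the pod name is just the example from the logs above:

# Call the host-scanner endpoint through the Kubernetes API server proxy,
# the same route kubescape takes (namespace and pod name are examples)
kubectl get --raw \
  "/api/v1/namespaces/kubescape/pods/http:host-scanner-4tgcc:7888/proxy/kernelVersion"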

Following errors can be found in pod logs:

Starting Kubescape cluster node host scanner service
Host scanner service build version: v1.0.39
{"level":"warn","ts":"2023-02-03T14:37:34.630138465Z","msg":"Not implemented"}
{"level":"info","ts":"2023-02-03T14:37:34.630186292Z","msg":"Listening…","port":7888}
{"level":"debug","ts":"2023-02-03T14:37:35.485074394Z","msg":"In filterNLogHTTPErrors","method":"GET","requestURI":"/kernelVersion","remoteAddr":"10.0.64.60:54274"}
{"level":"debug","ts":"2023-02-03T14:37:35.485217994Z","msg":"reading file on host file system","path":"/host_fs/proc/version"}
[negroni] 2023-02-03T14:37:35Z | 200 | 344.768µs | 10.0.64.60:7888 | GET /kernelVersion
{"level":"debug","ts":"2023-02-03T14:37:37.08216136Z","msg":"In filterNLogHTTPErrors","method":"GET","requestURI":"/kernelVersion","remoteAddr":"10.0.64.60:54280"}
{"level":"debug","ts":"2023-02-03T14:37:37.082298568Z","msg":"reading file on host file system","path":"/host_fs/proc/version"}
[negroni] 2023-02-03T14:37:37Z | 200 | 336.96µs | 10.0.64.60:7888 | GET /kernelVersion
…

About this issue

  • State: open
  • Created a year ago
  • Reactions: 2
  • Comments: 39 (20 by maintainers)

Most upvoted comments

Good to know, @kevin-shelaga. Can you try reverting that firewall rule and running the scan again without the open port?

The goal of the YAML file is to change the networking of the host-scanner Pod so that it works without requiring a firewall opening. If that works for you and @tomekzielins, we will integrate this change into the next release of kubescape.
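The attached YAML is not reproduced here, but one plausible mechanism for "working without a firewall opening" is running the host-scanner DaemonSet in the node's network namespace. A minimal sketch, assuming that is the change; the namespace and DaemonSet names are assumptions and may differ in your install:

# Patch the host-scanner DaemonSet to use hostNetwork so the control plane
# can reach it on the node's own address (namespace/name are assumptions)
kubectl -n kubescape patch daemonset host-scanner --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'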

The controlPlaneInfo errors have already been fixed by @alegrey91 and will be available in our next release as well.

Shared VPC -> https://cloud.google.com/vpc/docs/shared-vpc
This is a type of VPC created in a host project that can be shared with multiple other (service) projects. The GKE cluster is created in one of the service projects. It is also important that we use private clusters: the worker nodes have no public endpoints, and communication between the nodes and the control plane uses private IPs only. You can use a command similar to the following to create such a cluster:

gcloud container clusters create "cluster-1" \
  --zone="us-central1-c" \
  --project=[project ID] \
  --machine-type "e2-small" \
  --image-type "COS_CONTAINERD" \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr "172.16.0.0/28" \
  --num-nodes "3" \
  --network="projects/[host_project_ID]/global/networks/[shared_vpc_name]" \
  --subnetwork="projects/[host_project_ID]/regions/[region]/subnetworks/[subnetwork_name]" \
  --cluster-secondary-range-name [subnetwork_secondary_range_name_for_pods] \
  --services-secondary-range-name [subnetwork_secondary_range_name_for_service] \
  --default-max-pods-per-node "110" \
  --no-enable-master-authorized-networks \
  --enable-autoupgrade \
  --enable-autorepair \
  --node-locations "us-central1-c"

Indeed, the problem was with firewall rules. I had to open TCP port 7888 between the GKE control plane IP range and the IP range dedicated to the nodes (see the sketch below). It would be a good idea to add this information to the kubescape manual.
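For reference, opening that path would look roughly like the rule below. The rule name and node network tag are placeholders, and the source range matches the --master-ipv4-cidr used when creating the cluster above:

# Allow the GKE control plane range to reach the host-scanner port on nodes
# (rule name, network, and node tag are placeholders)
gcloud compute firewall-rules create allow-master-to-host-scanner \
  --project=[host_project_ID] \
  --network=[shared_vpc_name] \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:7888 \
  --source-ranges="172.16.0.0/28" \
  --target-tags=[gke_node_network_tag]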

@alegrey91, for what it's worth, I don't believe Istio is the issue here. The host scanner is not injected, so Istio plays no role. As others have mentioned, the GKE addon is deprecated, but you can install Istio with a single command: https://istio.io/latest/docs/setup/getting-started/

Minimal profile will suffice.
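For completeness, the one-command install mentioned above looks roughly like this with istioctl, per the linked getting-started guide (the release directory name depends on the downloaded version):

# Download the latest Istio release and install the minimal profile
curl -L https://istio.io/downloadIstio | sh -
cd istio-*                      # enter the downloaded release directory
./bin/istioctl install --set profile=minimal -y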

I would suggest creating a GKE standard private cluster to reproduce.

Very interesting… we’ll take some time to reproduce and debug this issue. @yuleib