falco: Following the documentation for minikube deployment doesn't work
Describe the bug
Following the documentation for creating a “Falco Learning Environment” unfortunately does not work. After deploying Falco with Helm, the pod logs show this error:
Tue Mar 15 15:25:42 2022: Runtime error: Kernel module does not support PPM_IOCTL_GET_API_VERSION. Exiting.
Workaround
I went on to try a handful of other virtual machine drivers for minikube, to no avail. I then turned to the Kubernetes Slack, where I got help from @terylt.
As it turns out, a script runs at startup that tries to install the kernel module / eBPF probe needed to get Falco running in the relevant environment. This bash script does some guesswork to determine which operating system it is running on, then decides on the correct approach to install the kernel module / eBPF probe. That detection does not currently work for minikube.
To get the script to pass through the logic linked here (and hence correctly determine that it is running in a minikube VM), the daemonset must be modified as follows, via `helm template` or `kubectl edit` after deployment:
```yaml
containers:
  - name: falco
    ...
    volumeMounts:
      ...
      - mountPath: /host/etc/VERSION
        name: etc-fs
        readOnly: true
      ...
volumes:
  ...
  - name: etc-fs
    hostPath:
      path: /etc/VERSION
  ...
```
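For intuition, here is a hedged sketch of the detection idea — not the actual falco-driver-loader code, and the variable names are illustrative. Once the minikube VM's `/etc/VERSION` is mounted under `/host`, a script can recognize the environment:

```shell
#!/bin/sh
# Illustrative sketch only -- not the real falco-driver-loader logic.
# With the hostPath mount above, the minikube VM's /etc/VERSION file
# becomes visible inside the container under ${HOST_ROOT}.
HOST_ROOT="${HOST_ROOT:-/host}"

detect_target() {
  if [ -f "${HOST_ROOT}/etc/VERSION" ]; then
    # minikube's ISO ships /etc/VERSION containing the minikube version
    echo "minikube"
  else
    echo "generic"
  fi
}

detect_target
```

Without the extra volume mount, the file is simply absent inside the container, so a check like this can never succeed — which is why the daemonset change above matters.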
The daemonset also needs to have eBPF enabled, as otherwise it continues to fail. This can be done either by setting the env var on the Falco pod in the manifest:
```yaml
env:
  - name: FALCO_BPF_PROBE
```
or by enabling eBPF in the Helm values file:
```yaml
ebpf:
  # Enable eBPF support for Falco
  enabled: true
```
Once these two steps have been taken, the pod should reach the READY state:
```
NAME                READY   STATUS    RESTARTS   AGE
falco-falco-blfrb   1/1     Running   0          24m
```
How to reproduce it
Follow the documentation for creating a learning environment with minikube.
Expected behaviour
I feel others' thoughts may be mixed on this, but to me it doesn’t seem unreasonable to ask the user to specify the environment they are deploying to (e.g. GKE, minikube, kind, etc.) via an env var or a command-line argument. That way, there is no need to write or maintain a script that will inevitably break when the mechanisms it uses to guess its environment stop working.
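As a purely hypothetical sketch of that suggestion (Falco does not currently expose such a variable; `TARGET_ID` is invented here for illustration), an explicit override could short-circuit the guessing:

```shell
#!/bin/sh
# Hypothetical sketch of the proposal above. TARGET_ID is NOT an
# existing falco-driver-loader variable; it illustrates letting the
# user pin the environment instead of relying on auto-detection.
resolve_target() {
  if [ -n "${TARGET_ID:-}" ]; then
    echo "${TARGET_ID}"   # explicit user choice wins
  else
    echo "autodetect"     # fall back to the current guessing logic
  fi
}

resolve_target
```

With this shape, an operator deploying to minikube would only have to set one env var on the pod, and the guessing logic would never run.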
Environment
- Falco version: 0.31.1
- Cloud provider or hardware configuration:
- OS: Minikube (v1.25.0 - commit: 3edf4801f38f3916c9ff96af4284df905a347c86)
- Installation method: Helm on Kubernetes
About this issue
- State: closed
- Created 2 years ago
- Reactions: 3
- Comments: 23 (18 by maintainers)
Minikube ships its own pre-built Falco driver (see https://github.com/kubernetes/minikube/pull/6560), since it is impossible to build the driver on the fly in Minikube: it provides neither a compiler nor kernel headers.
Unfortunately, the last update in Minikube was two years ago 👇 https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso/package/falco-module
The driver version they ship should work up to Falco 0.30.x, but not with 0.31.1. We should open a PR in minikube to fix it.
Hi @leogr, here are my findings on Falco and Minikube.
The issue is still present even with Falco 0.32.0. That’s because the latest version of minikube, v1.25.2, ships with the 85c88952b018fdbce2464222c3303229f5bfcfad version of the Falco kernel module. It works fine with Falco 0.31.0 but not with later versions of Falco. Minikube developers have already bumped the version of Falco to 0.31.1 (https://github.com/kubernetes/minikube/commit/69fb8c243256d407402d754bfa562a38aa794129), but we need to wait for the next release of Minikube for that. There are two options in order to use the latest version of Falco with Minikube:
@alacuku Thank you! 🙏
Please put `Fixes https://github.com/falcosecurity/falco/issues/1941` in the falco-website PR you will open, so we both track it and automatically close this issue once you are done with the docs.
Not sure what is going on. The module is installed on the host, so it is still present after pods get unscheduled. The bug was that 0.31.1 was not able to upgrade the module; 0.32.0 fixed the issue.
For pods in the restart loop, I guess that for some reason the driver is not found on our DBG and the falco-driver-loader script can’t build it on the fly. Could you provide some logs?
Anyway, I think yours is a different problem. It would be better to open a dedicated issue.
Note that 0.31.1 has a newer driver (i.e. kernel module) version than 0.31.0.
AFAIK, the problem arises when an old driver is already installed and loaded. Basically, when one previously installed an older version and then installs 0.31.1, the old driver remains up and running, and `Runtime error: Kernel module does not support PPM_IOCTL_GET_API_VERSION. Exiting.` is returned. The workaround is to uninstall the old driver manually before installing 0.31.1.
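A hedged sketch of that manual workaround, assuming the default module name `falco` (`falco-driver-loader` is the script shipped with Falco; the check is factored out so it works on any `lsmod`-style input):

```shell
#!/bin/sh
# Sketch of the manual workaround: unload a stale "falco" kernel
# module before installing 0.31.1. has_stale_falco_module reads
# lsmod-style output on stdin and succeeds if the module is loaded.
has_stale_falco_module() {
  awk '$1 == "falco" { found = 1 } END { exit !found }'
}

# On a real host (requires root):
#   if lsmod | has_stale_falco_module; then rmmod falco; fi
#   falco-driver-loader   # then reinstall the matching driver
```

After unloading the stale module, the newly installed driver can be loaded and the `PPM_IOCTL_GET_API_VERSION` mismatch no longer occurs.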
PS
Falco 0.32.0 (not yet released) will come with a fix that forces the driver uninstallation when upgrading to a newer version.
This does not affect only minikube; I hit it on Ubuntu 18.04 as well.
Downgrading to 0.31.0 restores functionality.