falco: Connection error running Falco on Azure (aks-engine) master nodes

Describe the bug

Falco pods running on Kubernetes master nodes don’t work in clusters created with Azure aks-engine.

This is the error shown in the Falco logs of those pods:

[...]
2019-12-13T10:24:45+0000: Loading rules from file /etc/falco/falco_rules.yaml:
2019-12-13T10:24:56+0000: Runtime error: Socket handler (k8s_pod_handler_state), error 5 (Connection reset by peer) while connecting to kubernetes.default.svc.cluster.local:443. Exiting.

The root cause is a known limitation of the current Kubernetes + Azure implementation that prevents connections from the master nodes to the Kubernetes API server: https://github.com/Azure/aks-engine/issues/622.
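
Falco learns the API server address from the URL passed to its -k/--k8s-api flag, which is why the socket handler above is dialing kubernetes.default.svc.cluster.local. Until the Azure side is fixed, one possible mitigation is to point -k at an endpoint the masters can actually reach. A minimal, untested sketch of the relevant DaemonSet excerpt follows; the image tag matches the reported version, and the FQDN is a placeholder:

# Hypothetical excerpt of the Falco DaemonSet container spec.
# The URL passed to -k is the address the k8s socket handler connects to;
# substituting one that master nodes can reach is a possible workaround,
# not a confirmed fix.
containers:
  - name: falco
    image: falcosecurity/falco:0.18.0
    args:
      - /usr/bin/falco
      - -K
      - /var/run/secrets/kubernetes.io/serviceaccount/token  # service account bearer token
      - -k
      - https://<apiserver-fqdn-reachable-from-masters>      # placeholder endpoint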

How to reproduce it

  1. Create a cluster with 3 master nodes with aks-engine.
  2. Deploy Falco as a DaemonSet so it runs on all nodes, including masters (see the scheduling sketch after this list).
  3. Pods running on the master nodes will crash.
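
For step 2, the DaemonSet will only place pods on the masters if it tolerates the master taint. A minimal sketch of the scheduling stanza (the taint key below is the conventional one for this Kubernetes era; verify it against your cluster):

# Scheduling excerpt only; the rest of the DaemonSet spec is elided.
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master  # conventional master taint key
          operator: Exists
          effect: NoSchedule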

Expected behaviour

Pods running on the master nodes should work just as they would on any worker node.

Environment

  • Falco version: 0.18.0
  • Cloud provider or hardware configuration: Azure
  • Installation method: Kubernetes DaemonSet

Additional context

Even though this is an issue with the Kubernetes + Azure implementation, I’ve opened this issue to track it and to see whether anything can be done on Falco’s side.

Original report in Slack: https://sysdig.slack.com/archives/C19S3J21F/p1576233140152500

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 18 (8 by maintainers)

Most upvoted comments

Just a heads-up: after upgrading from Kubernetes 1.16.10 to 1.18.8, Falco started to work normally.