sonobuoy: Sonobuoy I/O timeout issues when using an IPv6 address in an air-gapped environment

What steps did you take and what happened: I have a remote, air-gapped Kubernetes setup that uses IPv6. The architecture is as follows:

The images to be used are hosted in a private registry running on a virtual machine. The Kubernetes setup is a simple single-master cluster with 2 worker nodes, and it was configured to use IPv6. When running Sonobuoy with images from the private registry, the application starts successfully but returns an i/o timeout error in the logs.
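For reference, pointing Sonobuoy at a private registry is typically done by overriding the image locations on the run command. A minimal sketch of what that looks like is below; the registry host and tags are placeholders, not the exact values used in this setup:

    sonobuoy run --kubeconfig $HOME/bin/a-ipv6-k8s_config \
      --sonobuoy-image <private-registry>/sonobuoy/sonobuoy:v0.18.0 \
      --kube-conformance-image <private-registry>/conformance:v1.16.3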

What did you expect to happen: Sonobuoy would pull the images from the private registry and run successfully.

Anything else you would like to add:

[2020-06-29 12:15:53.498] [root@opm bin]# sonobuoy logs --kubeconfig $HOME/bin/a-ipv6-k8s_config
[2020-06-29 12:15:56.163] namespace="sonobuoy" pod="sonobuoy" container="kube-sonobuoy"
[2020-06-29 12:15:56.163] time="2020-06-29T04:14:37Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
[2020-06-29 12:15:56.163] time="2020-06-29T04:14:37Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
[2020-06-29 12:15:56.163] time="2020-06-29T04:14:37Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
[2020-06-29 12:15:56.163] time="2020-06-29T04:14:37Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
[2020-06-29 12:15:56.163] time="2020-06-29T04:14:37Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
[2020-06-29 12:15:56.163] time="2020-06-29T04:15:07Z" level=error msg="could not get api group resources: Get https://[<ipv6_address>]:443/api?timeout=32s: dial tcp [<ipv6_address>]:443: i/o timeout"
[2020-06-29 12:15:56.163] time="2020-06-29T04:15:07Z" level=info msg="no-exit was specified, sonobuoy is now blocking"
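The timeout above is on the in-cluster path from the Sonobuoy aggregator pod to the API server. A quick, generic way to check whether any pod in the sonobuoy namespace can reach the API service at all is shown below; the pod name and curl image are placeholders, so substitute whatever is mirrored in the private registry:

    # even a 401/403 response proves the network path works; only a timeout indicates a connectivity problem
    kubectl --kubeconfig $HOME/bin/a-ipv6-k8s_config -n sonobuoy run net-test \
      --image=<private-registry>/curl --restart=Never --rm -it -- \
      curl -k https://kubernetes.default.svc:443/version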

Environment:

  • Sonobuoy version: 0.17.2 and 0.18
  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

We've already created a related issue before, which can be found here. Can someone help me with this one? Thank you.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 25 (11 by maintainers)

Most upvoted comments

Hi @johnray21216. Given that Sonobuoy is able to connect to the API server and run the first time, nothing fundamentally changes with a second run that would prevent it from communicating.
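For completeness, the usual sequence between runs is just the standard cleanup and re-run; nothing below is specific to this cluster beyond the kubeconfig path:

    sonobuoy delete --kubeconfig $HOME/bin/a-ipv6-k8s_config --wait
    sonobuoy run --kubeconfig $HOME/bin/a-ipv6-k8s_config
    sonobuoy status --kubeconfig $HOME/bin/a-ipv6-k8s_config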

As I have stated in our conversations on Slack, I strongly suspect this is an issue with Calico, based on the logs stating that it will drop all traffic for the service account. Yes, you may have configured Calico correctly; however, there could still be a bug there. If you haven't already, I would recommend either reaching out to them on their Slack or Discourse, or opening an issue in their project. (Links here: https://www.projectcalico.org/community/).
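Before filing with Calico, a quick way to see which policies could be dropping that traffic is to list both the Kubernetes NetworkPolicies and the Calico-specific ones; this assumes calicoctl is installed and configured on the node:

    # policies visible to Kubernetes itself
    kubectl get networkpolicy --all-namespaces

    # Calico-specific policies that kubectl does not show
    calicoctl get globalnetworkpolicy -o wide
    calicoctl get networkpolicy --all-namespaces -o wide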

Yes, it's a flag that you use with "sonobuoy run". It sets an imagePullPolicy of Always on all the Sonobuoy pods, which forces the kubelet to pull the images each time. It's the only way Sonobuoy can influence whether the image is pulled before starting the pods.
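If the exact flag is hard to track down, the same effect can be confirmed by generating the manifest first and checking the pull policy before applying it; this is a generic sketch using standard Sonobuoy and kubectl commands, not the specific flag referenced above:

    sonobuoy gen --kubeconfig $HOME/bin/a-ipv6-k8s_config > sonobuoy.yaml
    # confirm or edit the imagePullPolicy fields (Always forces a pull on every pod start)
    grep -n imagePullPolicy sonobuoy.yaml
    kubectl apply -f sonobuoy.yaml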