test-infra: kubetest/kind: odd bug caused by enabling `--loglevel=debug` for kind

i’m seeing something very odd in the kind deployer for kubetest.

this PR: https://github.com/kubernetes/test-infra/pull/11379/files

is causing:

2019/02/20 16:00:32 kind.go:336: kind.go:IsUp()
2019/02/20 16:00:32 kind.go:457: kind.go:KubectlCommand()
2019/02/20 16:00:32 process.go:153: Running: kubectl get nodes --no-headers
The connection to the server localhost:8080 was refused - did you specify the right host or port?
2019/02/20 16:00:32 process.go:155: Step 'kubectl get nodes --no-headers' finished in 77.854404ms

2019/02/22 05:07:22 main.go:307: Something went wrong: encountered 2 errors: [error starting kubectl --match-server-version=false version: exec: already started error during kubectl --match-server-version=false get nodes -oyaml: exit status 1]

https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-kind-master/311

can be reproduced locally using:

./kubetest --provider=skeleton --deployment=kind --kind-binary-version=build --kind-config-path=./../../k8s.io/kubeadm/tests/e2e/kind/single-cp/single-cp.yaml --build=bazel --up --check-version-skew=false --ginkgo-parallel --timeout=30m

investigating…

/area kubetest
/kind bug
/assign
cc @BenTheElder @krzyzacy

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 16 (16 by maintainers)

Most upvoted comments

yeah, tryUntil also needs us to fix commands potentially hanging indefinitely (another thing I want to change at the exec package level). thanks for hunting this down and patching it 🙏
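for reference, a minimal sketch of what bounding a hanging command could look like, using plain os/exec with a context deadline — this is not the actual kind exec package API, just an illustration of the idea:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runWithTimeout runs a command and kills it when the deadline expires,
// so a hung child process can no longer block the caller forever.
func runWithTimeout(timeout time.Duration, name string, args ...string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	// CommandContext terminates the process once the context is done.
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return out, fmt.Errorf("%s timed out after %v", name, timeout)
	}
	return out, err
}

func main() {
	out, err := runWithTimeout(30*time.Second, "kubectl", "get", "nodes", "--no-headers")
	fmt.Printf("output: %s err: %v\n", out, err)
}
```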

the issue is in the kubeconfig path after all:

~/.kubetest/kind/kind get kubeconfig-path --loglevel=debug --name=kind-kubetest
DEBU[07:23:35] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}] 
/home/lubo-it/.kube/kind-config-kind-kubetest

--loglevel=debug adds extra log output, which pollutes the kubeconfig path that kubetest reads from this command.
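roughly, the failure mode looks like this (a hypothetical sketch, not the actual kind.go code in kubetest): the deployer shells out to `kind get kubeconfig-path` and treats the whole captured output as a file path, so any extra log line turns the "path" into garbage and kubectl falls back to localhost:8080:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getKubeConfigPath is a hypothetical stand-in for what the kind deployer
// does: run `kind get kubeconfig-path` and use the trimmed output as a path.
func getKubeConfigPath(kindBinary, clusterName string) (string, error) {
	out, err := exec.Command(kindBinary, "get", "kubeconfig-path",
		"--loglevel=debug", "--name="+clusterName).CombinedOutput()
	if err != nil {
		return "", err
	}
	// Assumes the output is exactly the path and nothing else -- with
	// --loglevel=debug a debug line lands in the same captured output
	// and this no longer returns a valid path.
	return strings.TrimSpace(string(out)), nil
}

func main() {
	path, err := getKubeConfigPath("kind", "kind-kubetest")
	fmt.Println(path, err)
}
```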

diff --git a/pkg/exec/local.go b/pkg/exec/local.go
index 088438e..a8c2f4f 100644
--- a/pkg/exec/local.go
+++ b/pkg/exec/local.go
@@ -19,8 +19,7 @@ package exec
 import (
        "io"
        osexec "os/exec"
-
-       log "github.com/sirupsen/logrus"
+       // log "github.com/sirupsen/logrus"
 )
 
 // LocalCmd wraps os/exec.Cmd, implementing the kind/pkg/exec.Cmd interface
@@ -64,6 +63,6 @@ func (cmd *LocalCmd) SetStderr(w io.Writer) {
 
 // Run runs
 func (cmd *LocalCmd) Run() error {
-       log.Debugf("Running: %v %v", cmd.Path, cmd.Args)
+       // log.Debugf("Running: %v %v", cmd.Path, cmd.Args)
        return cmd.Cmd.Run()
 }

this patch fixes the bug that i’m seeing with --loglevel=debug.
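a longer-term option (just an idea on my side, not what the patch above does) would be to keep log output off the stream that callers parse, e.g. pointing logrus at stderr so stdout only ever carries the kubeconfig path:

```go
package main

import (
	"fmt"
	"os"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Route all log output to stderr so it can never mix into stdout.
	log.SetOutput(os.Stderr)
	log.SetLevel(log.DebugLevel)
	log.Debugf("Running: %v", []string{"docker", "ps", "-q", "-a"})

	// stdout carries only the value callers want to parse.
	fmt.Println("/home/lubo-it/.kube/kind-config-kind-kubetest")
}
```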

whoah, uhhh. what?

this is the weird part. the problem was introduced with PR https://github.com/kubernetes/test-infra/pull/11379/files

i can reproduce this locally.

how is kind#55 related?

the user there mentioned that toggling the log level fixes a cluster creation bug.

sounds like a race / memory corruption / something else. i will now build a custom kind binary and feed it into a custom kubetest to check something.

was the kubeconfig path set?

yes.

i’ve added this call to double check:

log.Printf("kind.go:getKubeConfigPath(): %s\n", string(o))

but i'm seeing this, which looks like a mix of kind and kubetest output:

2019/02/22 05:50:59 kind.go:172: kind.go:getKubeConfigPath(): time="05:50:58" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}]" /home/lubo-it/.kube/kind-config-kind-kubetest
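a defensive workaround on the kubetest side (sketch only, under the assumption that the path is always the last non-empty line of whatever kind prints) would be to stop treating the whole output as the path and take the last line instead:

```go
package main

import (
	"fmt"
	"strings"
)

// lastNonEmptyLine returns the last non-empty line of out, so log noise
// printed before the kubeconfig path is ignored.
func lastNonEmptyLine(out string) string {
	lines := strings.Split(out, "\n")
	for i := len(lines) - 1; i >= 0; i-- {
		if l := strings.TrimSpace(lines[i]); l != "" {
			return l
		}
	}
	return ""
}

func main() {
	o := "level=debug msg=\"Running: /usr/bin/docker ...\"\n/home/lubo-it/.kube/kind-config-kind-kubetest\n"
	fmt.Println(lastNonEmptyLine(o)) // prints only the kubeconfig path
}
```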