gotestsum: Passing tests reported as failures in summary

We are using gotestsum to run our tests for Knative, and digging into some recent “failures”, I can’t make sense of why gotestsum reports them as failed.

The failure log I have been digging through can be downloaded from here, the file you want is “build-log.txt” (it is large).

In this test run I see the following summary:

=== Skipped
=== SKIP: test/e2e TestProbeWhitelist (0.00s)
    probe_whitelist_test.go:38: RequestAuthentication dropped the regex support. We need to rewrite this test. See https://github.com/istio/istio/issues/16585


=== Failed
=== FAIL: test/conformance/api/v1alpha1 TestContainerExitingMsg/http (unknown)

=== FAIL: test/conformance/api/v1alpha1 TestContainerExitingMsg/tcp (unknown)

However, if I scan back through the test logs, I see that all of the flavors of TestContainerExitingMsg have passed, here’s one cluster:

--- PASS: TestContainerExitingMsg (0.00s)
    --- PASS: TestContainerExitingMsg/http (40.29s)
    --- PASS: TestContainerExitingMsg/tcp (172.49s)

I’ve been trying to make sense of the pieces of information here. So far, all I’ve managed to work out is that (unknown) means the elapsed time is unknown, which seems to point to a TestCase terminated by the end() method here.

Any pointers would be appreciated. 🙏

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 4
  • Comments: 20 (7 by maintainers)

Most upvoted comments

I’m seeing a similar issue; here is the info:

╰─ cat main_test.go
package main

import (
        "fmt"
        "testing"
)

func TestSth(t *testing.T) {
        fmt.Printf("%s", "OK")
}
╰─ go test -v ./
=== RUN   TestSth
OK--- PASS: TestSth (0.00s)
PASS
ok      github.com/fatih/vim-go-tutorial        0.548s
╰─ gotestsum -f testname --debug ./
exec: [go test -json ./]
go test pid: 63519
PASS . (cached)
=== RUN   TestSth
OK--- PASS: TestSth (0.00s)
FAIL TestSth (-1.00s)

=== Failed
=== FAIL: . TestSth (unknown)
OK--- PASS: TestSth (0.00s)


DONE 1 tests, 1 failure in 0.170s

--- PASS is not on a new line.

When I change the output line in the test case from fmt.Printf("%s", "OK") to fmt.Printf("%s\n", "OK"), everything works fine, so maybe gotestsum can handle this situation.

In my case, I managed to fix the passing-tests-but-reported-as-failed issue by printing a newline after reading from the stdin buffer:

func UserConfirmedViaTerminal(in *bufio.Scanner, ask string) bool {
	fmt.Println(ask)
	fmt.Print("[y/N]: ")
	if err := RequireScan(in); err != nil {
		return false
	}
	answer := strings.TrimSpace(in.Text())
	answer = strings.ToLower(answer)

	// Necessary to let `gotestsum` (and dependencies) parse the test output properly, and not mis-report as failure
	// see https://github.com/gotestyourself/gotestsum/issues/141
	fmt.Println()

	return answer == "y" || answer == "yes"
}

as otherwise output was “corrupted”.

Before:

=== RUN   TestPulumiRefresh/Resolves_pending_operations_properly
[refresh] Do you want to apply to all these stacks?
[y/N]: --- PASS: TestPulumiRefresh (0.00s)

[...]

=== Failed
=== FAIL: local/pulumi TestPulumiRefresh (unknown)

[...]
DONE 15 tests, 4 failures in 0.309s

After:

=== RUN   TestPulumiRefresh
=== RUN   TestPulumiRefresh/Resolves_pending_operations_properly
[refresh] Do you want to apply to all these stacks?
[y/N]:
--- PASS: TestPulumiRefresh (0.00s)

[...]
DONE 15 tests in 2.093s

I’ve worked with some projects that have a similar problem with logs and output. If you aren’t running tests in parallel you can set the global logger to write to a buffer, and only print logs from the buffer when a test ends (or when a test fails). Then reset the logger back to the default afterward.

func TestSomething(t *testing.T) {
    patchLogger(t)
    ....
}

func patchLogger(t *testing.T) {
    orig := log.Writer()
    buf := new(bytes.Buffer)
    log.SetOutput(buf)

    t.Cleanup(func() {
        // optionally check t.Failed here if you only want to print logs on failure
        
        t.Log(buf.String())
        log.SetOutput(orig)
    })
}

Another option would be to always set the log output to ioutil.Discard (io.Discard since Go 1.16), although that might make test failures harder to debug. I’m not sure whether setting the output to os.Stderr would help, or whether it has the same problem.

It would be nice if the Go toolchain changed how test binaries write framing events. If they were sent to a file or pipe, then writes to stdout/stderr could not interfere with the test results.

Perfect! Here’s what I’m seeing:

$ grep TestContainerExitingMsg logs_ci-knative-serving-contour-latest_1292985076908298240_artifacts_json_yGQGuuT2.txt | grep '"Action":"pass"'
{"Time":"2020-08-11T01:03:41.641597357Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1","Test":"TestContainerExitingMsg","Elapsed":0}
{"Time":"2020-08-11T01:04:01.649236258Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1beta1","Test":"TestContainerExitingMsg/tcp","Elapsed":27.63}
{"Time":"2020-08-11T01:04:01.649252188Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1beta1","Test":"TestContainerExitingMsg/http","Elapsed":160.37}
{"Time":"2020-08-11T01:04:01.649257987Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1beta1","Test":"TestContainerExitingMsg","Elapsed":0.01}
{"Time":"2020-08-11T01:05:09.935705543Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1alpha1","Test":"TestContainerExitingMsg/http","Elapsed":179.23}
{"Time":"2020-08-11T01:05:09.985943885Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1alpha1","Test":"TestContainerExitingMsg/tcp","Elapsed":194.07}
{"Time":"2020-08-11T01:05:09.985992005Z","Action":"pass","Package":"knative.dev/serving/test/conformance/api/v1alpha1","Test":"TestContainerExitingMsg","Elapsed":0}

For some reason, test2json emitted Output events showing the tests passing, but it never sent an Action: pass event for those 2 subtests in the knative.dev/serving/test/conformance/api/v1 package.

From what I can tell, this is a new test2json bug that I haven’t seen before, maybe related to parallel tests. If it is not happening on every run, it may have to do with the ordering of the tests. I haven’t checked the Go issue tracker to see whether this has been reported already.
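For what it’s worth, the framing requirement can be observed directly by feeding raw test output to go tool test2json (assuming a Go toolchain is on PATH; TestSth is just a placeholder name):

```shell
# Well-formed framing: "--- PASS" starts at column zero, so test2json
# emits an explicit {"Action":"pass","Test":"TestSth",...} event.
printf '=== RUN   TestSth\n--- PASS: TestSth (0.00s)\nPASS\n' | go tool test2json

# Mis-framed: "--- PASS" is glued to earlier output ("OK"), so the line
# is parsed as plain test output and no pass event is emitted for TestSth.
printf '=== RUN   TestSth\nOK--- PASS: TestSth (0.00s)\nPASS\n' | go tool test2json
```

The second pipeline reproduces what tools like gotestsum see: a run event with no matching pass event, hence the “(unknown)” elapsed time.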