cluster-api: Test framework Gomega errors are not helpful

What steps did you take and what happened: I have a failing E2E test in CAPO. I do not yet know what the error is. The reported error is:

/home/prow/go/src/sigs.k8s.io/cluster-api-provider-openstack/test/e2e/suites/e2e/e2e_test.go:230
Timed out after 1800.001s.
Expected
    <bool>: false
to be true
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/controlplane_helpers.go:167

From this I do not know:

  • Which cluster failed
  • What was expected to be true

What did you expect to happen: I would like this error message to contain enough context to see at a glance what it means, and ideally additional context indicating what the cause might be.

For example:

Expected
  Kind: KubeadmControlPlane
  Name: <the name of the object I'm interested in>
  Spec:
    <full spec>
  Status:
    Ready: false
    FailureMessage: <quite possibly what I'm looking for>
    <rest of the status>
to match: Status.Ready = True

Anything else you would like to add: I haven’t tried it yet, but I expect that the above is something like what you’d get from:

Eventually(func() (*controlplanev1.KubeadmControlPlane, error) {
  controlplane := ...
  ...
  return controlplane, err
  // PointTo is needed because the function returns a pointer, and
  // MatchFields matches struct values.
}, intervals...).Should(PointTo(MatchFields(IgnoreExtras, Fields{
  "Status": MatchFields(IgnoreExtras, Fields{
    "Ready": BeTrue(),
  }),
})))

/kind bug /area testing

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 16 (14 by maintainers)

Most upvoted comments

I think the MatchFields trade-off is better, and I was touching some Gomega stuff in another repository with a similar issue literally yesterday and was thinking that something along these lines would be better. I think optimising for framework consumers is better at the slight cost of authors (sorry @sbueringer!).

Back to my comment

I agree there is room for improvement on error messages. Just wondering if there is a way to break the problems down into smaller, more actionable chunks.

I’m personally -1 to a systematic change; it is a massive amount of work and the value added is hard to measure upfront. I would suggest opening focused issues/PRs to change the things that make users’ lives hard (reacting to usability reports, or selecting top-down a small set of high-priority helpers to change).

I think I’ll go systematically over the test framework and then create a new issue with “sub-tasks” (good-first-issue/help-wanted).

(/cc @fabriziopandini)

WDYT?

Thanks for picking this up! Some thoughts:

The MatchFields output is much better, but I still think at a minimum it would need to identify the specific resource being matched, i.e. including the name. That’s going to point me to the right place in artifacts and logs. I haven’t seen the output of PrettyPrint, but assuming it dumps the entire object in a human readable format that sounds ideal in combination with MatchFields.

I also agree that the MatchFields code is uglier, but speaking selfishly as somebody who consumes this code as a library: it’s not making my tests any untidier. I think the trade-off is worth it.