ava: Interrupting test execution on first exception (through assertion extension API)

As of version 0.21.0, Ava’s built-in assertions do not influence the execution of the test body. Specifically, when an assertion is violated, the test continues to execute.

This behavior was previously reported as a bug (via gh-220). It was considered “fixed” (via gh-259) with a patch that modified Ava’s output.

This is a feature request for a behavior change: when an assertion fails, forcibly interrupt test execution via a runtime exception.

My use case is taking screenshots to assist in debugging integration test failures. I have been able to react to failing tests programmatically through a combination of the afterEach and afterEach.always methods. When a test fails, I would like to capture an image of the rendered application, as this can be very useful in identifying the cause of the failure (especially when the tests run remotely on a continuous integration server).
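For illustration, a minimal sketch of that pattern, assuming a hypothetical takeScreenshot helper: afterEach hooks only run when the test body succeeds, while afterEach.always hooks run unconditionally, so combining the two reveals whether the test failed.

```ts
import anyTest, { TestFn } from 'ava';

// Type the context so the hooks can record whether the body succeeded.
const test = anyTest as TestFn<{ passed?: boolean }>;

// Hypothetical helper that captures an image of the rendered application.
declare function takeScreenshot(name: string): Promise<void>;

// Only runs when the test body succeeded.
test.afterEach(t => {
	t.context.passed = true;
});

// Always runs; if `passed` was never set, the body must have failed.
test.afterEach.always(async t => {
	if (!t.context.passed) {
		await takeScreenshot(t.title);
	}
});
```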

Because the test body continues to execute following the failure, by the time the afterEach.always method is invoked, the rendered output may no longer reflect the state of the application at the moment of failure.

For unit tests, this might be addressed by making test bodies shorter and more direct. Reducing tests to contain only one meaningful interaction would avoid the effect described above. Because integration tests have high “set up” costs and because they are typically structured to model complete usage scenarios, this is not an appropriate solution for my use case.

Ava supports the use of general-purpose assertion libraries (e.g. Node.js’s built-in assert module), and I’ve found that because these libraries operate via JavaScript exceptions, they produce the intended results. In the short term, I am considering switching to one of these libraries. However, Ava’s built-in assertions have a number of advantages over generic alternatives. In addition, restricting the use of Ava’s API in my test suite will be difficult moving forward: even with documentation in place, contributors may not recognize that certain aspects of the API are considered “off limits”, especially since their usage does not directly affect test correctness.
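To make the difference concrete, a minimal sketch: the built-in assertion records a failure and lets the body continue, while the assert module throws and stops execution immediately.

```ts
import test from 'ava';
import assert from 'node:assert';

test('AVA assertion: failure is recorded, the body keeps running', t => {
	t.is(1 + 1, 3);         // fails, but does not throw
	console.log('reached'); // still executes
});

test('assert module: failure throws, the body stops', t => {
	assert.strictEqual(1 + 1, 3); // throws an AssertionError, failing the test
	console.log('reached');       // never executes
});
```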

I haven’t been able to think of a use case that would be broken by the change I am requesting. But if there is such a use case, then this behavior could be made “opt-in” via a command-line flag.

Thanks for your time, and thanks for the great framework!

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 23 (16 by maintainers)

Most upvoted comments

I’ve been using AVA for my latest test project, and this behavior took me by surprise. Interrupting execution on the first failed assertion is the common behavior of most test frameworks out there, and AVA is the first test framework I have encountered that does not do that.

In addition to the issues mentioned above, I would like to add: lack of proper documentation of such behavior. I could only find a related sentence in https://github.com/avajs/ava/blob/master/docs/03-assertions.md#assertions stating:

If multiple assertion failures are encountered within a single test, AVA will only display the first one.

which, in my opinion, is not enough to make clear that execution won’t be interrupted upon an assertion failure, and which could confuse people coming from other test frameworks.

In my specific case, this behavior is making my test suite slower. I assert after some explicit setup steps, and once one of those assertions fails, the rest of the test (including slow UI interactions and the remaining assertions) no longer makes sense, since it is destined to fail, so what’s the point of continuing to execute it? Arguably, you could say that my tests are designed incorrectly, and you may be right, but I feel I would have designed my tests differently if I had known about this behavior from the beginning.

That said, I have some bandwidth to try to fix this, so I’m open to discussing any approach you may already have.

I’ve filed https://github.com/avajs/ava/issues/2455 to have our assertions return booleans, which would go some way towards helping folks prevent “expensive” operations if a test has already failed.
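Assuming assertions do come to return booleans as proposed there, a test could bail out of expensive follow-up work along these lines (fetchSomething and runSlowUiChecks are hypothetical stand-ins):

```ts
import test from 'ava';

// Hypothetical application code.
declare function fetchSomething(): Promise<{ status: number }>;
declare function runSlowUiChecks(response: { status: number }): Promise<void>;

test('skip expensive steps once an assertion has failed', async t => {
	const response = await fetchSomething();

	// With avajs/ava#2455, the assertion returns whether it passed,
	// so the test can return early instead of running doomed steps.
	if (!t.is(response.status, 200)) {
		return;
	}

	await runSlowUiChecks(response); // only reached when the assertion passed
});
```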

Hey @gavinpc-mindgrub, no worries, and thanks.

Undoubtedly we could improve our reporter and make it clearer when assertions failed, and when logs were emitted, relative to each other. That should help with debugging.

What’s interesting about running tests is that they tend to pass! So while you need to make sure your test doesn’t pass when it should have failed, you don’t necessarily have to be defensive either. A null-pointer crash will still fail your test.

While I don’t think we should add a top-level configuration to change this behavior, I am experimenting with a way of creating more specialized test functions, which could be set up to fail immediately when an assertion fails. Keep an eye on #2435 for the low-level infrastructure work.

Hi @novemberborn, thanks for the reply.

First I want to apologize for the harsh tone that I used in my earlier post. It has been a long, sleepless week for me, and although I did not intend to be critical without being constructive, I see in retrospect that I was. Thank you, and the rest of the team, for your work on this useful project.

To answer your question, I would say that not knowing about this behavior cost me some considerable time, as @karimsa put it,

in order to figure out which logs actually led up to the failure

I was questioning much more basic aspects of reality, and how I could possibly be seeing what I was seeing, before realizing that the most recent log from a certain point was not the one associated with “the” failed assertion.

Now that I know it, of course, I can adjust by interpreting the output according to this understanding and, more often, by temporarily throwing as necessary to make this correlation more certain.

Moreover, I have reconsidered some aspects of my test-writing practice. In particular, I was using assertions in many places to, for example, check the validity of a response before looking at further aspects of it (as those further checks would be pointless against a missing object). Although our team has historically used assertions for such checks, I can see how a throw is more appropriate in such cases. When the condition is trivial (usually just a truthiness test), there’s no downside to losing the more refined comparison operations available on t.
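For instance, a guard along these lines throws instead of asserting, so the body stops immediately rather than running further checks against a missing object (getUser is a hypothetical stand-in):

```ts
import test from 'ava';

// Hypothetical application code.
declare function getUser(id: number): Promise<{ user?: { id: number; name: string } }>;

test('guard with a throw rather than an assertion', async t => {
	const response = await getUser(42);

	// A plain throw interrupts the test body immediately; the checks
	// below never run when the user is missing.
	if (!response.user) {
		throw new Error('expected the response to contain a user');
	}

	t.is(response.user.id, 42);
	t.is(response.user.name, 'Ada');
});
```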

In short, although I haven’t found a way to use the current behavior to advantage, it’s something we can live with, probably even without resorting to the “fail fast” option.

That said, we also use TypeScript heavily and would benefit from a corresponding set of assertions that did throw, if only to take advantage of type narrowing. I would welcome changes such as those being discussed in #2450 and others. We have some wrappers of our own for the most common cases, and I now understand that they didn’t in fact mean what they were saying.
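For context, a sketch of the kind of wrapper in question: a throwing helper with a TypeScript assertion signature, so the compiler narrows the type after the call. This is our own code, not part of AVA’s API (maybeGetUser is hypothetical):

```ts
// Hypothetical lookup that may come back empty.
declare function maybeGetUser(): { name: string } | undefined;

// A throwing assertion with a TypeScript `asserts` signature.
function assertDefined<T>(value: T | null | undefined, message: string): asserts value is T {
	if (value === null || value === undefined) {
		throw new Error(message);
	}
}

const user = maybeGetUser(); // user: { name: string } | undefined
assertDefined(user, 'expected a user');
user.name; // `user` is narrowed to { name: string } from here on
```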

Thanks again!

This also makes tests more difficult to debug using logs. In my case, I’ve got all events being logged (such as jobs being executed), and those continue to run even after an assertion fails. So in order to figure out which logs actually led up to the failure, I have to insert a throw or return to forcefully stop the test right after the failure, which is quite annoying.

To recap: assertion failures cannot be observed by the test implementation. For tests that repeatedly have to wait on a slow operation, this means that the test run takes longer than strictly necessary.

My proposal is to have a way to configure AVA so it throws an otherwise irrelevant exception when an assertion fails, thus preventing the remainder of the test implementation from executing. I don’t want to default to this or make it seem like something you should enable to make tests “run faster”. We’re also interested in reporting multiple assertion failures.

#1692 proposes a new feature whereby tests can observe the results of their assertions. @jugglinmike’s WebDriver problem could be solved by writing an AVA-specific adapter that uses this feature. I’d like to see how that works before implementing the proposal I suggested in this thread.

(Potentially, rather than hooking into an event we’d let you configure t.try() with this behavior.)
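For reference, a sketch of how an attempt could make assertion results observable, assuming a t.try() whose result reports whether the attempt passed before its assertions are committed to the test:

```ts
import test from 'ava';

test('observe assertion results with t.try()', async t => {
	// Run assertions in an isolated attempt and inspect the outcome.
	const attempt = await t.try(tt => {
		tt.is(1 + 1, 3);
	});

	if (!attempt.passed) {
		// React while the failure is fresh, e.g. capture a screenshot.
	}

	// Record the attempt's assertions (including failures) on the test.
	attempt.commit();
});
```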