jest: Support failing a test if no assertions are run
Do you want to request a feature or report a bug?
Feature
What is the current behavior?
A bad translation from mocha -> Jest had tests that looked like this:
describe('File Parsing for expects', () => {
  it('For a danger test file', async () => {
    const parser = new TestFileParser();
    await parser.run(`${fixtures}/dangerjs/travis-ci.example`);
    expect(parser.expects.length, 2);
  });
});
Which passes, but shouldn’t.
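For comparison, the assertion the test presumably meant to make, written with Jest's matcher syntax, would be along these lines:

```js
// Chaining a matcher is what actually registers an assertion;
// expect(value, 2) on its own never invokes one.
expect(parser.expects.length).toBe(2);
```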
This can also happen with promises and any async code being tested.
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal repository on GitHub that we can npm install and npm test.
What is the expected behavior?
Offer a config variable that fails your test if no assertions are used during the test run. IMO default to on too.
Run Jest again with --debug and provide the full configuration it prints. Please mention your node and npm version and operating system.
N/A - but was running on master, node 7, npm 3.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 9
- Comments: 35 (24 by maintainers)
Got it. It would be nice if a capability like this were part of the API. I genuinely want to assert that at least one assertion is called after each test, and this seems like a perfectly valid use-case.

Following up on this one more time.
@mpacary thanks for following, but this isn’t what I need.
After much trial and error, and debugging some unsavory bugs, I have fallen back to patching the Jest API to do the thing I wanted (and what the issue is asking for). As I speculated before, I had to add a “plugin” package of sorts to perform runtime checks for missing assertions in tests. See the plugin here - https://github.com/ballercat/jest-plugin-must-assert
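For anyone wanting to try that plugin, it appears to be wired in through Jest's setupFilesAfterEnv option; a minimal sketch, with the plugin's README as the authoritative reference for the exact setup:

```js
// jest.config.js (sketch; assumes jest-plugin-must-assert is installed)
module.exports = {
  setupFilesAfterEnv: ['jest-plugin-must-assert'],
};
```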
I pointed that thing at a repo with 10k+ tests and found some interesting results: any conditional assertion is dangerous (async or not) and results in false positives. For example:
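The original snippet didn't survive this copy, but a minimal sketch of the shape being described (hypothetical collection and field names) looks like this:

```js
it('validates every parsed item', () => {
  collection.forEach(item => {
    // Only runs when collection is non-empty; with an empty collection
    // there are zero assertions, yet the test still passes.
    expect(item.isValid).toBe(true);
  });
});
```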
This kind of conditional assertion is a super common mistake to make. It's a huge bummer too, since no async logic is even involved. If the collection is empty, no assertion runs, but the test passes.

My final two cents:
Some of the hoops one has to jump through to get this sort of feedback from Jest/the test runner are a bit extreme. I had to, for example, use the Zone.js library to ensure Promises/setTimeouts did not register assertions after a test had completed.
I would love for the Jest team to consider some way to improve the integration of the assertion library with the actual test runner. The global nature of expect is kind of a mess. I think that might be a byproduct of building on top of Jasmine to begin with, but whatever. Honestly, if expect were passed as a callback to the tests themselves, this would be trivial to implement/support (e.g. avajs).

Thanks for everyone's help!
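For reference, this is roughly the avajs model being pointed at: the assertion object is passed into each test's callback, so the runner always knows which test an assertion belongs to.

```js
// ava-style test; t carries the assertions and is scoped to this test
const test = require('ava');

test('parses two expects', t => {
  t.is(1 + 1, 2);
});
```

Because assertions go through the per-test t object, a runner built this way can count them and fail a test that made none without any global bookkeeping.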
There’s expect(() => RenderMyApp()).not.toThrow(), right? It’s verbose, but I’m not sure how common that kind of thing is. Ensuring each test makes an assertion, even if that behavior is behind a flag, would be really valuable for async-heavy codebases.

One issue with the afterEach(() => expect.hasAssertions()) approach is that when an exception gets thrown in a test you really only want that exception reported, but because afterEach and beforeEach callbacks run regardless of exceptions being thrown in tests, you also get additional unhelpful noise about no assertions having been made.

For instance, given a suite like the following:
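The suite itself isn't reproduced above, but a minimal sketch of the scenario being described would be:

```js
describe('report parsing', () => {
  // Require every test in this suite to make at least one assertion.
  afterEach(() => {
    expect.hasAssertions();
  });

  it('reads the fixture', () => {
    // Throws before any expect() is reached...
    const report = JSON.parse('not valid json');
    expect(report.items.length).toBe(2);
  });
});
```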
The output looks like:
The last reported error about missing assertions is unnecessarily noisy.
It would be nice to be able to do something like the following:
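That snippet isn't preserved in this copy either; a purely hypothetical sketch of the idea, i.e. only enforcing the check when the test body didn't already fail, might be (currentSpecFailed is made up and not a real Jest API):

```js
afterEach(() => {
  // Hypothetical helper; skip the missing-assertions check when the test
  // already failed, so a thrown error isn't followed by a second failure.
  if (!currentSpecFailed()) {
    expect.hasAssertions();
  }
});
```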
I do understand wanting to keep details about the currently running spec hidden from the test file, though, so perhaps something like this could be considered?
Actually, calling expect.hasAssertions() in afterEach seems to do the trick. Is this expected? That one can call assertions in beforeEach / afterEach statements? If so, can assertions be called in beforeAll and afterAll statements?

The static analysis answer is popular in this issue report, but it's missing the forest for the trees. The crux of the problem is that static analysis via ESLint is sub-par for catching runtime issues.
As an example, the documented “correct” use of expect under the jest/expect-expect rule demonstrates what I'm talking about. Yes, there is an expect here, but this expect should never execute because the promise is not returned. I/we use these rules all the time, but they do not catch logic issues in non-trivial tests, which is why we need a broader runtime solution.
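The referenced example isn't reproduced here, but the shape of the problem is an assertion inside a promise chain that is never returned or awaited (fetchData is a placeholder name):

```js
it('loads data', () => {
  // Satisfies the lint rule: there is an expect() in the test body.
  // But the promise is neither returned nor awaited, so the test ends
  // immediately and the assertion may never run.
  fetchData().then(data => {
    expect(data.items).toHaveLength(2);
  });
});
```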
If the answer to a global opt-in for this is a plain “no”, then it is what it is. What would be an alternative option? There are test runners which provide this config (ava), but switching is not an option. I've been looking into setupFilesAfterEnv; is that a possible solution for extending the functionality to support this globally? Maybe via a plugin of some kind?
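As a sketch of that direction: setupFilesAfterEnv files run after the test framework is installed, so a project-local setup file registered there can add a global afterEach check (file path is illustrative):

```js
// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/test-setup/must-assert.js'],
};

// test-setup/must-assert.js
// Runs for every test file in the project, not just ones that opt in.
afterEach(() => {
  expect.hasAssertions();
});
```

This inherits the noisy-reporting downside discussed above when a test throws before asserting.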
From where I’m standing it seems like we just need a way to detect that a test finished executing without triggering any assertions, and to fail that test. But again, without an explicit expect.assertions() etc. in each test. Should be do-able.

Is there any interest from the Jest maintainers in a failWithoutAssertions configuration being added to the main config? A runtime check for missing assertions is very useful, specifically for misconfigured asynchronous tests where an assertion runs after the test has completed. This is a common mistake made by folks new to async testing.

It seems like an optional setting in the config would help those of us who could really benefit from it. Working with large teams makes it kind of difficult to enforce that everyone uses expect.assertions, and the lint rule is not good enough for the example in the issue itself.

More than happy to help if that change would be welcomed.
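Expressed as config, the proposal would look something like this; failWithoutAssertions does not exist in Jest, the name simply echoes the ava behavior mentioned earlier in the thread:

```js
// jest.config.js (hypothetical; this option is the proposal, not a real setting)
module.exports = {
  failWithoutAssertions: true,
};
```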
That is awesome - I’d like every it to check that number to be > 0 at the end of the test.

Probably yes (although there may be some cases where lint doesn’t quite get it right, such as using a helper that does the expect; see the sketch below). I’d be cautious about adding more coupling between the test runner and the assertion library; I think we already have too much of that.

Wouldn’t it fit better as an eslint rule?
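For illustration, the helper case mentioned above might look like this (names are made up); a purely lexical lint rule can't tell whether the helper actually asserts unless it is told about it:

```js
// Shared helper that performs the real assertion:
function expectValidUser(user) {
  expect(user).toMatchObject({ id: expect.any(Number) });
}

it('creates a user', () => {
  // No literal expect() call in the test body, so jest/expect-expect has to
  // be configured with this helper's name to see an assertion here.
  expectValidUser(createUser());
});
```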
Actually, the following situation also leads to this noisy reporting, so this is a more general problem with hasAssertions. Is this something that you’d consider a UX issue?

That’s one of the things I dislike about tap… 😆 end should only be needed in callbacks, and plan should really only be used if you assert in catch clauses or loops, for instance. IMO it just adds noise to the test, making me read more to see what’s actually being tested.

Hmm, this makes sense, but we do have a legit use case here of making sure something doesn’t fail, so I don’t wanna make it more restrictive.
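For context, the end/plan pattern being referred to comes from the tap/tape family of runners and looks roughly like this:

```js
const test = require('tape');

test('adds numbers', t => {
  t.plan(1);         // declare the expected number of assertions up front
  t.equal(1 + 1, 2);
  // Without t.plan(), this test would need an explicit t.end() instead.
});
```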