kotlinx.coroutines: The first invocation of mockk exceeds the `runTest` default timeout
Run Test Timeout Issues
The new 10s `runTest` timeout, introduced by https://github.com/Kotlin/kotlinx.coroutines/pull/3603, is currently causing a lot of issues.
There are various discussions about it here:
- https://github.com/Kotlin/kotlinx.coroutines/issues/3270
- https://kotlinlang.slack.com/archives/C1CFAFJSK/p1688240798449439
It basically boils down to this: users of ByteBuddy (for instance, mockk users) face a very long initialization phase on the very first mock creation.
Example
On a Mac M1 Max, this test:
```kotlin
import io.mockk.mockk
import java.util.UUID
import kotlin.test.Test
import kotlin.time.measureTime

@Test
fun myMockingTest() {
    // The first mockk call installs the ByteBuddy agent,
    // which dominates the measured time.
    measureTime {
        mockk<UUID>()
    }.also {
        println("That took $it")
    }
}
```
takes about 1.5 seconds on my machine. CI machines are often slower than that and might run compilations in parallel or even run multiple projects at once using virtualization. This has led to a lot of sporadically failing CI jobs for tests that don't do much besides switching to an IO dispatcher to write to a test database.
Proposed Solution
I proposed several solutions here: https://kotlinlang.slack.com/archives/C1CFAFJSK/p1688326989830229?thread_ts=1688240798.449439&cid=C1CFAFJSK
With my preferred one being:
Making the test timeout configurable through a global option (e.g. an env variable that can be set for all tests from Gradle).
That way, tests running locally could still time out quickly, but depending on the setup, developers could configure a longer timeout on CI, where they know that the machine is actually slow.
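A minimal sketch of what that could look like from the build, assuming the library reads an environment variable (the variable name below is an assumption; the commits linked under this issue only say that the merged fix reads the timeout via getenv):

```kotlin
// build.gradle.kts -- a sketch, not a confirmed API: pass a longer timeout
// to every test JVM on CI and keep a shorter one locally.
tasks.withType<Test>().configureEach {
    // "kotlinx.coroutines.test.default_timeout" is an assumed variable name.
    environment(
        "kotlinx.coroutines.test.default_timeout",
        if (System.getenv("CI") != null) "2m" else "10s"
    )
}
```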
About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 14
- Comments: 23 (9 by maintainers)
Commits related to this issue
- Rollback runTest timeout to 60 seconds, configure it with getenv This commit attempts to fix #3800. The first part of the fix, reverting the timeout to 60 seconds, is successful. The second part, all... — committed to Kotlin/kotlinx.coroutines by dkhalanskyjb 7 months ago
- Rollback runTest timeout to 60 seconds, configure it with getenv (#3945) This commit attempts to fix #3800. The first part of the fix, reverting the timeout to 60 seconds, is successful. The secon... — committed to Kotlin/kotlinx.coroutines by dkhalanskyjb 7 months ago
Just to chime in on this: we have thousands of tests, and we're quite affected by the new coroutines change dropping the timeout from 60s to 10s. Some teams use mockk, some use Robolectric, and some simply do more work in their tests (e.g. setting up Koin).
Not having a global timeout that we can change now forces us to spend developer hours finding the tests that have become flaky. This is also quite tricky, considering that most tests run fine under normal circumstances but somehow fail when run on an older Intel Mac, or on a slow CI instance, or when some other application (Slack/Teams/Chrome/etc.) wastes CPU cycles on a personal developer machine, or when some Gradle workers decide to run on the same CPU.
Having a (let’s say opt-in or deprecated) timeout that we could change globally would have at least eased our pain, because then we would have had more time to upgrade.
Another plus for a configurable global runTest timeout: by setting it to an extreme value (e.g. 1 second) we would be able to proactively identify slow tests instead of waiting for them to become flaky.
This solution assumes that there is a base class that all tests inherit from, which is not the case in our project, nor in many projects I know of.
It would be more reasonable if there were a simple way, in a multi-module project, to specify a custom test runner for all unit tests in all modules, but there isn't.
One more problem with this proposal is that it will slow down every single affected test because of the ByteBuddy init, even tests that do not use mocking, and make the whole test suite slower, not faster.
We do not override `setMain` by default because it's unnecessary for most tests, so there is no base test class. The situations of slow test init and `setMain` are very different: if `setMain` is required, the test will fail immediately and will be fixed, but a timeout caused by long environment init will most probably pass locally, and maybe even on CI, and become a flaky test.
10 seconds is a very arbitrary timeout, and an option to configure it would be very beneficial. I see at least two reasonable scenarios: raising the timeout for environments with slow initialization, and lowering it to enforce that tests stay fast.
Calling `initializeByteBuddy` before the test works for the first case; it doesn't work for the second case, when I just want to make sure that my tests are fast. There will never be a timeout that works for all projects, and the suggestion to significantly change the structure of all tests by using a user-side wrapper on top of `runTest` or by having some base class doesn't look very friendly for the projects affected by this. Another issue is that it increases the chance of developer error: if someone accidentally uses plain `runTest` or creates a test class without extending the base class, the test will probably pass but will introduce flakiness to the whole suite.
Using `jvmArgs` for the JUnit runner would probably be the best solution: at least it would be configured not at the level of a particular test but at the level of a module (and for most projects it would be easy to apply it to all modules if necessary, or only to the ones that have a larger classpath, mocking libraries, and so on).
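For illustration, the module-level wiring described here could look like this (the system property name is hypothetical; nothing in this thread fixes a specific name):

```kotlin
// build.gradle.kts -- set a hypothetical timeout property for all tests
// in this module via jvmArgs, as suggested above.
tasks.withType<Test>().configureEach {
    jvmArgs("-Dkotlinx.coroutines.test.timeout=60s")
}
```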
After a small interview with @mreichelt, here are some crucial points that I believe weren’t raised before but are relevant to real-world projects that are large enough:
- Some of the slow initialization can be moved into a `@BeforeTest` function, but not always; there's no clear way to make such tests run faster.

I think these are very solid arguments for restoring the 60-second timeout and providing a well-established way to configure it globally. Let's consider two groups of projects: teams with excellent engineering quality, and typical projects.
In the first group, a developer reads the documentation of `kotlinx-coroutines-test`, finds the global property, messes around with the build system, and hopefully succeeds in setting the timeout to something more appropriate for their project. All of this within the 20 minutes that the developer allocated to work on this. In the second group, the more likely outcome is ad-hoc timeouts sprinkled over individual `runTest` calls that no one will be able to explain later.
From this perspective, I get where the "cumulative developer-years" figure came from and now am inclined to agree.
If 10 seconds is a timeout that’s prone to letting most tests pass most of the time (as opposed to 2 seconds, which won’t let most slow tests pass even on the developer’s machine, or 60 seconds, which is reserved only for outliers), many typical projects can be negatively affected by us choosing it. On the other hand, the excellent-engineering-quality teams won’t significantly benefit from any specific timeout that we’d set if we provide a way to configure it, as they will typically do their homework, as with everything, and will be able to tinker with the setting.
Huge kudos to @mreichelt for getting us out of this stalemate!
As mentioned before, there might be multiple reasons for tests being slower: mockk/ByteBuddy initialization, Robolectric, dependency-injection setup such as Koin, or tests simply doing more work.
Most of these tests should actually fit into the 10s threshold if run individually under good circumstances. But the circumstances might not be so good: older machines, slow CI instances, other applications or parallel Gradle workers competing for CPU.
Don’t get me wrong: I think having a 10s timeout can actually lead to some good, if we carefully migrate. But is the change from 60s to 10s breaking things for many developers? Absolutely yes. Will they now spend cumulative developer years trying to fix all the things? Also yes.
If we consider this to be a breaking change, then it becomes clearer what a good migration could look like:
Let me list the things that it looks like we agree on:
What we don’t know yet:
I think it would be more productive to focus on these questions. They are important because a system property is a very subpar API and should be avoided. Having some magic incantation in the depths of `build.gradle.kts` is problematic. Because of all this, in this project, system properties are usually reserved for workarounds for tricky, uncommon requirements.
@dkhalanskyjb Thanks for your last comment. It looks very reasonable to me.
I would also add a bit of context about "The test gets fixed": I'm fine with a test failing if it takes too long because of an issue in the test itself (i.e. the code of the test). But tests often rely on the test environment (classpath, additional services, app initialization logic, the mocking framework, and so on), and one particular test can rarely fix that. Considering that the test execution order is not deterministic, this makes warm-up cumbersome and error-prone. Of course, it can be fixed by encapsulating all of this potentially slow initialization in some abstraction with memoized init, but that is quite a significant technical and API challenge. This is why I see a global baseline for the test timeout as crucial to supporting a wide range of projects with different needs.
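For illustration, a rough sketch of such a memoized warm-up abstraction (all names here are made up; the mockk call mirrors the example at the top of this issue):

```kotlin
import io.mockk.mockk
import java.util.UUID
import kotlin.test.BeforeTest

// Sketch: the expensive environment init (the first mockk call, which installs
// the ByteBuddy agent) runs at most once per JVM, regardless of which test
// class happens to execute first. `lazy` is thread-safe by default, so
// parallel test classes share a single initialization.
object TestEnvironmentWarmup {
    private val warmedUp: Boolean by lazy {
        mockk<UUID>() // triggers ByteBuddy agent installation
        true
    }

    fun ensure() = check(warmedUp)
}

// Tests opt in by extending a base class -- which is exactly the structural
// change the comments above consider costly and error-prone.
abstract class WarmedUpTest {
    @BeforeTest
    fun warmUp() = TestEnvironmentWarmup.ensure()
}
```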
Small note: thanks to the tip by @dkhalanskyjb to force `kotlinx-coroutines-test:1.6.4` in our dependencies, we were able to temporarily fix our issues with a flaky test suite and still use coroutines 1.7.x. This is a good workaround until we can move to a proper solution, and it allows us to have a look at the slow tests in our own time. 👍
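For reference, a sketch of how that pin could look in a Gradle Kotlin DSL build (`strictly` prevents transitive dependencies from upgrading just the test module; adjust to your build setup):

```kotlin
// build.gradle.kts -- keep only the test module on 1.6.4 while the other
// coroutines artifacts stay on 1.7.x.
dependencies {
    testImplementation("org.jetbrains.kotlinx:kotlinx-coroutines-test") {
        version { strictly("1.6.4") }
    }
}
```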
A heads-up: the lead about Molecule turned out to be very valuable. Digging in deeper, I discovered that Molecule has nothing to do with the problem, and the warm-up procedure works only somewhat incidentally. The real culprit is the coverage checking enabled for these tests: the coverage framework performs bytecode instrumentation for all classes it encounters. So, the reason warm-up works is that it touches sufficiently many classes that this instrumentation doesn't happen as much during the test execution itself.
This certainly rules out the option to patch the timeout on a case-by-case basis: here, the time losses are not concentrated in flipping some global switch like "ByteBuddy was successfully initialized"; they are spread throughout execution, even if they are much more pronounced when the tests are just starting to execute. The option to special-case timeouts so that a couple of them may slightly miss the time goal also no longer seems all that robust: I'm not even sure the Element X project won't need to introduce another warm-up procedure like the first one, but touching another set of classes largely disjoint from the first. So how many tests should be allowed to miss the mark? 2? 3? 5? This behavior would be a bit too brittle and magical.
The problem in the Element X project also seems general enough to potentially affect other projects: what if the code is heavily instrumented, significantly slowing it down?
So far, the solution seems to be twofold:
I looked into detecting whether ByteBuddy is installed, and it seems easy enough. Mockk seems to work via `ByteBuddyAgent` (https://www.javadoc.io/doc/net.bytebuddy/byte-buddy-agent/1.7.3/net/bytebuddy/agent/ByteBuddyAgent.html), and `getInstrumentation` allows checking whether the instrumentation was already installed. One check before `runTest` and one check after a timeout: if they aren't equal, then prolong the timeout a bit.

If anyone has any specific arguments for why 10 seconds is not enough time for a test, now is the time to provide them.
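For concreteness, a sketch of that check (the helper is hypothetical; `ByteBuddyAgent.getInstrumentation` is from the byte-buddy-agent library linked above):

```kotlin
import net.bytebuddy.agent.ByteBuddyAgent

// Hypothetical helper: ByteBuddyAgent.getInstrumentation() throws
// IllegalStateException when no agent has been installed in this JVM yet.
fun byteBuddyAgentInstalled(): Boolean =
    try {
        ByteBuddyAgent.getInstrumentation()
        true
    } catch (e: IllegalStateException) {
        false
    }

// The idea from the comment above: check once before runTest and once after
// a timeout; if the two results differ, the agent was installed mid-test,
// so the timeout could be prolonged a bit.
```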
Personally, I would be firmly against such a solution: it tries to solve what is not the problem and moves the burden of decision onto the coroutines library instead of letting the owner of the code base decide. Who knows how many similar cases are out there, and if the slow init is caused by your own code, there is no way to add a hack for it at the level of kotlinx.coroutines. If I have my own project or library with a long init (say, a dynamic network class loader, an external mock server starting on a separate machine, or any other complex setup), it will be affected by this and cannot easily be fixed. As stated in the previous comment by @mreichelt, it takes a lot of time just to find the affected tests, let alone fix them.
@dkhalanskyjb Let's discuss it, but I feel that until this issue is solved, kotlinx.coroutines should revert the 10-second timeout and return to the 60-second one. It's not perfect, but at least it reduces the chance of false-positive timeouts.
Let’s have a look at that linked PR that “fixes the test flakiness”: https://github.com/vector-im/element-x-android/pull/1226
That is 406 lines of code, which contribute to the overall test complexity and draw developers' attention away from what really matters.
“Add this warmup rule whenever you write a test” puts a lot of burden on developers, and it will also introduce a lot of new feedback loops. Sometimes developers will forget the rule, and they won't notice it directly, as it's just another source of flakiness. This will lead to situations where a new PR suddenly starts failing even though it's completely unrelated to the changed code.
A property would fix the JVM tests. For us, that's the vast majority. Native tests are another story: I have no real data to say whether 10 seconds is a good default for all currently existing multiplatform targets. From our observations, running tests on iOS/watchOS simulators can sometimes be really slow, due to things I don't understand.
My preferred solution for now would be to set the default back to one minute and (maybe) introduce a property to change it on the JVM.
`runTest` is required to be used as an expression body for multiplatform tests, where the return type of the function changes based on the platform.
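For context, this is the multiplatform pattern being discussed (a minimal sketch using the public kotlinx-coroutines-test API):

```kotlin
import kotlinx.coroutines.test.TestResult
import kotlinx.coroutines.test.runTest
import kotlin.test.Test

class ExpressionBodyExample {
    // On some targets (notably JS), TestResult is asynchronous, so the test
    // function must return runTest's result; hence the expression-body form.
    @Test
    fun myTest(): TestResult = runTest {
        // test body
    }
}
```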
I’d like to ask everyone once again to avoid descending into theoretic rhetoric and what-ifs that don’t add anything to the discussion. We aren’t proving theorems here, where a single theoretical counterexample breaks the whole logic, but doing engineering.
Examples of good contributions, the ones that meaningfully help:
Thank you, these are valuable leads that explain more about why some tests may take too long! On the other hand, just restating your points or aggressively trying to push your preferred solution won’t get us anywhere.
Also, I don’t understand the sense of desperation that seems to creep into the discussion, like
First of all, what's going on here? Where is this dramatic exaggeration even coming from? Reading this, one could get the impression that an industry-wide outage is in effect, even though the overwhelming majority of tests will always finish in a second, no matter if you're running a dozen copies of Slack in parallel.
Even if tests that are slower are common, the change to the timeout only exists since 1.7.0, and not much has happened in the test framework since 1.6.4. If this change causes much grief, are there any problems with sticking to `kotlinx-coroutines-test:1.6.4` until we resolve this somehow? We won't leave you hanging. If you have shared the details of your problem here, it will be accounted for.

I think my question, “is 10 seconds enough?”, wasn't detailed enough.
Here are some examples of just how long 10 seconds is:
The point is that 10 seconds is a lot of time for a computer, and if you hit this limit somehow, there’s a good chance your computer is doing a lot of meaningless work. (And you’re also literally wasting cumulative developer-years of extra time staring at the screen and waiting for the tests to run). Of course, some tests are expected to take more than a couple of seconds to run, and that’s mostly stress tests and exhaustive enumeration tests. But such tests are not at all standard!
So, the hypothesis under which we operate is that 2-4 seconds should be enough for almost any given test. Under especially bad circumstances, it should be able to run in 8 seconds. 10 should be enough even under bad circumstances. If it’s not, then 10 seconds is not enough for that test.
At least—I should emphasize this—it’s the way that I personally currently understand the situation. I may well be wrong. Hence the question: does anyone have sprawling codebases that contain lots of tests that are not just burning the CPU time for no clear reason but actually do something meaningful? What kinds of tests are these? @mreichelt, you mentioned some tests “just doing a lot”—could you elaborate?
We could close this issue today by restoring the timeout to 60 seconds and/or introducing a JVM system property to control it (I don’t think anyone mentioned encountering this issue outside the JVM). No big deal. Most people would walk away happy, and we’d all forget about this. The problem is that this would be the lazy, suboptimal way out, and we still have the time to do better than that.
There are three conflicting goals at play:
With a 10-second timeout, we prioritized goal 1, compromising goal 2, causing some headaches. We should improve w.r.t. goal 2—while minimally compromising goals 1 and 3.
There are several different universes, and we don’t know yet in which we live:
Which one is it?
I was not aware of that requirement for multiplatform, good point. But it's still valid for pure Java/Android projects. I don't have much experience with KMP, but I also wonder: does multiplatform require an expression body, or just a `TestResult` as the return value of the function? Because then the suggested idea would still apply by simply returning the result of the `runTest` executed inside the function body. What would be the problem with such an approach?

One time, yes, but there are multiple cases when one wants to run tests only in a single module, or in a single file, or a particular test (during development); in all those cases, a single test run will load ByteBuddy even when it's not needed.
Also, it's not necessarily a one-time cost, because multiple parallel test executions in different classloaders are possible, and each of them will pay this initialization cost.
Any slow init will cause this; the fact that ByteBuddy is mentioned so often is because it's part of the most popular mocking library, so many people are affected by it. That doesn't mean there are no other cases. I don't see why ByteBuddy-specific handling is needed: let users override the timeout and keep 10 seconds as the default.
Why not just expose the test timeout via a system property? For one project, 10 seconds can be too little; for another, too large.
I don't see any actual reason why the timeout should be 10 seconds specifically, and not 100, or 5.