nock: Mocked request starts timing out after picking up v10.0.5
What is the expected behavior?
A minor version update of the nock library shouldn’t impact my existing tests.
What is the actual behavior?
All of my tests that use fake timers are now timing out.
Possible solution
Revert the fix for https://github.com/nock/nock/issues/677, or make it optional if it has to go as a minor version update.
How to reproduce the issue
Have `sinon.useFakeTimers()` in the test case.
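For reference, a minimal reproduction along these lines (the host name is just a placeholder) hangs under 10.0.5 because the mocked response never fires while fake timers are installed:

```js
// Minimal sketch of the failing scenario; example.test is a placeholder host.
const http = require('http')
const nock = require('nock')
const sinon = require('sinon')

const clock = sinon.useFakeTimers()
nock('http://example.test').get('/').reply(200, 'ok')

// Under nock 10.0.5 this callback never runs, so the surrounding test
// eventually hits its timeout (it did not before the 10.0.5 update).
http.get('http://example.test/', res => {
  console.log('mock responded with status', res.statusCode)
  clock.restore()
})
```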
Versions
| Software | Version(s) |
|---|---|
| Nock | 10.0.5 |
| Node | 8 |
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 18 (12 by maintainers)
Commits related to this issue
- fix: Mock responses should fire when timers are mocked (#1335) Revert #1270. Fix #1334. — committed to nock/nock by Chengxuan 5 years ago
- fix: Mock responses should fire when timers are mocked (#1336) Fix picked from #1335. Revert #1270. Fix #1334. — committed to nock/nock by Chengxuan 5 years ago
🎉 This issue has been resolved in version 10.0.6 🎉
The release is available on:
Your semantic-release bot 📦🚀
I’ll work on a PR then.
It’d be helpful to include a test, as well, that `lolex.install()` does not interfere with nock. Thanks @Chengxuan.
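Here is a rough sketch of the kind of regression test that could cover this; the mocha-style `it`/`done` structure and module layout are assumptions on my part, not necessarily how the nock suite is organised:

```js
const http = require('http')
const lolex = require('lolex')
const nock = require('nock')

it('delivers mocked responses while lolex fake timers are installed', done => {
  const clock = lolex.install()
  nock('http://example.test').get('/').reply(200)

  http.get('http://example.test/', res => {
    clock.uninstall()
    if (res.statusCode !== 200) return done(new Error('unexpected status ' + res.statusCode))
    done()
  })
})
```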
Could someone send a PR to revert #677? We will release it as a fix release.
Regarding the statement:
I completely agree, but I think that should be discussed in a different MR. In addition, it is certainly a breaking change if #677 is.
If the maintainers of nock deem it appropriate, I am happy to help contribute a fix. I think there are a few choices to fix it, but I agree with @chenghung. I think the best “fix” is to revert, mark it as a breaking change, and re-publish this change in v11. However, I can think of a few reasons the maintainers might not desire this.
I will defer to the maintainers on whether we should fix this. I can see it two ways.
First, assume that users of all assertion frameworks use fake timers at roughly the same rate. If we knew what percentage of JavaScript projects that use sinon also use fake timers, we could support or reject view 2. I would say that if more than 4% of projects use fake timers, then view 2 is supported. Others are welcome to submit different percentages. I tried to collect some data using open source projects, but it seemed inconclusive to me. If you are interested in the inconclusive results, keep reading.
Using `'mocha' filename:package.json extension:json language:JSON` as the search criteria across GitHub, I found 1,200,331 files. Since most projects are not likely to have more than one package.json, we can assume there are about 1.2 million projects using mocha.

Searching for `sinon useFakeTimers extension:js language:JavaScript` on GitHub returns 90,904 files. Searching for `lolex install extension:js language:JavaScript` on GitHub returns 11,166 files. This might be a close proxy for the number of files (not projects) using fake timers. Since this is an upper bound for the number of projects using fake timers, we can pretty safely say that this will break no more than 8.5% of projects using mocha.

If we assume that jest users use mock timers about as often as mocha users, we can add another data point. Using `jest filename:package.json extension:json language:JSON` as a search, we get 410,030 projects using jest. Using `jest useFakeTimers extension:js language:javascript` as a search, we get 5,993 files using fake timers. So jest has an upper bound of 1.5% of projects breaking from this change.

It is possible that projects using any testing framework, sinon, and fake timers do not break (for various reasons). However, this only makes the reported percentages looser upper bounds. If we want to go further, someone could try to clone a fraction of repositories and look for failures after upgrading nock, but that is fraught with peril.
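For what it’s worth, a quick back-of-the-envelope check of those upper bounds, using the raw search counts quoted above as inputs:

```js
// Raw GitHub search counts from the comment above.
const mochaProjects = 1200331
const sinonFakeTimerFiles = 90904
const lolexInstallFiles = 11166
const jestProjects = 410030
const jestFakeTimerFiles = 5993

// Upper bound on the share of projects that could be affected.
const mochaUpperBound = (sinonFakeTimerFiles + lolexInstallFiles) / mochaProjects
const jestUpperBound = jestFakeTimerFiles / jestProjects

console.log((mochaUpperBound * 100).toFixed(1) + '%') // ~8.5%
console.log((jestUpperBound * 100).toFixed(1) + '%')  // ~1.5%
```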
Looking at the quality of the search results, it is unclear which one is “closer” to reality. Perhaps we could use activity on this issue as a gauge for the number of upgrading projects that broke, but I am not sure how to gauge the total number of projects that upgraded without problems. To go further, one could try to clone a group of repos.