cli-microsoft365: Failing tests on spo list view field

As per the instructions in ‘Minimal Path to Awesome’, after forking I ran `npm test` and 4 tests fail, as below:

  1. spo list view field add uses correct API url when list id option is passed: Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure “done()” is called; if returning a Promise, ensure it resolves. (/Users/garrytrinder/Documents/GitHub/office365-cli/dist/o365/spo/commands/list/list-view-field-add.spec.js)

  2. spo list view field add uses correct API url when list title option is passed: Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure “done()” is called; if returning a Promise, ensure it resolves. (/Users/garrytrinder/Documents/GitHub/office365-cli/dist/o365/spo/commands/list/list-view-field-add.spec.js)

  3. spo list view field remove uses correct API url when list id option is passed: Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure “done()” is called; if returning a Promise, ensure it resolves. (/Users/garrytrinder/Documents/GitHub/office365-cli/dist/o365/spo/commands/list/list-view-field-remove.spec.js)

  4. spo list view field remove uses correct API url when list title option is passed: Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure “done()” is called; if returning a Promise, ensure it resolves. (/Users/garrytrinder/Documents/GitHub/office365-cli/dist/o365/spo/commands/list/list-view-field-remove.spec.js)

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 32 (32 by maintainers)

Most upvoted comments

We should not increase the timeout for tests. With proper mocking we should not need more time per test. An issue like the one you noticed is often a sign of something else, so whenever we hit something similar we should have a closer look at the failing test.

Sure, no problem 👍🏻

It’s interesting indeed. Let’s keep an eye on it to see if we can repro it. For now, let’s close this issue, ok?

Ah sorry, my bad. I’ll have another look.

Thanks for double checking @garrytrinder. I’ll have a look at it if I can repro it and find the reason why it’s happening. Appreciate your help 👏

I agree. Increasing the timeouts was just for troubleshooting purposes, to see if it was ever going to pass 😃

Happy to help 😃

I agree, I’m happy to just leave this open and check after next release 👍🏻

Thanks for your input @garrytrinder !

It is interesting how master worked without that so far. I am pretty sure I tested both master and dev on my Mac and the tests passed. I will try to investigate the root cause further when I have more time.

I’d suggest we close this issue once we merge master into dev, since you would not experience these issues then. We can always open a new one if the issue appears again.

This is very interesting. I have not seen that before. We can monitor that behavior and see if someone else experiences the same.

@garrytrinder, this should not be a show stopper for you to build the command, since you’re better off running only the tests you are adding for the new command. What I usually do is change the `package.json` test script to `"test": "nyc -r=lcov -r=text mocha \"dist/**/<NAME_OF_FILE_OF_MY_NEW_COMMAND>.spec.js\""` to speed up test execution while I develop commands. I do not commit that change to the upstream branch.
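For readability, that tweak as a `package.json` fragment (the glob placeholder is the file name of your new command’s spec; the surrounding script is taken from the comment above and may differ in newer versions of the repo):

```json
{
  "scripts": {
    "test": "nyc -r=lcov -r=text mocha \"dist/**/<NAME_OF_FILE_OF_MY_NEW_COMMAND>.spec.js\""
  }
}
```

Because Mocha receives a narrower glob, only the matching spec files are compiled coverage and run, which is much faster than the full suite during development.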