nightwatch: Parallel workers not running all tests and/or failing

Greetings,

A co-worker has created a POC (https://github.com/dieguito151/nightwatch-parallel) that demonstrates an issue currently hitting us: we are unable to run our tests consistently when they are spawned in parallel, i.e. when nightwatch.conf.js is set as below:

```js
// nightwatch.conf.js (excerpt)
test_workers: {
  enabled: true,
  workers: 'auto',
},
```

When running the tests you will notice that one of them is skipped and/or at least one fails. When running all tests sequentially (enabled: false) they all complete successfully.

Is there something we should correct in that POC? Thanks in advance!

  • Nightwatch version: ^0.9.14
  • NodeJS: v7.9.0
  • ChromeDriver on MacOS Sierra

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 20

Most upvoted comments

One thing I am not seeing in the configurations posted is live_output: true in nightwatch.conf.js. If you hit a failure running in parallel, the error output without that setting enabled will be next to impossible to debug and pinpoint. I definitely recommend enabling it after receiving an exit code 1 with parallel workers so you can see the actual error. Because of how Nightwatch handles globals, running them against each process, it is very possible to have errors with parallel workers enabled where running the tests individually produces none.

I tried several things, but the parallel configuration is not working for me. I had been writing a script that runs a cluster and starts Nightwatch processes in parallel, running the tests with different configurations and ports and retrying tests on failure. Our test suite is really complicated and performs many other operations before running each case, anyway.

While working on reordering the logs and researching a Selenium override configuration issue about workers, I found a new piece of technology that follows the same principle as my humble cluster program. It looks like the people from Walmart recently released a project called Magellan (more info here: http://testarmada.io). There is also a boilerplate project released in January of this year (@TestArmada/boilerplate-nightwatch, surprisingly only 35 stars). As it seems very promising, I played with it during the weekend. So I switched to something similar to this boilerplate:

I was able to re-use most of my code, and now it’s working really well; I just configured it and it seems very stable.

I achieved the following:

1. Every test case runs in a separate worker
2. Every test case retries on failure (max 3 times)
3. The suite runs in parallel

This is an interesting related article in case you want to read it: Why End-to-End testing sucks (and why it doesn’t have to).

Does anyone have a solution?

I’m running 15 tests all together; 3 of them halt, 1 fails, and the others are okay. They all pass when run sequentially.

And why is this issue closed?

@aamorozov Does it work for you? Could you tell me how I should add parallel_process_delay? And should I specify one worker per CPU core?