BackstopJS: BackstopJS hangs in Docker container
I keep running into a problem where headless Chrome hangs forever until the script ends. Here’s the output I’m seeing:
Starting Chromy: {"chromeFlags":["--disable-gpu","--force-device-scale-factor=1","--disable-infobars=true","--no-sandbox","--window-size=320,480"],"port":9222,"waitTimeout":30000,"visible":false}
Starting Chromy: {"chromeFlags":["--disable-gpu","--force-device-scale-factor=1","--disable-infobars=true","--no-sandbox","--window-size=1024,768"],"port":9223,"waitTimeout":30000,"visible":false}
Starting Chromy: {"chromeFlags":["--disable-gpu","--force-device-scale-factor=1","--disable-infobars=true","--no-sandbox","--window-size=1600,900"],"port":9224,"waitTimeout":30000,"visible":false}
9223 Chrome v61 detected.
9223 ***WARNING! CHROME VERSION 62 OR GREATER IS REQUIRED. PLEASE UPDATE YOUR CHROME APP!***
9222 Chrome v61 detected.
9222 ***WARNING! CHROME VERSION 62 OR GREATER IS REQUIRED. PLEASE UPDATE YOUR CHROME APP!***
9224 Chrome v61 detected.
9224 ***WARNING! CHROME VERSION 62 OR GREATER IS REQUIRED. PLEASE UPDATE YOUR CHROME APP!***
bash-4.3#
This uses the same Dockerfile as the one in the repo. I’m not sure why it’s hanging, and I don’t get any real output. Switching the engine to PhantomJS works, and judging from the output of the debug flag, it does reach the webpage. I tried adding some flags to `docker run` after reading this project: https://github.com/yukinying/chrome-headless-browser-docker . I also tried adding the `--no-sandbox` option to `chromeFlags`, but to no avail.
So I’m not sure whether this is a headless Chrome issue, a Chromium v61 issue, or an Alpine issue, and I’m also not sure how to debug this further.
If you have any insight as to things I could be doing to make progress, please let me know.
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 18 (6 by maintainers)
Commits related to this issue
- Changes in dockerfile: - Changed base image to none alpine. Allowing new chrome version. - Updated Chrome to the latest (64). Potentially fixing issues #603 #537 — committed to iain17/BackstopJS by iain17 6 years ago
- Changes in dockerfile: (#668) - Changed base image to none alpine. Allowing new chrome version. - Updated Chrome to the latest (64). Potentially fixing issues #603 #537 — committed to garris/BackstopJS by iain17 6 years ago
- [start] `.travis.yml` Does: - Require a decent version of Node (v0.10.48 or something is... a tad old) - Plan for sufficient resources for Chrome (https://github.com/garris/BackstopJS/issues/603#issu... — committed to epfl-si/elements by deleted user 3 years ago
I just added a link to this issue in the Docs.
Ok… I solved the issue. It was a Docker issue all along. 😓 Linking the reference material that got me to the solution (found, ironically, while researching alternatives to Chromy, which led me to Puppeteer): https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md#tips
Essentially, adding `--shm-size=2gb` and `--cap-add=SYS_ADMIN` to my `docker run` command let Chrome have enough shared memory to finish running the test. What was probably happening was that the Chrome process was running out of shared memory and couldn’t finish loading the page (huge page, lots of assets, etc.). When that occurred, it was treated as a gotoTimeout, because the page couldn’t load within the shared memory Docker provides by default (64 MB).
After increasing the shared memory size to 2 GB, the process completed successfully. 😄
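For reference, the fix described above amounts to something like the following. This is a sketch: the image name and mounted path are placeholders, not the actual command from this thread — substitute your own image and BackstopJS project directory.

```shell
# Hypothetical invocation; "your-backstop-image" and the /src mount are
# placeholders. --shm-size raises /dev/shm from Docker's 64 MB default,
# and --cap-add=SYS_ADMIN gives Chrome the privileges it needs to run
# sandboxed inside the container.
docker run --rm \
  --shm-size=2gb \
  --cap-add=SYS_ADMIN \
  -v "$(pwd):/src" \
  your-backstop-image test
```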
@kiran-redhat When I saw this issue, either larger pages or larger screencap sizes seemed to require more shared memory — I don’t remember which, or whether I ever pinned down the difference. More concurrency (a higher `asyncCaptureLimit`) obviously made it happen more often, but I remember having the problem on some pages/captures even with `asyncCaptureLimit = 1` until I raised the shared memory available to Docker.
I did not need 2 GB for my tests to run at `asyncCaptureLimit = 10`; 512 MB was enough. But if your pages/captures are large enough, maybe you need even more, or maybe there’s another problem.
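Before tuning anything, it can help to check how much shared memory the container actually has. This is a minimal sketch; the 512 MB threshold is just the value that worked for me above, not a hard requirement.

```shell
# Print /dev/shm capacity and flag Docker's 64 MB default as likely too small.
# df -k reports sizes in 1 kB blocks; the second output line, second column,
# is the total size of the filesystem mounted at /dev/shm.
shm_kb=$(df -k /dev/shm | awk 'NR==2 {print $2}')
if [ "$shm_kb" -lt 524288 ]; then  # 524288 kB = 512 MB
  echo "/dev/shm is ${shm_kb} kB - likely too small for headless Chrome"
else
  echo "/dev/shm is ${shm_kb} kB - probably enough"
fi
```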
I don’t believe the actual error I got involved `GotoTimeoutError`. I think it was `WaitTimeoutError: evaluate()`. I could be wrong — I no longer have the error in front of me, but I remember it matching #537 pretty closely.

When debugging, I was able to run `df` and see the usage of /dev/shm going up, and run `ps` to see Chrome processes running and then getting killed (all in the backstop container while backstop was running). I also turned on enough logging (some by tweaking the src of installed js libs — again, in the container) to actually see Chrome being killed with a bus error trying to address invalid or out-of-bounds memory addresses. Good luck!
Even with a `gotoTimeout` of 5+ minutes, the process silently fails and eventually receives the exit signal with code 0.
Added to `cli/index.js:50`:

And then in my docker container’s bash:

Note that the process doesn’t even enter the `report` portion.

Edit: Kept tracking all the way down into Chromy itself. I’m not sure why, but the process just hangs on the `this.client.Page.navigate({url: url})` call. That’s controlled by `chrome-remote-interface`, which just fails after the timeout and causes Chromy to throw `GotoTimeoutError`. But it still ends the process, which is annoying. 😕