cypress: Cypress crashes with "We detected that the Chromium Renderer process just crashed."
Test code to reproduce
I cannot provide the specific test code I am using to reproduce this, since it contains internal company data. What I can tell you is that these tests run for quite a long time (5-10 minutes), as they contain many steps through the E2E flow of our application.
Cypress Mode
cypress run
Cypress Version
12.13.0
Browser Version
Chrome 110.0.5481.96
Node version
v16.18.1
Operating System
Debian GNU/Linux 11 (Docker image cypress/browsers:node-16.18.1-chrome-110.0.5481.96-1-ff-109.0-edge-110.0.1587.41-1)
Memory Debug Logs
Only the end of the failing test results from the memory dump have been included due to limitations of this input form.
[
{
"checkMemoryPressureDuration": 7764.029118001461,
"testTitle": "Crashing test (example #1)",
"testOrder": 18.2,
"garbageCollected": true,
"timestamp": 1690553650738
},
...
{
"getRendererMemoryUsageDuration": 2.619260013103485,
"totalMemoryWorkingSetUsed": 6995816448,
"getAvailableMemoryDuration": 58.69076597690582,
"jsHeapSizeLimit": 4294705152,
"totalMemoryLimit": 9223372036854772000,
"rendererProcessMemRss": 5469155328,
"rendererUsagePercentage": 127.34646813770371,
"rendererMemoryThreshold": 2147352576,
"currentAvailableMemory": 9223372029858955000,
"maxAvailableRendererMemory": 4294705152,
"shouldCollectGarbage": true,
"timestamp": 1690553801030,
"calculateMemoryStatsDuration": 58.72436600923538
},
{
"getRendererMemoryUsageDuration": 2.208419978618622,
"totalMemoryWorkingSetUsed": 5089853440,
"getAvailableMemoryDuration": 61.31387501955032,
"jsHeapSizeLimit": 4294705152,
"totalMemoryLimit": 9223372036854772000,
"rendererProcessMemRss": 0,
"rendererUsagePercentage": 0,
"rendererMemoryThreshold": 2147352576,
"currentAvailableMemory": 9223372031764918000,
"maxAvailableRendererMemory": 4294705152,
"shouldCollectGarbage": false,
"timestamp": 1690553802092,
"calculateMemoryStatsDuration": 61.33369600772858
},
{
"getRendererMemoryUsageDuration": 2.69021999835968,
"totalMemoryWorkingSetUsed": 1682976768,
"getAvailableMemoryDuration": 50.05962598323822,
"jsHeapSizeLimit": 4294705152,
"totalMemoryLimit": 9223372036854772000,
"rendererProcessMemRss": 0,
"rendererUsagePercentage": 0,
"rendererMemoryThreshold": 2147352576,
"currentAvailableMemory": 9223372035171795000,
"maxAvailableRendererMemory": 4294705152,
"shouldCollectGarbage": false,
"timestamp": 1690553803143,
"calculateMemoryStatsDuration": 50.07922601699829
},
{
"getRendererMemoryUsageDuration": 2.889739990234375,
"totalMemoryWorkingSetUsed": 1682792448,
"getAvailableMemoryDuration": 60.31445497274399,
"jsHeapSizeLimit": 4294705152,
"totalMemoryLimit": 9223372036854772000,
"rendererProcessMemRss": 0,
"rendererUsagePercentage": 0,
"rendererMemoryThreshold": 2147352576,
"currentAvailableMemory": 9223372035171979000,
"maxAvailableRendererMemory": 4294705152,
"shouldCollectGarbage": false,
"timestamp": 1690553804204,
"calculateMemoryStatsDuration": 60.33361500501633
},
{
"getRendererMemoryUsageDuration": 2.6974300146102905,
"totalMemoryWorkingSetUsed": 1682558976,
"getAvailableMemoryDuration": 225.94400304555893,
"jsHeapSizeLimit": 4294705152,
"totalMemoryLimit": 9223372036854772000,
"rendererProcessMemRss": 0,
"rendererUsagePercentage": 0,
"rendererMemoryThreshold": 2147352576,
"currentAvailableMemory": 9223372035172213000,
"maxAvailableRendererMemory": 4294705152,
"shouldCollectGarbage": false,
"timestamp": 1690553805431,
"calculateMemoryStatsDuration": 225.9711429476738
}
]
Other
Our test specs that contain multiple long-running tests are prone to crashing mid-run in CI. This seems to be more likely when there are test retries in the run. We are running with both experimentalMemoryManagement set to true and numTestsKeptInMemory set to 0. We also have the memory and CPU allocation on our GitLab runners set quite high (see below). Despite this, we still get the crashes. Example:
Some top level test description
(Attempt 1 of 4) A test scenario containing a scenario outline template (example #1)
(Attempt 2 of 4) A test scenario containing a scenario outline template (example #1)
✓ A test scenario containing a scenario outline template (example #1) (849857ms)
✓ A test scenario containing a scenario outline template (example #2) (360954ms)
✓ A test scenario containing a scenario outline template (example #3) (556574ms)
(Attempt 1 of 4) A test scenario containing a scenario outline template (example #4)
We detected that the Chromium Renderer process just crashed.
This can happen for a number of different reasons.
If you're running lots of tests on a memory intense application.
- Try increasing the CPU/memory on the machine you're running on.
- Try enabling experimentalMemoryManagement in your config file.
- Try lowering numTestsKeptInMemory in your config file during 'cypress open'.
You can learn more here:
https://on.cypress.io/renderer-process-crashed
Here are the memory allocations we are providing in GitLab CI:
KUBERNETES_CPU_REQUEST: "3"
KUBERNETES_CPU_LIMIT: "4"
KUBERNETES_MEMORY_REQUEST: "12Gi"
KUBERNETES_MEMORY_LIMIT: "32Gi"
It should be noted these tests run within docker in CI, and are running in the cypress/browsers:node-16.18.1-chrome-110.0.5481.96-1-ff-109.0-edge-110.0.1587.41-1 version of the cypress image.
We are utilizing the cypress-cucumber-preprocessor library, but I do not believe that is having any impact on this issue.
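For reference, the two settings mentioned above are plain Cypress config options. A minimal sketch of a cypress.config.js (written as a plain object here so it stands alone; in a real project you would typically wrap it in defineConfig from the cypress package):

```javascript
// cypress.config.js -- minimal sketch of the settings discussed in this issue.
// In a real project this object is usually wrapped in defineConfig() from 'cypress'.
const config = {
  e2e: {
    // Have Cypress check renderer memory pressure and force GC between tests
    experimentalMemoryManagement: true,
    // Keep no finished test snapshots in memory (mainly affects 'cypress open')
    numTestsKeptInMemory: 0,
  },
};

module.exports = config;
```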
About this issue
- Original URL
- State: open
- Created a year ago
- Reactions: 17
- Comments: 64 (8 by maintainers)
@jennifer-shehane Hi! This has become a major blocker for my team in the past few months. We’re all running M2 MBPs with 32 GB of RAM, and are unable to test locally in any efficient manner because of these memory issues. Like others have stated, we’ve enabled experimentalMemoryManagement and reduced the numTestsKeptInMemory to 0 with no luck.
I understand that there may be multiple factors at play here, but this thread dates back to last year, with suggestions to use Chrome versions almost 20 releases old, or non-standard browsers. This can waste anywhere from 1-2 hours per developer per day, which quickly scales up to thousands of hours per year lost across the org. In an era where >50% of web traffic comes through Chromium browsers, using non-Chromium browsers as a workaround is not a real solution. Other than your diligence in this thread, which is greatly appreciated, what actions has the Cypress team taken to resolve this issue?
@jennifer-shehane
I have produced an example that reliably reproduces this issue: https://github.com/Lasidar/cypress-memory-crash-repro.
Now, to answer your specific suggestions…
Updated to the latest but still seeing the crash.
We are using the latest version of Chrome available, 122.
Already on.
The issue seems to happen mostly when Test Replay is off; I believe that when Test Replay hides the sidebar, it reduces the memory footprint of the renderer process.
We tried both with and without. The crash still happens in either case.
Practically speaking, this is not always an option. Some of our E2E tests have an indivisible flow that is large by the nature of what is being tested. Besides, I do not accept this as a solution: we need to root-cause the crash instead of pushing it back on test developers to work around. I hope the example I have provided will give the Cypress team more insight into the cause.
When will Cypress fix this issue, or give us a workaround that doesn't involve downgrading away from Cypress 13? I am a big fan of Test Replay.
Hi @jennifer-shehane. I have some more data to share with you.
Since it was suggested that this is an issue with the plugin cypress-cucumber-preprocessor, of which we were using version 4.3.1, we updated to the latest maintained version, @badeball/cypress-cucumber-preprocessor@20.0.2, but the issue was still observed. Additional logs have been provided under /logs_after_plugin_update in https://github.com/Lasidar/cypress-memory-crash-repro.
As a comparison point, we also created a new version of this example without Cucumber, under YahooItemsFailNoCucumber.ts. This example also crashed. It should be noted this was with the plugin still installed but not used. Results for this run are found in CrashingRun_NoCucumber_WithVideo_Chrome.txt.
As a final comparison, we uninstalled the Cucumber plugin completely and re-ran YahooItemsFailNoCucumber.ts. The memory crash still occurred. Results can be found in CrashingRun_NoCucumberUninstalled_WithVideo_Chrome.txt. To run this example, check out the branch called cucumber-uninstalled and execute npx cypress run --e2e --browser chrome -c video=true --spec cypress/tests/YahooItemsFailNoCucumber.ts.
I hope this additional detail helps us get closer to a solution, as neither upgrading the Cucumber plugin nor removing it entirely seems to have resolved the issue.
@jennifer-shehane does the Cypress team have any other insights as to what the cause might be? If we've ruled out Cucumber, video capture doesn't seem to be the culprit, we have experimentalMemoryManagement on, and have updated everything to the latest versions, what does that leave? Is it possible there is a memory leak in the Cypress process(es) themselves?
It seems as if Chrome has been overwhelmed by the layout of our application plus Cypress since version 115. Our application runs unstably when the command-log window is displayed. As soon as an action is performed in the application that writes a new entry to the command log, the browser crashes. This is usually not easy to reproduce, but we were able to reproduce it in a test at some point. Debugging the Chrome crash dumps didn't help us either. Everything was stable up to version 114 of Chrome. Our solution for now is to set CYPRESS_NO_COMMAND_LOG: 1. Maybe it will help someone. We hope that at some point the problem will solve itself with a Chrome update or with a simpler Cypress layout. Unfortunately, this is not easy to reproduce in a small sample application.
@nagash77 I understand that actually finding the root cause here will be extremely difficult. One thing I did find interesting when analyzing the memory usage leading up to the crash is that between tests, the memory usage never returns to a baseline level; it is continually increasing. This is even with experimentalMemoryManagement set to true and numTestsKeptInMemory set to 0. I find this strange, since I am not sure what would be persisting in memory between tests.
If a true root-cause fix is not possible, I have a few possible suggestions: rather than going through the before:browser:launch trigger, the renderer memory limit could be set more transparently via a test configuration option, and the memory statistics (rendererProcessMemRss used, jsHeapSizeLimit, etc.) could be exposed.
@Lasidar I'm not sure, we'd have to spend some more time investigating.
I am having the same issue. It crashes with Chrome 115, but if I download the stable 114 from https://chromium.cypress.io/mac/ and then run with --browser [path to the downloaded Chrome], it works.
I've got a case where the browser crashes into "Oh Snap" with a single test, after performing the simple operation of opening a popup. The only change that helps is adding CYPRESS_NO_COMMAND_LOG; nothing else works. Also important to mention: the problem persists from Cypress v10 to v12 and v13.
I'm getting this error for the first time: "We detected that the Chromium Renderer process just crashed." Cypress v12.18.
@Lasidar, thanks for your follow-ups on this issue, it's helpful.
I'd like to add my 2 cents to all this. We have recently come across this problem with the following setup:
1) Cypress v13.2.0 (cypress/included:13.2.0), Chrome 116, Linux Debian
2) 4 CPU
3) 15 GB RAM
When we set all this up, we could not even run at this concurrency, because we ran into "We detected that the Chromium Renderer process just crashed."
What helped us to fix it?
Unfortunately, I haven't noticed any change after adding experimentalMemoryManagement or running with CYPRESS_NO_COMMAND_LOG=1 cypress run. In addition, when watching Test Replay I noticed that all commands are still visualized and present on the right side of the Cypress Test Runner. I assumed they should disappear; maybe I did not understand the final effect of this flag.
Currently, I can't say this error is a bug as we usually understand it, but rather inefficient memory management, or simply a workload too heavy for the hardware Cypress needs to perform well, which can often be fixed by just aligning your workload with the capacity of your instance.
I can easily provide a reproducible project where adding one more concurrent stream to my (4 stages x 4 CPU) parallelization setup will produce this error right away. But first, look at this performance analysis of a server, which I monitored during a Cypress parallel run, where we can see huge overloading of the CPU.
Maybe this will give some way forward for some of us. (Screenshots: Result 1, Result 2, Result 3, Result 4, Result 5)
The only other option I can suggest in that case is to try disabling the Cypress command log sidebar: https://docs.cypress.io/guides/references/troubleshooting#Disable-the-Command-Log
Others above have also had luck downgrading to chrome 109 or earlier.
@akondratsky I'm in the same boat. I run my tests on Azure runners (Ubuntu + Electron 106) on Cypress 13.1.0 with 16 GB of memory and they still crash, which is ridiculous! I enabled everything they said: numTestsKeptInMemory: 0, experimentalMemoryManagement: true.
@nagash77 Any news on this ongoing issue?
I started having the same problem this week. I already tried your solution in cypress.config.ts, with --max_old_space_size=4096, and it still crashes on a specific element. It crashes in Chrome and Edge, but works fine in Electron.
@jennifer-shehane
I don't have any custom preprocessors in my project, and I've been facing this Chromium renderer crashing issue for almost half a year now. I've tried all the solutions from this thread and nothing really solves it.
The only thing that kind of works is splitting up specs. For example, I had a spec with 10 tests that ran without any problems on older Cypress versions (below 12). Now I have 2 specs with 5 tests each, and they still occasionally crash. I don't think dividing them further really solves anything; it's just a workaround that creates a mess in the repository.
Unfortunately, I cannot share a link to my project.
@Lasidar Thank you for providing a reproducible example. We are able to reproduce the Chromium renderer crashing with this example.
We believe the memory usage in your example is coming from the use of the cucumber-preprocessor plugin. We removed the custom preprocessor and created an equivalent spec, and were able to run that spec without the memory crash. The memory hit the 50% threshold and then went down as expected with experimentalMemoryManagement. In the Cucumber spec, the memory kept climbing, indicating some sort of memory leak.
We'll need to do more investigation to determine whether the issue is with the Cucumber plugin itself or with how we handle custom preprocessors. I have heard of issues in the past with the Cucumber plugin's performance.
This doesn't offer a great solution in the interim while we investigate. One could remove the cucumber-preprocessor, but I know that is a big business decision with other ramifications.
Can confirm that Chrome is still crashing for me on Chrome 118 with Cypress 13.4.0 (cypress/browsers image tag cypress/browsers:node-20.9.0-chrome-118.0.5993.88-1-ff-118.0.2-edge-118.0.2088.46-1).
I have updated Chrome to the latest version (118) with Cypress 13, and that resolved the crash problem for me.
Our team faced this issue too. We use Cypress 12.17.4 with Electron, and all tests run in a Docker container (built on top of the CentOS image). We tried the suggestions from the article "Fixing Cypress errors part 1: chromium out of memory crashes", but none of them helped (we did not try the Chromium option).
It is worth noting that the issue appears on only one machine. The other three work fine, without any crashes. The same image, run in our Jenkins pipeline, also works smoothly. We've already checked our Docker configurations: they are the same. All of the machines except the Jenkins agent are under macOS, and it is apparent that different processors also do not play a crucial role here.
I wonder what can be the difference here. If anyone has any idea what we can check - please ping me. We will return to investigating tomorrow.
I have a speculation as to what is causing this, based on a few logical leaps, but I think they are reasonable ones. I believe the issue is mainly caused by the open issues involving Cypress and K8s (see https://github.com/Zenika/alpine-chrome/issues/109).
Looking at the logs in question, I noticed the jsHeapSizeLimit is being exceeded right before the crash.
This is strange, since we have 12GB allocated to the Gitlab runner, with the ability to scale up to 32GB if the process calls for it. But looking at the issue linked above, I believe this prevents the Chrome renderer from being able to scale up its heap memory usage, even if the system has capacity.
From looking at the Cypress source code, it seems the renderer process memory is pulled from the Chrome JS heap, and if we exceed this limit we are likely overrunning the heap, which probably leads to the crash.
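The memory dump earlier in this issue is consistent with that reading. The ratio below is my own inference from the field names, not a documented Cypress formula, but it reproduces the logged percentage:

```javascript
// Values copied from the first entry of the memory dump above
const rendererProcessMemRss = 5469155328;      // renderer RSS, ~5.1 GiB
const maxAvailableRendererMemory = 4294705152; // == jsHeapSizeLimit, ~4 GiB

// Inferred relation: rendererUsagePercentage = RSS / maxAvailableRendererMemory * 100
const rendererUsagePercentage =
  (rendererProcessMemRss / maxAvailableRendererMemory) * 100;

console.log(rendererUsagePercentage); // ~127.3465, matching the logged value
```

A percentage above 100 means the renderer's resident set already exceeds the heap ceiling Cypress considers available, which lines up with the crash that follows.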
So my workaround, at least temporarily, is to increase the Chrome jsHeapSizeLimit value. I was able to achieve this through additions to my cypress/plugins/index.js file.
This seems to have made my memory crashing issues go away for the time being. I believe the correct fix is for https://github.com/Zenika/alpine-chrome/issues/109 to be resolved since, as I mentioned above, I suspect that issue prevents the process from being able to properly scale up its heap size.