puppeteer: Puppeteer slow execution on Cloud Functions
I am experimenting with Puppeteer on Cloud Functions.
After a few tests, I noticed that taking a page screenshot of https://google.com takes about 5 seconds on average when deployed on the Google Cloud Functions infrastructure, while the same function tested locally (using `firebase serve`) takes only 2 seconds.
At first, I suspected a classic cold start issue. Unfortunately, even after several consecutive calls, the results remain the same.
Is Puppeteer (and transitively headless Chrome) so CPU-intensive that the largest 2GB Cloud Functions class is not powerful enough to match the performance of a mid-range desktop?
Could something else explain the results I am getting? Are there any options that could help to get an execution time that is close to the local test?
Here is the code I use:
import * as functions from 'firebase-functions';
import * as puppeteer from 'puppeteer';

export const capture =
    functions.runWith({memory: '2GB', timeoutSeconds: 60})
        .https.onRequest(async (req, res) => {
  // Validate input before paying the cost of launching Chromium.
  const url = req.query.url;
  if (!url) {
    res.status(400).send(
        'Please provide a URL. Example: ?url=https://example.com');
    return;  // without this, execution would continue and respond twice
  }
  const browser = await puppeteer.launch({
    args: ['--no-sandbox']
  });
  try {
    const page = await browser.newPage();
    await page.goto(url, {waitUntil: 'networkidle2'});
    const buffer = await page.screenshot({fullPage: true});
    res.type('image/png').send(buffer);
  } catch (e) {
    res.status(500).send(e.toString());
  } finally {
    await browser.close();
  }
});
Deployed with Firebase Functions using Node.js 8.
About this issue
- State: closed
- Created 6 years ago
- Reactions: 28
- Comments: 68 (28 by maintainers)
Many customers are successfully using puppeteer on Cloud Functions or App Engine.
We tested headless Chrome performance and were aware of these numbers before publishing the blog post. To sum up: this is part of the current tradeoff of using our pay-for-usage, fast-scaling managed compute products (Cloud Functions and the App Engine standard environment).
If performance is what you are optimizing for, Google Cloud Platform has many other compute options that let you run Puppeteer with better performance: take a look at the App Engine flexible environment, Google Kubernetes Engine, or simply a Compute Engine VM.
I ran some benchmarks again with chrome-aws-lambda and I noticed some improvements on Firebase.
The average timings I got with multiple URLs and warmed up functions were:
- `puppeteer` (2684 ms on Firebase 1GB)
- `chrome-aws-lambda` (1675 ms on Firebase 1GB)
- `chrome-aws-lambda` (1154 ms on AWS Lambda 1GB)

With `chrome-aws-lambda`, FCFs are “only” 45% slower than Lambdas (compared to 130%+ when using `puppeteer`). In light of this, I’ve added support for GCFs to my package, if anyone wants to try it out. Sample code (you need the Node 8 runtime for it):
This combination improves the speed a little:
I’m getting loading times of 3 seconds in local and 13 seconds in GCF.
@steren @ebidel
So I just cooked up the simplest possible benchmark to test only the CPU (no disk I/O or networking).
Here’s what I came up with:
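The benchmark code itself isn't preserved in this thread; a later comment mentions a Sieve of Eratosthenes, so a pure-CPU benchmark in that spirit might look like this (the limit is illustrative):

```javascript
// Illustrative pure-CPU benchmark: a Sieve of Eratosthenes involves no
// disk I/O or networking, so timings reflect only the CPU allocation.
function sieve(limit) {
  const composite = new Uint8Array(limit + 1);
  let count = 0;
  for (let p = 2; p <= limit; p++) {
    if (composite[p]) continue;
    count++; // p is prime
    for (let m = p * p; m <= limit; m += p) composite[m] = 1;
  }
  return count; // number of primes <= limit
}

const start = Date.now();
const primes = sieve(10000000);
console.log(`${primes} primes in ${Date.now() - start} ms`);
```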
I deployed this function on both AWS Lambda, and Firebase Cloud Functions (both using Node 8.10).
Then I serially called the Lambda/Cloud Function and noted down the times. No warm-up was done.
The 1GB Lambda is on-par with the 2GB FCF - although with much more consistent timings and no errors.
Weirdly enough, the errors reported on 1GB FCF were:
Not sure why that happens intermittently for a deterministic function. As for the 2GB FCF, the errors were:
Similar results are reported in papers such as these (there are quite a few!):
PS: Sorry if this is unrelated to PPTR itself, I’m just trying to suggest that CPU performance could be an important factor that explains why puppeteer performs so badly under GCF/FCF.
Google Cloud PM here.
Part of the slowness comes from the fact that the filesystem on Cloud Functions is read-only. We noticed that Chrome tries a lot to write to different places, and failing to do so results in slowness. We confirmed that enabling a writable filesystem improves performance. However, at this time, we are not planning to enable a writable filesystem on GCF apart from `/tmp`.
We asked the Chromium team for help to better understand how we could configure it to not try to write outside of `/tmp`; as of now, we are pending guidance.

@steren I assume you were the one who marketed this back in August with this blog post: https://cloud.google.com/blog/products/gcp/introducing-headless-chrome-support-in-cloud-functions-and-app-engine
Isn’t it a bit awkward to push a product to the masses without actually testing its performance, especially for a product (Cloud Functions) that people would like to use at scale?
Same here. We wanted to migrate from AWS Lambda to GCF because the underlying Linux distribution used by AWS Lambda is a pain to work with. We ran quite extensive stress tests on GCF and experienced extremely slow functions compared to AWS Lambda. It’s so much slower that it’s currently not possible for us to migrate, even though we would prefer to work with the underlying Linux distribution GCF uses.
Just wanted to add some details on how to run the code below locally (on Ubuntu, in my case) and on Firebase 👇
First, install Chromium with your usual package manager (e.g. `apt install chromium-browser -y`). Then check where it was installed with `whereis chromium-browser`; it should be something like `/usr/bin/chromium-browser`.
Create a `.runtimeconfig.json` in your `app_folder_repo/functions` directory like that one. Then, in your code, you can run:
Try it locally with `firebase emulators:start --only functions`. Deploy it on Firebase with `firebase deploy --only functions` 🚀
It should now work in both environments! 🎊
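The `.runtimeconfig.json` contents and the launch snippet aren't preserved above, but the idea is to store the local Chromium path in Firebase runtime config and fall back to Puppeteer's bundled browser when it is absent. A sketch (the `browser.path` config key is an illustrative assumption):

```javascript
// Sketch: resolve the Chromium binary from Firebase runtime config when
// running locally; returning undefined lets puppeteer.launch() fall back
// to its own bundled Chromium. The config shape here is an assumption.
function resolveExecutablePath(config) {
  if (config && config.browser && config.browser.path) {
    return config.browser.path; // e.g. /usr/bin/chromium-browser locally
  }
  return undefined;
}

// Usage inside a function (sketch):
//   const browser = await puppeteer.launch({
//     executablePath: resolveExecutablePath(functions.config()),
//     args: ['--no-sandbox'],
//   });
```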
Here are my benchmarks using Cloud Run, Cloud Functions and the Kubernetes/Any other server.
Cloud Run is 2x slower and Cloud Functions are 6-10x slower compared to a normal always-on server.
Tasks performed:
Benchmarks:
Kubernetes/Server
Mainly this means high availability and no cold starts. Though it defeats the purpose of serverless, the comparison is just to show how Cloud Functions fare against it.
Cloud Run
It’s slower, and understandably so. It also gives much more flexibility than Cloud Functions.
Cloud Functions
Never mind the cold start; it was extremely painful to watch. No matter what optimizations are applied, just opening the browser takes most of the time.
If anyone runs a test with chrome-aws-lambda, that would be nice.
Many thanks for the alternative, mate! I guess fixing a typo, `iltorb` (instead of `ilotorb`), may save some time for other folks.
https://github.com/GoogleChrome/puppeteer/issues/3120#issuecomment-450575911
I can confirm that this works.
Before using `chrome-aws-lambda`, my screenshots were rendered in about 12 seconds. Afterwards it went down to about 2 seconds. That’s about 500% faster!

@TimotheeJeannin I also ran the Chromium I compile for AWS with the exact same approach, paths and all. And, all things being equal, GCF is way slower. I don’t know why Google devs are trying to dismiss this as a disk I/O issue; if that were the case, the Sieve of Eratosthenes I shared before would have no justification for being so slow as well.
@eknkc Thanks for sharing your experiments.
Here are the options I tried too. None of them helped:
As a quick test, I switched the function memory allocation to 1GB from 2GB. Based on the pricing documentation, this moves the CPU allocation to 1.4 GHz from 2.4 GHz.
Using 1GB function, taking a simple screenshot on Cloud Functions takes about 8s! The time increase seems to be a direct function of the CPU allocation :x
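A back-of-the-envelope check supports this suspicion: the documented CPU drop from 2.4 GHz to 1.4 GHz predicts roughly the slowdown observed (figures taken from the comments above):

```javascript
// Rough check: does the observed slowdown track the CPU allocation?
const cpuRatio = 2.4 / 1.4;  // 2GB vs 1GB clock speed, per GCF pricing docs
const timeRatio = 8 / 5;     // ~8s vs ~5s screenshot times observed above

console.log(cpuRatio.toFixed(2));  // "1.71" — ~1.71x less CPU at 1GB
console.log(timeRatio.toFixed(2)); // "1.60" — 1.6x slower at 1GB
```

The two ratios are close, which is consistent with screenshot time being roughly inversely proportional to the CPU allocation.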
Maybe there is a magic option to get better timing and have Puppeteer really usable on production with Cloud Functions?
@ebidel Any updates on this problem?
We want to move our project from AWS Lambda to Google Cloud Functions. We actually completed the migration, but we are now waiting on this issue.
Any news on this? It would be nice if GCF could run Puppeteer correctly. I tried to launch Chrome with the `userDataDir: '/tmp'` option, but it doesn’t seem to have any effect on performance.

Thank you for the tip on speeding up Puppeteer on FCF.
Is there a way to test this function locally using `firebase serve --only functions` on a Mac? I am getting the following error:
which lists troubleshooting for Linux.
How are people on macOS testing this implementation?
@lpellegr Very nice to see this brought up.
I’ve been facing the same pain for a while but always thought it would be closed as “won’t fix”.
I have a quite extensive puppeteer setup on AWS Lambda and I’ve been playing around with running puppeteer on Firebase/Google Cloud Functions for a while, even before support for Node 8.10 was announced. You can check the hack I did back then here (unmaintained).
I run a proxied authentication service (a user logs in to my website, which in turn uses Puppeteer to check if they can authenticate with the same credentials on a third-party website), where the execution speed of Puppeteer directly affects the user experience. Nothing fancy like screenshots or PDFs, just a login flow.
Most of my architecture lives on Firebase, so it would be very convenient for me to run everything there, puppeteer included - this would help with the spaghetti-like fan-out architecture I’m forced to adopt due to Lambda limitations. However, the performance of GCF/FCF is so inferior compared to AWS Lambda that I cannot bring myself to make the switch.
Even after support for specifying closer regions and Node 8.10 was released on FCF, a 2GB Cloud Function is still less performant than a 1GB Lambda: ~4s vs 10+ seconds! And Lambda even has the handicap of having to decompress the Chromium binary (0.7 seconds, see `chrome-aws-lambda`).
And from my extensive testing I can tell this is not due to cold starts.
I suspect the problem lies more in the differences between how AWS and Google allocate CPU shares and bandwidth in proportion to the amount of RAM configured. I can’t be sure, obviously, but I read a blog post a few months ago (can no longer find it) with very comprehensive tests on the big three (AWS, Google, Azure) that seemed to reflect this suspicion: AWS is more “generous” in allocation.
Obviously, this doesn’t seem to be a problem with `puppeteer` itself, but since Google is trying hard to scale up its serverless game (and still playing catch-up, it seems) it would be awesome if you could nudge some colleague at Google to look into this @ebidel. My current AWS infrastructure relies on hundreds of lines of Ansible and Terraform code as well as a couple of Makefiles to keep everything together.
Switching to the no-frills approach of just writing triggers for Cloud Functions and listing dependencies (amazing work on this, BTW) would make my life a lot easier. If only the performance was (a lot) better…
Any updates on this from the Google Team or has anyone cracked this? I’m a first time user of puppeteer and trying to glue up puppeteer-core, puppeteer-extra, puppeteer-cluster, (and apparently now chrome-aws-lambda) in Firebase Functions and the performance is disappointing to say the least…
I can confirm significant improvements in Firebase Functions / GCF. Enough so that I’m now using it in several mission critical production workflows for several weeks now.
@steren if helpful for future launches, I’m grateful for the announcement with the known issues and the follow-up improvements. This allowed me to build based on the documentation and deploy based on the project requirements as improvements have been made (still some to go 😃).
I don’t think you need to defend the state at launch, especially given the open approach the team has taken to acknowledgement and improvements.
I can also confirm that using chrome-aws-lambda with puppeteer-core on firebase functions yields a significant speedup
I tested with puppeteersandbox (which is the one you have on AWS Lambda), and that reported around 1000 ms (endTime - startTime). A benchmark with ./curl-benchmark.py would be much nicer to look at 😄.
I will also mention that all of them were allocated 512 MB of RAM, and at most 250-280 MB was used. At first they were using less RAM, but usage started to increase on further deployments.
Here you go, the code. I removed as many things as I could to keep it simple.
index.js
package.json
Without functions-framework
Cloud Functions
On the previous benchmark, I was using functions-framework, which adds a small overhead for handling requests on port 8080.
Once again, here are the results,
The benchmark doesn’t change much even if you remove functions-framework; it gets about 2 seconds faster. However, this still does not justify the 4-second response, which is 4x the normal AWS response time.
Cloud Run
I removed functions-framework and added express, which has lower overhead. We could try vanilla JS as well.
Code:
Result:
@alixaxel I’m curious as to why `chrome-aws-lambda` is giving better results; are the Chrome binaries compiled differently from those that Puppeteer downloads? Does this performance increase only affect cold starts?

As I mentioned, we observed that the slowness with headless Chrome is different from traditional CPU/memory benchmarks.
I would be glad to invite you to the Alpha of serverless containers on the Cloud Function infrastructure so that you could perform more testing. Please fill in this form http://g.co/serverlesscontainers and mention “Headless Chrome” in the “use case” field. I should be able to invite you next week.
@steren AWS has the same limitation: you only get a fixed 500 MB on `/tmp`, regardless of how much memory you allocate to Lambda.
On the other hand, GCF/FCF’s `/tmp` is memory-mapped:
So even if GCF was running on HDDs and Lambda on SSDs, it still wouldn’t explain the huge discrepancies in performance we are seeing.
I’m experiencing the same, but on AWS Lambda, where requests are reaching the timeout while the same requests from my local machine are fine and within the expected time.
For macOS, visit chrome://version/ and see the `Executable Path` field to get the Chromium path. It should be something like `/Applications/Chromium.app/Contents/MacOS/Chromium`.
So this makes it work locally on a Mac:
`executablePath: '/Applications/Chromium.app/Contents/MacOS/Chromium'`
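Generalizing that tip, one could pick the executable path per platform so the same code runs locally on macOS or Linux and falls back to Puppeteer's bundled browser elsewhere. A sketch (the paths are the typical install locations mentioned in this thread; verify yours with `whereis` or chrome://version/):

```javascript
// Sketch: choose a Chromium binary per platform; returning undefined
// lets puppeteer.launch() fall back to its bundled browser.
function chromiumPathFor(platform) {
  switch (platform) {
    case 'darwin': // macOS, per the chrome://version/ tip above
      return '/Applications/Chromium.app/Contents/MacOS/Chromium';
    case 'linux':  // typical apt install location
      return '/usr/bin/chromium-browser';
    default:
      return undefined;
  }
}

// Usage (sketch):
//   puppeteer.launch({executablePath: chromiumPathFor(process.platform)});
```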
@Robula Besides shipping with fewer resources, chrome-aws-lambda is a headless-only build. That by itself should already explain some gains, but if you read the discussion above, making `/tmp` its home should also be beneficial in the GCF context. I’m just guessing here; I don’t have any concrete data to back it up.

@lpellegr `/proc/cpuinfo` always shows 4 CPUs on GCF, and `os.cpus()` always shows the 8 hyperthreads, regardless of “instance size”.
In situations where I launch a CPU/memory-intensive sub-process, I’ve gotten to a point where I can’t even kill the sub-process. Then my function eventually times out, the container is “suspended”, and when another request comes in, the container is “reused”, the old process is still running, and I still can’t kill it.
Executing the following on GCF:
Gives me:
/srv
So Puppeteer and its downloaded Chromium live in `/srv/node_modules`, and this is not a writable location.
+1 to investigating exactly where Chrome is trying to write.
That’s been my experience as well.
Capturing full page screenshots, on large viewports, at DPR > 1 is intensive. It appears to be especially bad on Linux: https://github.com/GoogleChrome/puppeteer/issues/736
I have added some probes to measure operation times with `console.time`.
Here are the results for a local invocation (served by `firebase serve`):
The same for an invocation on Cloud Functions:
If I compare both:
I can understand why the launch is slower on Cloud Functions, even after multiple runs, since the hardware is quite different from a mid-range desktop computer. However, what about the time differences for `newPage` and `goto`?
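The probe output itself isn't preserved above, but a generic helper in the spirit of those `console.time` probes might look like this (the `timed` name and the wrapped steps are illustrative):

```javascript
// Sketch: wrap each async step to log how long it takes, mirroring the
// console.time probes described above but returning the step's result.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(0)} ms`);
  return result;
}

// Usage against the handler above (sketch):
//   const browser = await timed('launch', () => puppeteer.launch(opts));
//   const page = await timed('newPage', () => browser.newPage());
//   await timed('goto', () => page.goto(url, {waitUntil: 'networkidle2'}));
```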