parse-server: FR: Record slow cloud functions / triggers and cloud function timeout

Is your feature request related to a problem? Please describe. Although analytics aren’t supported, it would be nice if we could track “inefficient” cloud functions and triggers, which could be displayed in the dashboard with a message on how to 🚀 functions and maximise efficiency.

Describe the solution you’d like It would be nice if we could specify a “timeout” value (e.g. 5000), so if a function runs for longer than this, it’s recorded as a slow function to be optimised, and a timeout error is returned to the client from the server.
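As a rough sketch, the proposed option could look like this at server initialisation. Note that `slowTracking` is a hypothetical option name (taken from the example below), not an existing Parse Server setting:

```javascript
// Hypothetical configuration sketch -- `slowTracking` is NOT an existing
// Parse Server option; it only illustrates the proposed API.
const { ParseServer } = require('parse-server');

const server = new ParseServer({
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  databaseURI: 'mongodb://localhost:27017/dev',
  cloud: './cloud/main.js',
  // Proposed: functions running longer than 3000 ms are recorded as slow,
  // and a timeout error (code 141) is returned to the client.
  slowTracking: { timeout: 3000 },
});
```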

It might also help to have some sort of timing trace provided, to help work out where the bottleneck is happening, so we can more easily isolate whether performance issues are related to Parse Server decoding / encoding etc., or to cloud code.

Describe alternatives you’ve considered Manually coding into cloud functions
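The manual alternative can be sketched as a small wrapper around a cloud function handler. The helper name `withTiming` and the threshold are illustrative, not part of the Parse SDK:

```javascript
// Minimal sketch of manually timing cloud functions.
// `withTiming` is a hypothetical helper, not a Parse SDK API.
function withTiming(name, thresholdMs, handler) {
  return async (request) => {
    const start = Date.now();
    try {
      return await handler(request);
    } finally {
      const elapsed = Date.now() - start;
      if (elapsed > thresholdMs) {
        // In real cloud code this might go to a logger or metrics store.
        console.warn(`Slow function "${name}": ${elapsed} ms`);
      }
    }
  };
}

// Usage in a cloud code file (sketch):
// Parse.Cloud.define('cloudFunction', withTiming('cloudFunction', 5000, async (req) => {
//   // ... function body ...
// }));
```

The drawback, of course, is that every function has to be wrapped by hand, which is exactly what built-in slow tracking would avoid.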

I’ve worked on a conceptual PR that tracks the timings of the function call, function decoding, the cloud validator, the cloud function, and response encoding. It’s not ready for merging yet; it’s more of a concept to help understand the idea behind this FR. Here is the link. This could easily be extended to triggers as well.

Here’s what a slow function looks like:

{
  params: {},
  error: { message: 'Script timed out.', code: 141 },
  events: [
    { name: '(Parse) function called', time: 0 },
    { name: '(Parse) function decoding', time: 0 },
    { name: 'cloud validator', time: 0 },
    { name: 'cloud function', time: 4794 },
    { name: '(Parse) response encoding', time: 0 }
  ],
  context: {},
  functionName: 'cloudFunction',
  timeTaken: 4794.519228000194,
  master: false
}

In this example, the cloud function takes nearly 5 s to complete, with a Parse Server timeout of 3 s (slowTracking.timeout: 3000). Judging by “events”, the bottleneck is coming entirely from the developer’s cloud function.

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Comments: 20 (18 by maintainers)

Most upvoted comments

Thanks for your thoughts @mtrezza. Quite the sales pitch for NewRelic 😅.

Perhaps this is a good opportunity to leverage growth in the best practices guides. If I, as someone who is rather experienced with Parse Server, struggle with this, I can only imagine how difficult it is for new users.

I don’t see the problem with extending additional analytics to Parse Server

Me too, but I think it could be better here to see it as a new interface of Parse Server:

  • Create a ParseMetricsController
  • Use the controller in RestWrite/RestQuery and trigger hooks (we have a lot of options here)
  • Create a simple ParseDashboardMetricsAdapter (or something else) that fits into the ParseMetricsController 😃

So developers could choose to use the simple one provided by Parse Server or implement a new one. Database, Cache, and Files work that way, and it works well. I’m strongly in favor of keeping Parse Server’s clean hexagonal architecture.

What do you think @dblythy? @mtrezza, can you maybe confirm that this is the right way?

A first simple POC could be to just add a ParseMetricsController to track rest writes in and out 😃
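The controller/adapter split described above could be sketched roughly like this. All class and method names here are hypothetical, following the existing adapter pattern (Database, Cache, Files) rather than any actual Parse Server API:

```javascript
// Hypothetical sketch of the proposed controller/adapter split.
// None of these classes exist in Parse Server today.

// Adapter interface: implementations decide where metrics go
// (dashboard, third-party APM, log files, ...).
class MetricsAdapter {
  record(event) {
    throw new Error('record() must be implemented by the adapter');
  }
}

// Trivial adapter that keeps metrics in memory, e.g. for the dashboard.
class ParseDashboardMetricsAdapter extends MetricsAdapter {
  constructor() {
    super();
    this.events = [];
  }
  record(event) {
    this.events.push(event);
  }
}

// Controller that would be called from RestWrite/RestQuery and trigger hooks.
class ParseMetricsController {
  constructor(adapter) {
    this.adapter = adapter;
  }
  track(name, timeTakenMs) {
    this.adapter.record({ name, timeTaken: timeTakenMs, at: Date.now() });
  }
}
```

A POC could then wire a `track('restWrite', elapsed)` call into RestWrite, and developers could swap in their own adapter, just as with the other Parse Server adapters.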

functions that take a long time to resolve would never show up in the “slow query” stats page as they were still resolving

Not sure what you are referring to; a function can only show up in the slow query report after it has resolved, or after the Node.js server times out.

Another aspect to consider is that monitoring which covers only Cloud Code functions captures only a fraction of Parse Server performance, or very little at all if Cloud Code functions are not used. To give a better picture of Parse Server, it may make sense to cover jobs, triggers (where a lot of magic often happens), and direct endpoint calls, or at least to consider the possibility of a future extension when deciding on a concept now. That may require rethinking the concept of monitoring. A solution that is conceptually restricted to Cloud Code functions may not be viable.