firebase-admin-node: Deadline Exceeded error!

[REQUIRED] Step 2: Describe your environment

  • Operating System version: Ubuntu 16
  • Firebase SDK version: v6
  • Firebase Product: Firestore

[REQUIRED] Step 3: Describe the problem

I am doing a basic add to Firestore through the Node Admin SDK. I add an object to a collection and wait for the document ID to return; after that I send it to the front end to be subscribed for realtime updates. But sometimes I receive a Deadline Exceeded error while adding a document, and also while updating.

Steps to reproduce:

The code that I have added to my backend.

let ref = db.collection("orders");

// ref.add() already returns a promise, so there is no need to wrap it
// in a new Promise; the resulting DocumentReference is passed through.
return ref.add(order)
    .then((docRef) => {
        logger.info(docRef);
        return docRef;
    })
    .catch((err) => {
        logger.info(err);
        throw err;
    });

That is where I receive the errors: in the catch handler above.

I also checked my account and don’t see any high usage, since the project is still in the development phase; we add hardly 10-20 docs on a normal day.

Can you please help here? Thanks.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 7
  • Comments: 86 (18 by maintainers)

Most upvoted comments

We are in 2020 and I’m still getting the same error. I’m transferring a lot of data to Cloud Firestore. It works at the beginning, then I start getting DEADLINE_EXCEEDED errors on some of the items being added.

Same here, August 2020, and I’m getting the DEADLINE_EXCEEDED error.

We currently configure a 60 second deadline for the RunQuery call (which both Query.get() and Query.stream() use). We believe that this should fit most use cases, and would like to refrain from increasing this deadline. Having a shorter deadline limits the amount of work the backend does for clients that are no longer interested in a Query’s result, which generally results in fewer billed reads.

While somewhat convoluted, you can configure a different deadline for your client:

const {Firestore} = require('@google-cloud/firestore');

const firestore = new Firestore({
  clientConfig: {
    interfaces: {
      'google.firestore.v1.Firestore': {
        methods: {
          RunQuery: {
            timeout_millis: YOUR_NUMBER_HERE
          }
        }
      }
    }
  }
});

Stopped using Firebase a while ago, and this bug is one of the main reasons (there were a few others). We had a really rough week in production where we had to re-architect a bunch of backend stuff as quickly as possible due to this issue, and it’s still not resolved.

Very interesting product, otherwise.

@pkwiatkowski0, I made this error go away.

After a lot of testing and educated guesses about the code, here is what made the error go away:

When the Cloud Function (CF) is triggered, we were making a few async calls to perform some business operations. In one of the asynchronous calls we had to fetch a few hundred records, and that’s where we always got DEADLINE_EXCEEDED errors. After putting some console.time() around the document fetch, we saw that this block of code was taking hundreds of seconds to fetch a mere 200 records from the collection, even though the function’s total execution time was <100ms. Strange. Strange. Strange…

We made the calls sequential by using async/await and waited for the entire execution to complete before exiting the function. This greatly reduced the time it took to fetch 200 records, from hundreds of seconds to <1000ms. Since then we have not seen DEADLINE_EXCEEDED anymore and the code is stable. Hope this helps.

I guess the fire-and-forget nature of the code blocks in the CF puts the entire function into some throttled mode and degrades its performance.
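A minimal sketch of the difference (the trigger path and the fetchRelatedRecords/processRecords helpers are hypothetical, not from the code discussed above):

const functions = require('firebase-functions');

exports.onOrderCreated = functions.firestore
  .document('orders/{orderId}')
  .onCreate(async (snap, context) => {
    // Fire-and-forget (problematic): the function can return before the
    // query finishes; the runtime then throttles the instance, the pending
    // fetch crawls, and it eventually fails with DEADLINE_EXCEEDED.
    // fetchRelatedRecords(snap.data()).then(processRecords); // hypothetical

    // Awaited (the fix): keep the function alive until all work completes.
    const records = await fetchRelatedRecords(snap.data()); // hypothetical
    await processRecords(records);                          // hypothetical
  });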

So why is this happening? What is the limit being exceeded here? We are on the Blaze plan, so this should not happen; Firebase should instead let us query as much as we want… Kindly specify the technical issue that needs to be overcome to resolve this.

@CapitanRedBeard & @farhankhwaja, while I don’t know if this will help you, I was running into the same Deadline Exceeded errors with fewer than 2k records! I have implemented this in all of my cloud functions; hopefully it will help you too:

/**
 * Set timeout and memory allocation.
 * In some cases, your functions may have special requirements for a long
 * timeout value or a large allocation of memory. You can set these values
 * either in the Google Cloud Console or in the function source code
 * (Firebase only).
 */
const runtimeOpts = {
  timeoutSeconds: 540, // 9 minutes
  memory: '2GB'
}

/**
 * When a csv file is uploaded in the Storage bucket it is parsed
 * automatically using PapaParse.
 */
exports.covertToJSON = functions
  .runWith(runtimeOpts) // <== apply timeout and memory limit here!
  .storage
  .object()
  .onFinalize((object) => {
[...]

Sadly I can’t remember where I found that snippet, but as I said, every little bit helps!

Jon

My workaround is to find a page size that doesn’t hit the DEADLINE_EXCEEDED error, like 10k, then just loop in 10k increments and read from the database until you have read all your docs.

You can do something like this:

  const async = require('async');

  const FOO_READ_LIMIT = 10000;
  const foosRef = firestore.collection('foo');

  let isCalculating = true;
  let lastDoc = null;

  async.whilst(
    // Keep looping until we see a page smaller than the limit.
    (cb) => cb(null, isCalculating),
    async () => {
      // Page through the collection FOO_READ_LIMIT docs at a time,
      // resuming after the last document of the previous page.
      const fooQuery = foosRef.limit(FOO_READ_LIMIT);
      const pagedFooQuery = lastDoc ? fooQuery.startAfter(lastDoc) : fooQuery;

      const fooSnapshots = await pagedFooQuery.get();
      const size = fooSnapshots.size;

      await doStuff(fooSnapshots);

      if (size < FOO_READ_LIMIT) {
        // A short (or empty) page means everything has been read.
        isCalculating = false;
      } else {
        lastDoc = fooSnapshots.docs[size - 1];
      }
    },
    (error) => {
      if (error) {
        // Something went wrong
        console.error(error);
      } else {
        console.log('All data is read');
      }
    }
  );
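Note that startAfter(lastDoc) resumes based on the query’s ordering; with no explicit orderBy(), Firestore orders by document ID, and every page must use the same ordering for the cursor to stay valid.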

July 2023 same issue

We are preparing a release of @google-cloud/firestore v2.3.0 that will increase the timeout for all of our operations to 60s. This applies to all operations issued by the SDK, and is independent of the timeout set by any runtime environment (such as GCF). We hope that this alleviates some of the issues you are encountering.

To see what the client is doing under the hood, you can enable Firestore’s debug logging:

https://cloud.google.com/nodejs/docs/reference/firestore/0.8.x/Firestore#.setLogFunction

When using the Admin SDK:

admin.firestore.setLogFunction((log) => {
  console.log(log); // forward the client's internal debug logs
});

SAME ISSUE TODAY

I had the same error where I was only reading and writing small numbers of records (not even a few hundred) per Firestore get()/set(). I discovered that the issue for me was too many operations happening in parallel, as I was performing these operations nested within a for loop. Increasing timeout and memory did not help.

To resolve the error (similar to @ashking’s resolution) I updated my code to handle the Firestore database operation callbacks using async/await, effectively removing any parallel Firestore operations. This is working really well with no more errors, and things happen quickly, which is good, as I was concerned I would hit the timeout limit if things happened one at a time. If anything, it seems faster.
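In rough form, the change looked like this (a minimal sketch; the collection and item names are illustrative, not from the actual code):

// Before: every set() inside the loop is kicked off at once, so hundreds
// of writes race in parallel and some eventually hit DEADLINE_EXCEEDED.
// items.forEach((item) => db.collection('items').doc(item.id).set(item));

// After: await each write so the operations run one at a time.
async function writeSequentially(db, items) {
  for (const item of items) {
    await db.collection('items').doc(item.id).set(item);
  }
}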

Hope this helps anyone stuck on a similar issue.

I would like to clarify something that I personally wasn’t aware of: increasing the request timeout well past 60 seconds will actually not have the desired effect. 60 seconds is currently the maximum timeout supported by the backend, so the only other solution is to reduce your response size.
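For example, a projection and a page limit keep each response small (a sketch, assuming a firestore client instance; the collection and field names are illustrative):

// Fetch only the fields you need instead of entire documents, and cap the
// page size, so each RunQuery response finishes well within the 60s
// server-side deadline.
const snapshot = await firestore.collection('orders')
  .select('status', 'total') // projection: skip heavy fields
  .limit(1000)               // page through large result sets
  .get();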

We are tracking this internally via bug ID 139021175.

OK, if this does come back, let us know.