firebase-android-sdk: Firestore query failed with code=RESOURCE_EXHAUSTED, description=Quota exceeded.

Describe your environment

  • Android Studio version: Arctic Fox 2020.3.1 Patch 3
  • Firebase Component: Firestore
  • Component version:
implementation platform('com.google.firebase:firebase-bom:28.4.1')
implementation 'com.google.firebase:firebase-firestore-ktx'

Describe the problem

My app queries Firestore, and when the result contains more than 2000 items, the query always fails with: [WatchStream]: (c98c54b) Stream closed with status: Status{code=RESOURCE_EXHAUSTED, description=Quota exceeded., cause=null}. The detailed error stacktrace can be downloaded here.

Note:

  • This problem only happens on Android SDK. I am able to query on iOS and Web without a problem.
  • This problem happens when each row contains a quite significant amount of data. In my test, the final result contains around 5000 items with a total size of about 120 MB
  • I am on Blaze plan
  • I have waited for more than 30 minutes before executing a new query

Steps to reproduce:

Simply execute a query. The query will fail for users with a lot of data.

Relevant Code:

import android.util.Log
import com.google.firebase.firestore.FirebaseFirestore
import com.google.firebase.firestore.Source

// userId is passed in as a parameter so the snippet compiles standalone
fun getAll(userId: String) {
    val ref = FirebaseFirestore.getInstance()
        .collection("users")
        .document(userId)
        .collection("notes")
        .limit(2500)

    // This query always fails when the limit is >= 2500
    ref.get(Source.SERVER)
        .addOnSuccessListener { results ->
            Log.d("Test", "results size: ${results.size()}")
        }
        .addOnFailureListener {
            it.printStackTrace()
            Log.d("Test", "exception: ${it.localizedMessage}")
        }
}

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 48 (7 by maintainers)

Most upvoted comments

@wu-hui I ended up disabling the cache instead of increasing the cache size, and this issue does not happen anymore. I think this can be closed; those who see this problem can just disable the cache.
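For anyone looking for the concrete call, here is a minimal sketch of disabling the local cache with the SDK versions discussed in this thread. It goes through FirebaseFirestoreSettings; note that newer SDK releases deprecate setPersistenceEnabled in favor of setLocalCacheSettings.

import com.google.firebase.firestore.FirebaseFirestore
import com.google.firebase.firestore.FirebaseFirestoreSettings

// Must run before any other Firestore call in the process;
// changing settings after Firestore has started throws IllegalStateException.
fun disableFirestoreCache() {
    val settings = FirebaseFirestoreSettings.Builder()
        .setPersistenceEnabled(false) // no local SQLite cache at all
        .build()
    FirebaseFirestore.getInstance().firestoreSettings = settings
}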

@thomasdao Yes, after looking at Francisco’s report plus the absence of backend logs, it’s clear that the error is thrown by SQLite after the cache is full. To work around this, you can either clear the cache directory with Android utility methods or increase the size of the cache via Firestore settings.

Internally, we’ll look at improving documentation around this error case and improving the error messages. I’m going to close the issue, but feel free to ask any other questions you may have!
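The second workaround (raising the cache limit) is a one-line settings change. A sketch, assuming the default 100 MB cache cap is what the large result set is overflowing:

import com.google.firebase.firestore.FirebaseFirestore
import com.google.firebase.firestore.FirebaseFirestoreSettings

// Lift the cache cap entirely; any byte count of at least 1 MB
// also works if unlimited feels too aggressive.
fun raiseFirestoreCacheLimit() {
    val settings = FirebaseFirestoreSettings.Builder()
        .setCacheSizeBytes(FirebaseFirestoreSettings.CACHE_SIZE_UNLIMITED)
        .build()
    FirebaseFirestore.getInstance().firestoreSettings = settings
}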

We have encountered the same issue.

For us, it’s reproduced most of the time when we run 5 concurrent queries, each expecting 1,000-1,500 documents. We use the Firestore JavaScript SDK in a web app.

(A screenshot and a video were attached for reference.)

Please note that it doesn’t happen if we run a single query to retrieve 10K documents.

Other observations:

  1. Once this error appears, the snapshot handler is invoked with 0 docs.
  2. The Firestore JavaScript library seems to retry automatically, but only after a delay of a minute or more, and in most cases the retry also fails.
  3. It may succeed after 2-3 such retry attempts, each with a delay of 1 to 2 minutes, so the query response arrives after ~10 minutes.

We are on the Blaze plan. I didn’t find any documentation about this error: why does it appear, and what are the standard practices to avoid it?

Sorry for the delayed response.

Cause

There appears to be an internal issue in Firestore’s architecture: the server-side query pipeline seems to have some limit on response size. Whenever the query response is larger than some threshold (possibly as little as 1 MB), this issue reproduces.

For us, it reproduced even with just 1 or 2 queries, so it’s not about sending too many queries in parallel. We are sure it’s related only to the query result.

If it’s not the result size, it might be the query execution time: when a large query takes longer than some number of seconds on the server, the pipeline apparently times out internally and the Firestore client receives a RESOURCE_EXHAUSTED response.

Fix

The fix we applied was to avoid triggering large queries.

It required many updates to our UI app’s architecture. Our fixes were as follows:

  1. We split large queries into multiple smaller queries. For example, we were loading all the cards on a Kerika board through a single query, and as a result it was failing for larger boards. From a data-characteristics point of view, even larger boards won’t have many cards in columns other than DONE and TRASH, so we split this query into three: a. one query to load the cards of all columns other than DONE and TRASH, and b. one query each for the DONE and TRASH columns. (A sketch of all three fixes follows this list.)

  2. Apply pagination: the DONE and TRASH columns can hold 1,000 cards, but users mostly need to see the latest 20-25. So initially we load only 100 cards; the remaining cards are loaded only when the user scrolls down into those columns.

  3. Reduce the result set: a Kerika board needs to show unread highlights on the cards. We were managing one document per card per user, and this document existed even for cards where the user had read everything, so the query was loading extra documents. We added a Boolean field anyUnread to this document and updated the query to include only documents where anyUnread=true, which reduced the query’s result set drastically.
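
For concreteness, here is a rough sketch of what those three fixes could look like. It is written in Kotlin to match the code earlier in this thread (the commenter used the JavaScript SDK, which has equivalent query operators). The boards/cards/highlights collection names and the column/updatedAt fields are made-up placeholders; anyUnread is the field described in item 3.

import com.google.firebase.firestore.DocumentSnapshot
import com.google.firebase.firestore.FirebaseFirestore
import com.google.firebase.firestore.Query

private val db = FirebaseFirestore.getInstance()

// (1) Split: the active columns stay small, so load them in one query,
//     separate from the potentially huge DONE and TRASH columns.
fun activeCardsQuery(boardId: String): Query =
    db.collection("boards").document(boardId).collection("cards")
        .whereNotIn("column", listOf("DONE", "TRASH"))

// (2) Paginate: fetch a DONE/TRASH column 100 cards at a time, newest
//     first; pass the last snapshot of a page to fetch the next page.
fun pagedColumnQuery(boardId: String, column: String, after: DocumentSnapshot?): Query {
    var q = db.collection("boards").document(boardId).collection("cards")
        .whereEqualTo("column", column)
        .orderBy("updatedAt", Query.Direction.DESCENDING)
        .limit(100)
    if (after != null) q = q.startAfter(after)
    return q
}

// (3) Reduce: only fetch highlight documents that have unread content.
fun unreadHighlightsQuery(boardId: String): Query =
    db.collection("boards").document(boardId).collection("highlights")
        .whereEqualTo("anyUnread", true)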

These were just examples; we needed to apply similar updates in many other places.

It was a really big pain point. As our application was completely developed and in the final testing phase, we had no option other than fixing this on our side. If we had found this limitation in the early architecture phase, we might not have used Firestore.

What is more disappointing is that we did not get a proper response (not even an acknowledgement) of the issue from the Firestore team.

@gilamran I hope this helps you and others.

Hi! We have been facing similar issues for a few days and are in contact with support, but no real solution is available yet.

Disabling the cache isn’t a fix. Please reopen.

@bacarPereira that’s strange, I thought the 20,000 writes per day quota applies only to the free plan, not the Blaze plan?

That’s right, I meant the Spark plan. I will edit. Thanks.

@thebrianchen can you reopen this issue? This is a serious bug, and I don’t understand why you are so eager to close it without even investigating it.