nodejs-datastore: Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted

We get this error often:

Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
    at Object.callErrorFromStatus (/root/repo/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
    at Http2CallStream.<anonymous> (/root/repo/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
    at Http2CallStream.emit (events.js:215:7)
    at Http2CallStream.EventEmitter.emit (domain.js:476:20)
    at /root/repo/node_modules/@grpc/grpc-js/build/src/call-stream.js:75:22
    at processTicksAndRejections (internal/process/task_queues.js:75:11) {
  code: 8,
  details: 'Bandwidth exhausted',
  metadata: Metadata { internalRepr: Map {}, options: {} },
  note: 'Exception occurred in retry method that was not classified as transient'
}

This is fully described in the grpc-node repo: https://github.com/grpc/grpc-node/issues/1158

Cross-posting here for visibility.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 22 (9 by maintainers)

Most upvoted comments

@bcoe do you think this is something we can improve with better logging out of GAX? As a user, the hard part seems to be that the error doesn't indicate where resources were exhausted.

To unblock yourself as a user, the best option when this happens is likely to retry with exponential backoff, as in the sketch below.
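
For reference, a minimal sketch of what that retry could look like. The withBackoff helper and the maxAttempts/baseDelayMs values are illustrative, not part of the library; the only library-specific fact it relies on is that the error carries gRPC code 8 (RESOURCE_EXHAUSTED), as shown in the trace above.

const {Datastore} = require('@google-cloud/datastore');

const datastore = new Datastore();

// Retry an async operation on RESOURCE_EXHAUSTED (gRPC code 8) with
// exponential backoff. maxAttempts and baseDelayMs are illustrative values.
async function withBackoff(operation, maxAttempts = 5, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (err.code !== 8 || attempt === maxAttempts - 1) {
        throw err; // not RESOURCE_EXHAUSTED, or out of attempts
      }
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

// Example usage: wrap a Datastore query.
const query = datastore.createQuery('Task').limit(10);
withBackoff(() => datastore.runQuery(query))
  .then(([entities]) => console.log(entities.length))
  .catch(console.error);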

@bcoe Looking back through this, it seems the stack trace points into GAX. But this is also an older issue. I am leaning towards closing this and reopening it if it comes up again.

We are also experiencing 8 RESOURCE_EXHAUSTED: Bandwidth exhausted errors.

This issue is interesting: https://github.com/googleapis/nodejs-firestore/issues/765. It is the same error, and it traces back to grpc-js relying on the Node.js HTTP/2 implementation, which has an open issue that could address this once it is fixed. Switching to the native grpc package is the best workaround for now. @murgatroid99 – does that sound right?
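
For anyone looking for that workaround in code form, here is a minimal sketch. It assumes your client library version still accepts the deprecated native grpc package and forwards a grpc option through to google-gax, which was the commonly suggested approach at the time; check your versions before relying on it.

const {Datastore} = require('@google-cloud/datastore');
const grpc = require('grpc'); // native C-core binding, instead of @grpc/grpc-js

// Passing the native grpc implementation into the client avoids the
// Node.js HTTP/2 code path used by @grpc/grpc-js.
const datastore = new Datastore({grpc});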

So, native grpc has been running in production for 2 days and we have seen 0 errors from grpc, and none from our cron jobs either. That is good for our production service, but it also shows that @grpc/grpc-js is not working as expected; the bug still exists there.