firebase-admin-node: Firestore Realtime listener drops a connection and does not reconnect
- Operating System version: Node.js (Alpine-based) Docker container, node:8.7.0-alpine
- Firebase SDK version: 5.4.2
- Library version: @google-cloud/firestore 0.8.2
- Firebase Product: Firestore
We have a simple Firestore realtime listener running in a Docker container on Google Container Engine. A few times, the listener has lost its connection for some reason and does not reconnect. Writes (sets, updates, and deletes) still work, but we don't receive update notifications until we restart the container.
This is the error message we have received a couple of times during the last week:
Error: Error: Endpoint read failed
at sendError (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:254:15)
at maybeReopenStream (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:268:9)
at BunWrapper.currentStream.on.err (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:301:13)
at emitOne (events.js:120:20)
at BunWrapper.emit (events.js:210:7)
at StreamProxy.<anonymous> (/usr/src/app/node_modules/bun/lib/bun.js:31:21)
at emitOne (events.js:120:20)
at StreamProxy.emit (events.js:210:7)
at ClientDuplexStream.<anonymous> (/usr/src/app/node_modules/google-gax/lib/streaming.js:130:17)
at emitOne (events.js:115:13)
Once (the first time) we got this instead:
Error: Error: Transport closed
at sendError (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:254:15)
at maybeReopenStream (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:268:9)
at BunWrapper.currentStream.on.err (/usr/src/app/node_modules/@google-cloud/firestore/src/watch.js:301:13)
at emitOne (events.js:120:20)
at BunWrapper.emit (events.js:210:7)
at StreamProxy.<anonymous> (/usr/src/app/node_modules/bun/lib/bun.js:31:21)
at emitOne (events.js:120:20)
at StreamProxy.emit (events.js:210:7)
at ClientDuplexStream.<anonymous> (/usr/src/app/node_modules/google-gax/lib/streaming.js:130:17)
at emitOne (events.js:115:13)
Our code is basically like this:
var admin = require("firebase-admin");
var serviceAccount = require("./credentials.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://xxxxxxxx.firebaseio.com"
});

var db = admin.firestore();

db.collection("demodata").onSnapshot(querySnapshot => {
  querySnapshot.forEach(doc => {
    // .. doing stuff with data
    console.log(doc.data());
  });
});
Is the listener supposed to survive these situations, or should we build some kind of retry system ourselves?
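One fallback we are considering is to re-attach the listener from the error callback of onSnapshot. A minimal sketch, assuming the second argument to onSnapshot is an error handler and that a fixed delay is acceptable; listenWithRetry and retryDelayMs below are our own illustrative names, not part of the SDK:

// Re-attach the listener after a fixed delay whenever it reports an error.
function listenWithRetry(query, onNext, retryDelayMs) {
  var unsubscribe = query.onSnapshot(onNext, err => {
    console.error("Listener failed, re-attaching in " + retryDelayMs + " ms:", err);
    unsubscribe(); // drop the broken listener before attaching a fresh one
    setTimeout(() => {
      listenWithRetry(query, onNext, retryDelayMs);
    }, retryDelayMs);
  });
}

listenWithRetry(db.collection("demodata"), querySnapshot => {
  querySnapshot.forEach(doc => console.log(doc.data()));
}, 5000);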
Thanks!
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 17 (6 by maintainers)
These log statements are very helpful. Our retry logic considers these streams healthy once it is able to send out a network packet ("Marking stream as healthy"). Unfortunately, just because the TCP layer accepts our packets doesn't necessarily mean that the outbound network link is active. We may have to retry more aggressively. I will kick off an internal discussion.
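Until then, a possible application-level workaround is a heartbeat-based liveness check: write a heartbeat document on a fixed interval, so a healthy listener must deliver at least one snapshot per interval, and recycle the listener (or the process) if it stays silent. This is only a sketch under our own assumptions; HEARTBEAT_WINDOW_MS, the "heartbeat" document, and exiting so the container is restarted are not something the SDK provides:

// Heartbeat-based liveness check (illustrative, reuses db from the snippet above).
var HEARTBEAT_WINDOW_MS = 60 * 1000;
var lastSnapshotAt = Date.now();

var unsubscribe = db.collection("demodata").onSnapshot(querySnapshot => {
  lastSnapshotAt = Date.now(); // any snapshot counts as proof of life
  querySnapshot.forEach(doc => console.log(doc.data()));
});

setInterval(() => {
  // This write guarantees there is something for a healthy listener to report.
  db.collection("demodata").doc("heartbeat").set({ ts: Date.now() });
  if (Date.now() - lastSnapshotAt > 2 * HEARTBEAT_WINDOW_MS) {
    console.error("No snapshots despite heartbeat writes; recycling the listener");
    unsubscribe();
    process.exit(1); // let the container orchestrator restart the process
  }
}, HEARTBEAT_WINDOW_MS);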
We are targeting a release next week.