nodejs-pubsub: Google Cloud Pub/Sub shows high latency at low message throughput

I’m running a project which publishes messages to a Pub/Sub topic and triggers a background Cloud Function.

I’ve read that Pub/Sub performs well at high message volumes, but at lower rates (hundreds or even tens of messages per second) it may yield high latencies.

Code example to publish message:

const {PubSub} = require('@google-cloud/pubsub');

const pubSubClient = new PubSub();

async function publishMessage(data) {
  const topicName = 'my-topic';
  const dataBuffer = Buffer.from(data);

  const messageId = await pubSubClient.topic(topicName).publish(dataBuffer);
  console.log(`Message ${messageId} published.`);
}

publishMessage('Hello, world!').catch(console.error);

Code example of function triggered by PubSub:

exports.subscribe = async (message) => {
  const name = message.data
    ? Buffer.from(message.data, 'base64').toString()
    : 'World';

  console.log(`Hello, ${name}!`);
};

Environment details

  • OS: Windows 10
  • Node.js version: 8
  • @google-cloud/pubsub version: 1.6

The problem is that at low message throughput (for example, one message per second) Pub/Sub sometimes struggles and shows very high latency (7–9 s or more).
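One way to narrow down where the delay happens is to embed the publish time in the message attributes and compute the delta inside the function. A minimal sketch; the `publishedAt` attribute name is my own convention, not a Pub/Sub field:

```javascript
// Publisher side: attach a wall-clock timestamp as a message attribute.
// The attribute name `publishedAt` is an arbitrary convention, not a Pub/Sub field.
function buildTimestampedMessage(payload) {
  return {
    data: Buffer.from(JSON.stringify(payload)),
    attributes: { publishedAt: String(Date.now()) },
  };
}

// Subscriber side: end-to-end latency in milliseconds.
function endToEndLatencyMs(message, now = Date.now()) {
  return now - Number(message.attributes.publishedAt);
}
```

In this library version attributes can be passed as the second argument to `topic.publish(data, attributes)`, and logging `endToEndLatencyMs(message)` in the function shows whether the delay is on the publish side or in delivery.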

Is there a way or workaround to make Pub/Sub perform consistently (a delay of 50 ms or less) even with a small number of incoming messages?
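One workaround I’ve seen suggested (I can’t confirm it eliminates the spikes) is tightening the publisher’s batching settings, so a message is flushed immediately instead of waiting for a batch to fill:

```javascript
// Publish options that trade throughput for latency; pass them when
// constructing the topic object (assumes @google-cloud/pubsub v1.x).
const lowLatencyPublishOptions = {
  batching: {
    maxMessages: 1,      // flush as soon as one message is queued
    maxMilliseconds: 10, // never hold a message longer than 10 ms
  },
};

// Usage:
// const {PubSub} = require('@google-cloud/pubsub');
// const topic = new PubSub().topic('my-topic', lowLatencyPublishOptions);
// await topic.publish(Buffer.from('hello'));
```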

Thanks!

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 3
  • Comments: 31 (4 by maintainers)

Most upvoted comments

I’m seeing a similar issue over here. My setup is similar to the OP’s: a Firebase Cloud Function publishes individual Pub/Sub topic messages. I’m seeing latency in minutes rather than seconds.

Interestingly, the topic().publish() call seems to be taking a long time, based on the logs.
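A small timing helper can confirm where the time goes (the helper name is hypothetical, not from the library):

```javascript
// Time an async operation and log how long it took.
async function timeAsync(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label} took ${elapsedMs.toFixed(1)} ms`);
  return result;
}

// Usage against the publisher, e.g.:
// const id = await timeAsync('publish', () => topic.publish(dataBuffer));
```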

I’ve tried setting maxMessages, but that didn’t seem to help:

const topic = pubSubClient.topic('my-topic', {
  batching: {
    maxMessages: 1,
  },
});

Is there anything else I can try, or information I can share, that would help debug this?

One thing that appeared to have also occurred at this time was messages being published multiple times (we’re using an orderingKey).

Doesn’t that show that it’s important to choose good idempotency keys and not follow the suggestions in the documentation? Or is there something I’m missing?
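Since Pub/Sub delivery is at-least-once, handlers have to tolerate duplicates regardless. A minimal in-memory dedup sketch; this is only valid within a single function instance, and a shared store such as Firestore or Redis would be needed across instances:

```javascript
// Naive duplicate suppression keyed by Pub/Sub message id.
// Works only within one process; across instances you'd need a shared store.
const seenMessageIds = new Set();

function isDuplicate(messageId) {
  if (seenMessageIds.has(messageId)) return true;
  seenMessageIds.add(messageId);
  return false;
}
```

In a background function, `context.eventId` (or the message id) can serve as the key.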

@ChrisWestcottUK The release notes for the server only track significant changes to the API, region availability, or feature availability, not every server-side change that is made. There is no public tracking for all such changes. If an issue affects your projects, it is best to file a support request so support engineers can follow up. This particular incident should now be resolved.