azure-functions-host: ServiceBusTrigger delayed for extended period of time before executing

Check for a solution in the Azure portal

Nothing found.

Investigative information

This is a duplicate of https://github.com/Azure/azure-functions-servicebus-extension/issues/116

The issue I am having is that my function with a ServiceBusTrigger goes dark for an extended period of time. In the linked bug report, there is a 12-hour gap of silence, after which it caught up with all messages. Then, after a short period of normal behavior, it went silent again. I was able to force it to trigger by initiating a new deployment via Azure DevOps.

Please provide the following:

The information provided here is not referring to a problem invocation. I’m hoping you can use this information to trace into the “dark” periods.

  • Timestamp: 10/13/2020, 19:36:13
  • Invocation ID: 2fb23b2e-8037-4e18-8538-be9c3efbc0b6
  • Region: East US

The periods of silence were (all in Central time)

  • 10/11 8:30am - 5:30pm
  • 10/11 8:30pm - 5:30am (next day)
  • 10/13 12:30pm - 2:15pm

Expected behavior

The ServiceBusTrigger should be firing automatically when new messages are available in the queue.

Actual behavior

The ServiceBusTrigger works for a period of time, then stops triggering automatically. The first noticed outage was 12 hours of silence, after which it fired on its own and caught up with all messages in the queue. Soon after, the message count rose again but the functions didn’t execute, and there were no Application Insights logs correlating with the rising queue count. At the time of this writing, the issue has just happened again: the function didn’t execute for around an hour, and triggering a new deployment via Azure DevOps caused it to fire.

Known workarounds

Triggering a new deployment via Azure DevOps causes the function to fire again.
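
A lighter-weight alternative to a full redeployment may be restarting the Function App itself, which also recycles the Service Bus listener. The sketch below uses the Azure CLI; the app and resource group names are placeholders, not values from this issue.

```shell
# Hedged sketch: a Function App restart also recycles the Service Bus
# listener, so it may serve as a lighter workaround than a redeploy.
# APP_NAME and RESOURCE_GROUP are placeholders for your own resources.
APP_NAME="my-function-app"
RESOURCE_GROUP="my-resource-group"

# The actual restart call (requires an authenticated Azure CLI session):
#   az functionapp restart --name "$APP_NAME" --resource-group "$RESOURCE_GROUP"
echo "az functionapp restart --name $APP_NAME --resource-group $RESOURCE_GROUP"
```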

Related information

Source
[FunctionName("NotificationProcessor")]
public async Task Run(
    [ServiceBusTrigger(ServiceBusNotificationsQueueName, Connection = ServiceBusConnectionName)] Message message,
    MessageReceiver messageReceiver,
    ILogger logger)
{
    try
    {
        // Read the message and process it; based on the result, complete or abandon.
        // bool success = DoWork(message);
        bool success = true; // placeholder for the real processing result

        if (success)
        {
            // Processing succeeded: remove the message from the queue.
            await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
        }
        else
        {
            // Processing failed: release the lock so the message is redelivered.
            // modifiedProperties (defined elsewhere) carries updated properties back.
            await messageReceiver.AbandonAsync(message.SystemProperties.LockToken, modifiedProperties);
        }
    }
    catch (Exception ex)
    {
        // Log first, so the error survives even if AbandonAsync itself fails
        // (e.g. the lock has already expired).
        logger.LogError(ex, $"NotificationProcessor Error processing {message.MessageId}");
        await messageReceiver.AbandonAsync(message.SystemProperties.LockToken);
        throw;
    }
}
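
Since the function completes and abandons messages manually through MessageReceiver, the Service Bus extension's autoComplete setting must be false, or the host will also try to settle the message. A hedged host.json sketch (values are illustrative, not taken from this issue):

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": false,
        "maxConcurrentCalls": 16,
        "maxAutoRenewDuration": "00:05:00"
      }
    }
  }
}
```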

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Comments: 21 (2 by maintainers)

Most upvoted comments

Thanks @cygnim - in my discussions with Azure support about this, we don’t believe that we’re hitting any kind of undocumented number of triggers that causes the functions to shut down - we do see a message that LOOKS like that’s what’s going on, but we’ve been assured this is not actually the cause of the issue.

It actually appears to be a kernel-level issue in the function app service.

While MS is still working on a solution, we’ve been advised that the function app team is working on improvements to service monitoring that will restart the connection to the Service Bus if the connection drops and message processing halts.

@francoishill, I can confirm that we have this issue with the Consumption plan even when using a ServiceBus connection string that has the “Manage” policy, so it’s not just an issue with using the “Listen” policy - there are other factors at play.