azure-functions-durable-extension: Function App instance with a single Durable Entity in it produced tons of 'EventRaised' records in its *History table
My test instance of this code went into a weird state.
There is a single SftpToStorageQueueWatcherEntity entity in it, which is periodically signalled by a TimerTrigger.
According to the logs, the TimerTrigger always succeeds (i.e. it successfully sends a signal to the entity), yet the entity never receives that signal and its state is never updated.
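For context, this is roughly the pattern involved. It is a minimal sketch, not the actual code from the linked repo: the function names, timer schedule and entity state shape are assumptions, while the entity name and the 'Watch' operation come from the logs below.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class WatcherTimer
{
    // Hypothetical timer function: periodically signals the entity's "Watch" operation.
    [FunctionName("WatcherTimer")]
    public static async Task Run(
        [TimerTrigger("0 */1 * * * *")] TimerInfo timer,
        [DurableClient] IDurableEntityClient client,
        ILogger log)
    {
        var entityId = new EntityId(
            "SftpToStorageQueueWatcherEntity",
            "bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt");

        // Fire-and-forget signal; "the TimerTrigger always succeeds" means this call succeeds.
        await client.SignalEntityAsync(entityId, "Watch",
            "bfkrmsiwbkubo.northeurope.azurecontainer.io/*.txt");

        log.LogInformation("Signalled {entityId}", entityId);
    }
}

[JsonObject(MemberSerialization.OptIn)]
public class SftpToStorageQueueWatcherEntity
{
    // Illustrative state only; the real entity tracks which SFTP files were already seen.
    [JsonProperty("lastRun")]
    public DateTimeOffset LastRun { get; set; }

    // The "Watch" operation that the timer signals.
    public Task Watch(string folderAndMask)
    {
        LastRun = DateTimeOffset.UtcNow;
        // ... list the SFTP folder and push new file names to a Storage queue ...
        return Task.CompletedTask;
    }

    [FunctionName(nameof(SftpToStorageQueueWatcherEntity))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<SftpToStorageQueueWatcherEntity>();
}
```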
The underlying *History table now contains more than 300K records like the following one, and they keep coming:
@sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt,0000000000050088,2021-05-06T22:08:09.076Z,-1,Edm.Int32,EventRaised,Edm.String,1552010a9ddc4f02954f3b3be4802126,Edm.String,false,Edm.Boolean,2021-05-06T20:47:25.005Z,Edm.DateTime,"{"op":"Watch","signal":true,"input":"\"bfkrmsiwbkubo.northeurope.azurecontainer.io/*.txt\"","id":"0226d4b7-9ec2-41e1-aab9-9bea19b70326","Timestamp":"0001-01-01T00:00:00","Predecessor":"0001-01-01T00:00:00"}",Edm.String,op,Edm.String,
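The runaway rows can also be counted directly against the History table, without the portal. Here is a minimal sketch, assuming the Azure.Data.Tables package and the default '<TaskHubName>History' table name (the hub name is taken from the HubName field in the traces below); the connection string is a placeholder:

```csharp
using System;
using Azure.Data.Tables;

// Placeholder connection string; the table name follows the "<TaskHubName>History" convention.
var tableClient = new TableClient(
    "<storage-connection-string>",
    "sftpw2mkeaxozmpyvmfunctionHistory");

// The partition key of a History row is the orchestration/entity instance ID.
string instanceId =
    "@sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt";

int eventRaisedCount = 0;
foreach (TableEntity row in tableClient.Query<TableEntity>(
    filter: $"PartitionKey eq '{instanceId}' and EventType eq 'EventRaised'"))
{
    eventRaisedCount++;
}

Console.WriteLine($"EventRaised rows for {instanceId}: {eventRaisedCount}");
```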
I can also see lots of odd log entries like these in the AppInsights 'traces' table:
@sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt: Function 'sftptostoragequeuewatcherentity (Entity)' completed 'Watch' operation 8a49fc8c-ef15-42ec-a598-5350ed7fab0a in 2060.9566ms. IsReplay: False. Input: (1612 bytes). Output: (null). HubName: sftpw2mkeaxozmpyvmfunction. AppName: sftpw2mkeaxozmpyvm-function. SlotName: Production. ExtensionVersion: 2.4.0. SequenceNumber: 393.
@sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt: Function 'sftptostoragequeuewatcherentity (Entity)' completed 'Watch' operation 427377a5-a6db-4ccc-b23a-b6ac21e094d9 in 1512.2289ms. IsReplay: False. Input: (1612 bytes). Output: (null). HubName: sftpw2mkeaxozmpyvmfunction. AppName: sftpw2mkeaxozmpyvm-function. SlotName: Production. ExtensionVersion: 2.4.0. SequenceNumber: 394.
@sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt: Function 'sftptostoragequeuewatcherentity (Entity)' completed 'Watch' operation f1951c7b-bfa6-4ec4-9841-8aefb8c1798a in 1407.3627ms. IsReplay: False. Input: (1612 bytes). Output: (null). HubName: sftpw2mkeaxozmpyvmfunction. AppName: sftpw2mkeaxozmpyvm-function. SlotName: Production. ExtensionVersion: 2.4.0. SequenceNumber: 395.
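These traces can also be pulled programmatically. A minimal sketch, assuming a workspace-based Application Insights resource and the Azure.Monitor.Query package; the workspace ID is a placeholder:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Placeholder workspace ID; the query filters the 'traces' table for the entity's 'Watch' operations.
Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
    "<log-analytics-workspace-id>",
    @"traces
      | where message has 'sftptostoragequeuewatcherentity' and message has 'Watch'
      | order by timestamp desc
      | take 50",
    new QueryTimeRange(TimeSpan.FromHours(24)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine(row["message"]);
}
```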
I have another, healthy instance running against a separate Storage account, and it shows nothing like this: its *History table is stable at just 8 records, and the entity works as expected and produces the expected logs.
Why is this happening, and how can I prevent it?
Expected behavior
The entity successfully receives its signals and processes them.
Actual behavior
The entity is not invoked, and the *History table keeps growing.
Relevant source code snippets
https://github.com/scale-tone/sftp-watcher
App Details
- Durable Functions extension version: 2.4.0
- Azure Functions runtime version: 3.0
- Programming language used: C#
Screenshots
If deployed to Azure
- Timeframe issue observed: still observed
- Function App name: sftpw2mkeaxozmpyvm-function
- Azure region: North Europe
- Orchestration instance ID(s): @sftptostoragequeuewatcherentity@bfkrmsiwbkubo.northeurope.azurecontainer.io-*.txt
- Azure storage account name: sftpw2mkeaxozmpyvmst
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 15 (2 by maintainers)
FYI, some progress: I can repro this on my local machine now. Next step is to figure out what exactly is happening and how to fix it.