azure-sdk-for-net: [BUG] Lease already present after upgrading to Microsoft.Azure.WebJobs.Extensions.Storage 5.0.0-beta.3

Describe the bug

2021-06-02T12:43:49.889 [Warning] Error response [a89392d0-8a3c-44ba-b55e-8c56563e74ec] 409 There is already a lease present. (00.0s)
Server: Windows-Azure-Blob/1.0, Microsoft-HTTPAPI/2.0
x-ms-request-id: 5a92e3cb-801e-002d-1eac-57e3e5000000
x-ms-client-request-id: a89392d0-8a3c-44ba-b55e-8c56563e74ec
x-ms-version: 2020-04-08
x-ms-error-code: LeaseAlreadyPresent
Date: Wed, 02 Jun 2021 12:43:49 GMT
Content-Length: 221
Content-Type: application/xml

Expected behavior

No warning

Actual behavior (include Exception or Stack Trace)

To Reproduce

  1. Update to Microsoft.Azure.WebJobs.Extensions.Storage 5.0.0-beta.3

Environment:

  • Azure Functions v3
  • .NET Core 3.1
  • Microsoft.Azure.WebJobs.Extensions.Storage 5.0.0-beta.3

All project dependencies:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AzureFunctionsVersion>V3</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Azure.Identity" Version="1.4.0" />
    <PackageReference Include="Azure.Messaging.ServiceBus" Version="7.2.0-beta.3" />
    <PackageReference Include="Azure.Security.KeyVault.Secrets" Version="4.1.1" />
    <PackageReference Include="Azure.Storage.Blobs" Version="12.9.0-beta.4" />
    <PackageReference Include="Microsoft.Azure.Cosmos" Version="3.19.0" />
    <PackageReference Include="Microsoft.Azure.Devices" Version="1.34.0" />
    <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.1.0" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.DurableTask" Version="2.4.3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage.Blobs" Version="5.0.0-beta.3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="5.0.0-beta.3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions" Version="4.0.1" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Logging.ApplicationInsights" Version="3.0.27" />
    <PackageReference Include="Microsoft.Azure.WebJobs" Version="3.0.27" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection.Abstractions" Version="3.1.11" />
    <PackageReference Include="Microsoft.Extensions.Logging" Version="3.1.11" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.12" />
    <PackageReference Include="Microsoft.ApplicationInsights" Version="2.17.0" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 7
  • Comments: 21 (8 by maintainers)

Most upvoted comments

I’d like to raise awareness of why these issues should be taken quite seriously by the product group.

The impact is two-fold:

  • Noisy logging that drives up Application Insights costs through low-value trace messages such as “Lease already present” and “Skipping ownership lease acquiring for somefunctionapp-control-… ”. Yes, your customers are paying for these messages.
  • Some of these behaviors (e.g. the BlobTrigger scanning change between v3 and v4) put unnecessary load on the storage account. Thanks to effectively infinite cloud scale this is easy to miss until the storage account usage cost rises (which in turn also raises Azure Defender costs).

I’ve been hit by this twice now. The first time was with non-durable functions using blob triggers: the host went into a spiral trying to determine whether retriggering was necessary, causing both huge blob storage usage growth and trace logging. This happened while migrating from v3 to v4 and to .NET 6. As mentioned here, “Azure.Core”: “Error” helps reduce the trace logs, but to “fix” the storage usage we had to move away from blob triggers to Event Grid + Service Bus triggering instead (a sketch follows below). On one of our most used storage accounts the problem caused an additional 8 million storage transactions per day. Multiply that by roughly 10 environments with more than one storage account per environment, and it becomes a noticeable cost and mess.
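In case it helps someone evaluating the same workaround, here is a minimal sketch of reacting to blob-created events without a BlobTrigger. It is illustrative only: the function and class names are made up, it assumes the Microsoft.Azure.WebJobs.Extensions.EventGrid package is referenced and a BlobCreated event subscription points at the function, and it shows the simpler direct Event Grid trigger rather than the Event Grid + Service Bus routing described above.

using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobCreatedHandler
{
    // Hypothetical example: invoked by an Event Grid subscription on
    // Microsoft.Storage.BlobCreated events, so the Functions host does not
    // need to scan the container the way a BlobTrigger does.
    [FunctionName("OnBlobCreated")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        // The event subject carries the blob path, e.g.
        // /blobServices/default/containers/<container>/blobs/<name>
        log.LogInformation("Blob created: {Subject}", eventGridEvent.Subject);
    }
}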

The second time is with a durable function and the “Skipping ownership lease acquiring for somefunctionapp-control-” message. It’s not merely an info message; storage is being hit here as well (at least mine is), at about 1.2 million transactions/day on the most used storage account linked to the function app. I’ve yet to understand what is causing this, since this particular durable function is not in active use, but it started after the .NET 6 / v4 migration.

In case it helps,

I have one Consumption Function App with 3 functions: (simplified)

public Task<ActionResult> Post(
      [HttpTrigger(AuthorizationLevel.Anonymous, nameof(HttpMethods.Post))] HttpRequest httpRequest,
      [Blob("%DocumentsUploadContainer%/{rand-guid}", FileAccess.ReadWrite)] BlobClient blobClient,
      [Queue("%DocumentsUploadQueue%")] IAsyncCollector<DocumentMetadata> documentUploadMessages,
      CancellationToken cancellationToken) {...}
public Task ProcessScheduledDocumentUpload(
      [QueueTrigger("%DocumentsUploadQueue%")] DocumentMetadata metadata, string id, string popReceipt, long dequeueCount,
      [Blob("%DocumentsUploadContainer%/{blobName}", FileAccess.ReadWrite)] BlobClient blobClient,
      [Queue("%DocumentsUploadQueue%")] QueueClient queueClient) {...}
public Task CleanUpSessions(
      [TimerTrigger("%OnBaseSessionsMonitorSchedule%")] TimerInfo timer) {...}

This function app has a dedicated storage account, which all functions reference through the AzureWebJobsStorage app setting.
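For completeness, that wiring is just the standard AzureWebJobsStorage app setting; locally it looks roughly like the local.settings.json sketch below (the connection string value is a placeholder).

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<dedicated-storage-account-connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}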

This warning is logged ~70K times a day. That not only drives up cost and skews the adaptive sampling of my Application Insights component, but because the entries are warnings, they make it pretty hard to review the logs.

I’m hitting this as well, here in the first week of Jan

@kasobol-msft and @jaschrep-msft are you looking at this ? Many customers are facing this.

Just updated to .NET 6.0 with the latest (non-prerelease) libraries and the ~4 runtime (in-process C# class library), and this issue still occurs.

If you want to stem the Application Insights log spam, setting Azure.Core to Error in your host.json seems to stop it:

{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Azure.Core": "Error"
    }
  }
}

This unfortunately turns off all Azure.Core logs, most notably ResponseDelay, which warns about slow function times.

For in-process function apps at least, there’s a name deduplication that happens, so ResponseDelay is reported with the category Azure.Core.1 instead of Azure.Core. So you might be able to keep that one in your logs by adding "Azure.Core.1": "Warning" to your host.json logging configuration. I haven’t tried it, since I have my own warning log message when a function takes too long.
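I haven’t verified this combination myself, but based on the two comments above, the resulting host.json would presumably look something like this (Azure.Core silenced except for the deduplicated Azure.Core.1 category that carries ResponseDelay):

{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Azure.Core": "Error",
      "Azure.Core.1": "Warning"
    }
  }
}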

This is still happening in the 5.0 release.

This bug is a spam cannon for my logs