azure-functions-host: Container is disposed and should not be used: Container is disposed.
I just found the following exception in the log for an Azure Function:
Container is disposed and should not be used: Container is disposed. You may include Dispose stack-trace into the message via: container.With(rules => rules.WithCaptureContainerDisposeStackTrace())
It’s the first and only time I’ve seen this exception, so I don’t know how critical it is, but I thought you might want to know about it.
I’m using the following versions:
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="3.0.5" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.29" />
<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />
The function is triggered by events on an Azure Service Bus topic. DI is configured using a Startup.cs file:
[assembly: FunctionsStartup(typeof(Blabla.Startup))]

namespace Blabla
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var config = new ConfigurationBuilder()
                .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables()
                .Build();

            ...

            builder.Services.AddSingleton(config);
        }
    }
}
About this issue
- State: closed
- Created 5 years ago
- Reactions: 14
- Comments: 73 (8 by maintainers)
Commits related to this issue
- fix: ATLAS-735: Add retry logic to all activity functions - Current settings = start at 5 second wait, and double it each subsequent failure, 5 times - This should help with transient failures such a... — committed to Anthony-Nolan/Atlas by benbelow 4 years ago
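The retry policy that commit message describes (start at a 5 second wait, double it on each failure, up to 5 attempts) maps naturally onto Durable Functions' built-in `RetryOptions`. A hedged sketch, assuming a Durable activity call; the activity name `"MyActivity"` and the `input` variable are placeholders, not names from the commit:

```csharp
// Sketch only: retry settings matching the commit message above.
var retry = new RetryOptions(
    firstRetryInterval: TimeSpan.FromSeconds(5), // start at a 5 second wait
    maxNumberOfAttempts: 5)                      // try up to 5 times
{
    BackoffCoefficient = 2.0                     // double the wait after each failure
};

// context is the IDurableOrchestrationContext of the calling orchestrator.
await context.CallActivityWithRetryAsync("MyActivity", retry, input);
```

With these settings the waits between attempts come out to 5, 10, 20, and 40 seconds.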
Any update on this issue? We are also facing the same problem in the PROD environment.
@ChrisHimsworth I am running into the same issue. I have spoken with MS, and it has to do with the AZF either switching instances or shutting down in the middle of your execution. The scale controller determines whether this should happen or not. Only MS can see the logs of your processID switching HostInstanceIds, so you will need their help to confirm whether you’re hitting it for the same reason.
I haven’t tried it yet, but the quick recommendation/solution is to switch to the Premium Plan, because MS recently introduced a “drain mode” on it. Drain mode stops sending requests to the AZF and lets its current executions complete/time out. This would allow a truly graceful shutdown of the AZF. Ideally this would have been there all along, and on the Consumption plan too, but it is what it is. It’s hard to tell the health of your application because this error actually spawns other types of exceptions around IoC.
A little off topic, but here are a few things that led up to this that I had to fix, which might be of some value.
[FunctionName("MyFunctionApp")]
public async Task MyFunctionApp(
    [ServiceBusTrigger("MyTopic", sub, Connection = busConn)] Message message,
    ILogger log,
    CancellationToken token)
{
    if (!token.IsCancellationRequested)
    {
        // await do work
    }
}
[assembly: FunctionsStartup(typeof(MyNameSpace.Startup))]

namespace MyNameSpace
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // var config = builder.GetContext().Configuration; // <== get access to the config so you can use it for your IoC
            // set up IoC
        }

        public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
        {
            // set up configuration and Key Vault
            // https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#customizing-configuration-sources
        }
    }
}
Like I mentioned, the above list is what I did to whittle the exceptions down to just the DryIoc container-disposed exception. I am not trying to confuse anyone, but wanted to show it.
I’ll let you know if I find anything else
It would be nice to get confirmation about this issue from someone on the Functions team.
Just an update – I’ve started to investigate this more deeply and am working on a repro to see what we can do here to either prevent this or, at least, improve the exception so we have a better understanding of the cause.
Ultimately it’s because the host is already disposed (it’s in the process of shutting down) and something is attempting to create a new scope. Sometimes this is a race condition in a trigger – sometimes it’s in function code – but just judging from the current callstacks we can’t quite determine the culprits.
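To make that failure mode concrete, here is a minimal sketch (not code from this thread) using Microsoft.Extensions.DependencyInjection; DryIoc fails in the same way, just with the “Container is disposed” message instead of an `ObjectDisposedException` text:

```csharp
// Sketch: once the root container is disposed during host shutdown,
// any late attempt to open a scope for an in-flight invocation throws.
using System;
using Microsoft.Extensions.DependencyInjection;

class Program
{
    static void Main()
    {
        var provider = new ServiceCollection()
            .AddScoped<object>()
            .BuildServiceProvider();

        provider.Dispose(); // host shutting down

        try
        {
            // a trigger or function invocation arriving too late
            using var scope = provider.CreateScope();
        }
        catch (ObjectDisposedException ex)
        {
            // analogous to DryIoc's "Container is disposed and should not be used"
            Console.WriteLine(ex.Message);
        }
    }
}
```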
Is there any update on this? We have tried upgrading to version 4.2.2 of Microsoft.Azure.WebJobs.Extensions.ServiceBus, but we still see the issue. Every time it happens, two exceptions are logged, both containing the same error message:
1st exception:
2nd exception:
I too am getting this issue with ServiceBusTrigger.
I only get this when using Microsoft.Azure.WebJobs.Extensions.ServiceBus 4.2.1 (or 4.2.0). The issue is not present when running with 4.1.0, 4.1.1 or 4.1.2. I note that 4.2.0 introduced “Support for message draining before the Functions host shuts down”.
My package references are:
These errors always coincide with JobHost restarts. Here are some Kudu function host log extracts that hopefully provide more context for the exceptions:
I don’t profess to understand all this, but it seems to me that the DI container is being disposed before all messages have drained?
If it’s helpful here are the reported function startup options:
Would be grateful for any update on this.
@brettsam - Is this being looked into? A lot of people are experiencing the issue outside of the original Azure Durable Functions sphere and the example raised (see above), and the issue is causing data to be dropped.
@brettsam has there been any more movement on this issue? We are seeing this exception a few times a day, resulting in dead-letters on our queues.
Just as info, I experienced this with the Azure Service Bus bindings, not Event Hubs 👍
Still seeing these today, using .NET 5 isolated.
It seems as if the Functions team have abandoned GitHub issues recently; there are a few issues that have been open for a long time without any input or progress updates. It’s very frustrating from the community’s perspective, as we don’t know if you’re working on something, if it’s blocked by something else, if you’ve forgotten about it, or anything 😕
Love the platform and the direction it’s going; I just think you’d be better placed if you kept the community up to date with what’s going on 😃
Our workaround was to make all our functions idempotent and increase the maxDeliveryCount on all our queues to allow multiple retries. This caters for the DryIoc exception that occurs before the trigger has a chance to execute.
As a side note, we also added a CustomMessagingProvider to handle in-trigger exceptions. See here for an example: https://github.com/Azure/azure-functions-servicebus-extension/blob/dev/test/Microsoft.Azure.WebJobs.Extensions.ServiceBus.Tests/ServiceBusEndToEndTests.cs#L933
We also ended up adding exponential retries with something similar to this implementation: https://github.com/Azure/azure-functions-host/issues/2192#issuecomment-465891229
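The idempotency guard described in that workaround could look something like the following sketch. `IProcessedMessageStore`, `_store`, and `DoWorkAsync` are hypothetical app-specific names (e.g. a service backed by a DB table keyed on MessageId), not code from this thread:

```csharp
// Hypothetical sketch of an idempotency guard for a Service Bus trigger.
// maxDeliveryCount is raised on the queue itself so that a delivery lost
// to the container-disposed race is simply redelivered and retried.
[FunctionName("MyHandler")]
public async Task Run(
    [ServiceBusTrigger("my-queue", Connection = "ServiceBusConnection")] Message message)
{
    // _store is an assumed IProcessedMessageStore registered in Startup.
    if (await _store.HasProcessedAsync(message.MessageId))
        return; // duplicate delivery after a retry: safe to skip

    await DoWorkAsync(message); // the real work (placeholder)
    await _store.MarkProcessedAsync(message.MessageId);
}
```

Because the guard keys on `MessageId`, redeliveries caused by the host tearing down mid-execution become harmless no-ops instead of double-processing.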
Actually, the “job queue” here is a DB table holding jobs to be done. The message from the service bus just gives me a GUID that refers to the item in the DB table.
I had this issue, but it’s gone now. I think the issue you are facing is in fact not an error but a misunderstanding…
The constructor of the class
this is what the body of my message handler looks like.
After doing it this way, the exception disappeared.
Having trouble with this issue as well. Is there any progress being made?
@gorillapower same here. I never saw this issue until we recently upgraded to the latest Service Bus extension package, so it sounds like a regression. I have an open ticket with MS, but it’s not going anywhere at the moment.
@ChrisHimsworth what version of the trigger are you using? The extension has recently been updated to address issues with termination where it would attempt to use services or perform function invocations after a shutdown request, which would trigger service resolution outside of their lifetime scope.
Has there been any traction on this issue? I’m facing the same problem and was wondering if anyone has figured out the root cause.
A quick comment: happy to hear that you resolved the issue by removing the TelemetryClient. I don’t have that installed, yet the error is definitely still there for me. It could be another feature causing a similar problem, of course.
I’ve opened an issue and created a PR to fix this temporarily, until the EventHub team decides what to do.
The PR basically doesn’t progress the checkpointer (thus retrying to process the failed event) when internal exceptions such as this occur. It works fine and we’re using it in production.
Issue: https://github.com/Azure/azure-functions-eventhubs-extension/issues/66
PR: https://github.com/xzuttz/azure-functions-eventhubs-extension/pull/1
MyGet package: https://www.myget.org/feed/xzuttz/package/nuget/Microsoft.Azure.WebJobs.Extensions.EventHubs
I have the same problem while using an Azure Functions Service Bus trigger. It’s not happening constantly; it happens once or twice a day and leads to dead-lettering my messages.
It looks like 2.2.1 has some of the fixes surrounding this. https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.2.1 – specifically this item:
Can you move to that and see if the problem persists?