azure-functions-host: AI: Local storage access has resulted in an error (User: ) (CustomFolder: ).

I'm using Python in an Azure Function with Application Insights integration enabled to log trace messages. Not all logs were written to Application Insights, and I also found the following error message in Application Insights:

AI: Local storage access has resulted in an error (User: ) (CustomFolder: ). If you want Application Insights SDK to store telemetry locally on disk in case of transient network issues please give the process access to %LOCALAPPDATA% or %TEMP% folder. If application is running in non-windows platform, create StorageFolder yourself, and set ServerTelemetryChannel.StorageFolder to the custom folder name. After you gave access to the folder you need to restart the process. Currently monitoring will continue but if telemetry cannot be sent it will be dropped. Error message
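The failing check can be reproduced outside the SDK: the channel needs one folder it can create files in (on Windows, %LOCALAPPDATA% or %TEMP%; elsewhere, a folder you create yourself). A minimal Python sketch, assuming only the standard library, that probes the same candidate locations the error message names:

```python
import os
import tempfile

def is_writable(folder: str) -> bool:
    """Return True if we can create and delete a file in `folder`."""
    try:
        os.makedirs(folder, exist_ok=True)
        probe = os.path.join(folder, ".ai-write-probe")
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:
        return False

# Candidates mirroring the error message: LOCALAPPDATA/TEMP on Windows,
# the system temp directory elsewhere.
candidates = [
    os.environ.get("LOCALAPPDATA"),
    os.environ.get("TEMP"),
    tempfile.gettempdir(),
]
writable = [c for c in candidates if c and is_writable(c)]
```

If `writable` comes back empty for the process running the Functions host, telemetry cannot be buffered locally and will be dropped on transient network failures, exactly as the error describes.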

Investigative information

Please provide the following:

  • Timestamp: 2019-04-04T00:48:49.189 (UTC)
  • Function App version (1.0 or 2.0): 2.0
  • Function App name: michi-hactl-func
  • Function name(s) (as appropriate): StagingHandler or DevHandler
  • Invocation ID: N/A
  • Region: East Asia

Repro steps

Provide the steps required to reproduce the problem:

Sample code

# ...
import logging
# ...
logging.info("some logs...")
  1. Deploy to Function Apps
  2. Enable Application Insights integration
  3. Start the Function
  4. Check logs in Application Insights
  5. As we are using Python, there is no way to set StorageFolder in code.
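Step 5 is the crux: StorageFolder is a property of the .NET ServerTelemetryChannel inside the Functions host, so Python user code cannot set it. The most a Python function can do is make sure a writable folder exists; the folder name below is illustrative only (the host is not known to read this path from user code):

```python
import os
import stat
import tempfile

# Illustrative folder name; this only demonstrates creating a writable
# location under the system temp directory, it does not reconfigure the
# host's telemetry channel.
storage_folder = os.path.join(tempfile.gettempdir(), "appinsights-telemetry")
os.makedirs(storage_folder, exist_ok=True)

# Restrict access to the current user (read/write/execute for owner only).
os.chmod(storage_folder, stat.S_IRWXU)

assert os.access(storage_folder, os.W_OK)
```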

Expected behavior

All trace messages should appear in Application Insights.

Actual behavior

Some trace messages are missing, and an error indicates that the required folder was not created or the process lacks permission to access it.

Known workarounds

N/A

Related information

Provide any related information

  • Programming language used : Python
  • Links to source : Please PM if required
  • Bindings used : Blob Trigger, Service Bus Queue Trigger, Service Bus Queue binding

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 2
  • Comments: 28 (13 by maintainers)

Most upvoted comments

I am also seeing this in a C# Function running on a Linux Consumption Plan.

@brettsam Hey Brett – do you happen to know when this fix will be published to the set of Microsoft.Azure.WebJob packages on NuGet? We’re looking to pick up this fix in the 3.0.26 version of Microsoft.Azure.WebJobs.Logging.ApplicationInsights as soon as it’s released ☺️

This appears to have wilted without an owner. I’m going to assign to myself for next sprint.

For Linux dedicated, it should be easy enough once we discuss it with the Linux team. For Linux consumption I will have to look more; I will get back to you on this.

I’ll assign this to our next sprint so we can find an owner. We’ll need to take these steps: https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core#if-i-run-my-application-in-linux-are-all-features-supported.

@divyagandhii – is there a write-able temp storage location we can use on the Linux workers (for both consumption and dedicated)? If so, we can create a folder there and point to it.

@lmolkova – do you know what folder the ServerTelemetryChannel writes to by default in Windows? Ideally we’d mimic the naming for Linux.

So this happens in a C# function also. I am running a Blob trigger function in Docker with App Insights enabled. I get the error below. The function then terminates, at which point the container restarts.

AI: Local storage access has resulted in an error (User: ) (CustomFolder: ). If you want Application Insights SDK to store telemetry locally on disk in case of transient network issues please give the process access to %LOCALAPPDATA% or %TEMP% folder. If application is running in non-windows platform, create StorageFolder yourself, and set ServerTelemetryChannel.StorageFolder to the custom folder name. After you gave access to the folder you need to restart the process. Currently monitoring will continue but if telemetry cannot be sent it will be dropped. Error message: .

I am also seeing this issue. Any ideas how to fix this for python? There is mention of storageFolder here - https://docs.microsoft.com/en-us/azure/azure-monitor/app/data-retention-privacy - but NO mention of how to implement via Python…

Also, because of this error my timerTrigger function no longer runs on schedule. It was running fine for days; then this error appeared and my function no longer runs on the cron schedule.

@lmolkova I seem to be having the same issue with my Azure Python Function App. I have followed the link to the FAQ you provided and see that I need to configure a local folder for the telemetry channel, however I am unsure how I would convert the C# example into Python. Were you able to do this in Python?

What is that folder? Is this something you can create before using it, or should the platform provide it?

Hi - I’m getting the same error in a C# Azure Durable function. Not sure whether I can tag along in this issue or whether you want me to create a new one.

AI: Local storage access has resulted in an error (User: ) (CustomFolder: ).  ... etc

For context: I'm using Azure Durable Functions to work through 1 million items. I see approximately 70K function executions per minute, and then after 2 minutes things stop working; this is the last trace message I find in App Insights. Am I getting this error because the local buffer of App Insights messages is full, or because it stops responding under the number of telemetry messages generated? If so, I can probably tweak the host.json file and modify these:

  "logging": {
        "fileLoggingMode": "debugOnly",
        "logLevel": {
          "Function.MyFunction": "Information",
          "default": "None"
        },
        "applicationInsights": {
            "samplingSettings": {
              "isEnabled": true,
              "maxTelemetryItemsPerSecond" : 20,
              "evaluationInterval": "01:00:00",
              "initialSamplingPercentage": 100.0, 
              "samplingPercentageIncreaseTimeout" : "00:00:01",
              "samplingPercentageDecreaseTimeout" : "00:00:01",
              "minSamplingPercentage": 0.1,
              "maxSamplingPercentage": 100.0,
              "movingAverageRatio": 1.0,
              "excludedTypes" : "Dependency;Event",
              "includedTypes" : "PageView;Trace"
            },
            "enableLiveMetrics": true,
            "enableDependencyTracking": true,
            "enablePerformanceCountersCollection": true,            
            "httpAutoCollectionOptions": {
                "enableHttpTriggerExtendedInfoCollection": true,
                "enableW3CDistributedTracing": true,
                "enableResponseHeaderInjection": true
            },
            "snapshotConfiguration": {
                "agentEndpoint": null,
                "captureSnapshotMemoryWeight": 0.5,
                "failedRequestLimit": 3,
                "handleUntrackedExceptions": true,
                "isEnabled": true,
                "isEnabledInDeveloperMode": false,
                "isEnabledWhenProfiling": true,
                "isExceptionSnappointsEnabled": false,
                "isLowPrioritySnapshotUploader": true,
                "maximumCollectionPlanSize": 50,
                "maximumSnapshotsRequired": 3,
                "problemCounterResetInterval": "24:00:00",
                "provideAnonymousTelemetry": true,
                "reconnectInterval": "00:15:00",
                "shadowCopyFolder": null,
                "shareUploaderProcess": true,
                "snapshotInLowPriorityThread": true,
                "snapshotsPerDayLimit": 30,
                "snapshotsPerTenMinutesLimit": 1,
                "tempFolder": null,
                "thresholdForSnapshotting": 1,
                "uploaderProxy": null
            }
        }
    },

Right? I don’t want to do sampling because I need to see whether everything has executed - so perhaps raise the log level.
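If the concern is sampling rather than the storage error, the relevant switch is samplingSettings.isEnabled. A much smaller host.json fragment (a sketch; Function.MyFunction is a placeholder carried over from the config above) that disables adaptive sampling entirely so no traces are sampled out:

```json
  "logging": {
    "logLevel": {
      "Function.MyFunction": "Information",
      "default": "None"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false
      }
    }
  }
```

Note that disabling sampling at 70K executions per minute will increase telemetry volume and cost; it does not address the local storage error itself.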

I’m seeing the same issue for App service running on Linux.

@GustavoAmerico This is a workaround, but ideally you shouldn't be doing it yourself in Azure Functions, as telemetry configuration is done by the Azure Functions host for you.

Any progress here? I’m seeing this too.