azure-cosmos-dotnet-v3: Endless GET traces on ApplicationInsights

We are experiencing strange behavior across our environments even though they are configured identically. In Application Insights we found traces like the one shown in the attached screenshot: [screenshot of the repeated GET dependency traces]

The services create the CosmosClient as follows (DatabaseClientHelper is used to create a singleton of the Database client; a registration sketch follows the serializer code below):

public static class DatabaseClientHelper
{
    public static Database CreateDatabaseClient(string uri, string authKey, string databaseName)
    {
        var cosmosClient = new CosmosClientBuilder(uri, authKey)
            .WithConnectionModeDirect()
            .WithRequestTimeout(TimeSpan.FromSeconds(10))
            .WithThrottlingRetryOptions(
                maxRetryWaitTimeOnThrottledRequests:TimeSpan.FromSeconds(10),
                maxRetryAttemptsOnThrottledRequests: 5)
            .WithCustomSerializer(new P0NewtonsoftSerializer())
            .Build();
        return cosmosClient.GetDatabase(databaseName);
    }
}

public class P0NewtonsoftSerializer : CosmosSerializer
{
    private readonly JsonSerializer _serializer;

    public P0NewtonsoftSerializer()
    {
        _serializer = JsonSerializer.Create(
            new JsonSerializerSettings()
            {
                Converters = new List<JsonConverter>() { new StringEnumConverter(), new MultiAggregateRootConverter() }
            });
    }

    public override T FromStream<T>(Stream stream)
    {
        // The StreamReader takes ownership of the incoming stream and disposes it
        // (together with the reader) when this method returns.
        using StreamReader reader = new StreamReader(stream);
        using JsonTextReader jsonReader = new JsonTextReader(reader);
        return _serializer.Deserialize<T>(jsonReader);
    }

    public override Stream ToStream<T>(T input)
    {
        MemoryStream stream = new MemoryStream();
        // leaveOpen keeps the MemoryStream usable after the writer is disposed,
        // because the SDK reads (and disposes) the returned stream itself.
        using StreamWriter writer = new StreamWriter(stream, leaveOpen: true);
        using JsonTextWriter jsonWriter = new JsonTextWriter(writer);
        _serializer.Serialize(jsonWriter, input);
        // Flush the JSON writer (and its underlying StreamWriter) before rewinding,
        // so the serialized payload is readable from position 0.
        jsonWriter.Flush();
        stream.Seek(0, SeekOrigin.Begin);
        return stream;
    }
}
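
For context, here is a minimal sketch of how such a helper could be wired up so that a single Database instance (and therefore a single CosmosClient) is shared across the application. The IServiceCollection-based hosting model and the configuration keys are assumptions for illustration, not part of the original code:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class CosmosServiceCollectionExtensions
{
    // Registers the Database returned by DatabaseClientHelper as a singleton,
    // so the underlying CosmosClient (connections, caches) is created only once.
    public static IServiceCollection AddCosmosDatabase(this IServiceCollection services, IConfiguration configuration)
    {
        return services.AddSingleton(_ => DatabaseClientHelper.CreateDatabaseClient(
            configuration["CosmosDb:Uri"],          // hypothetical configuration keys
            configuration["CosmosDb:AuthKey"],
            configuration["CosmosDb:DatabaseName"]));
    }
}

Consumers then take the Database (or a container obtained from it) as a constructor dependency, which keeps the CosmosClient a singleton for the lifetime of the process, as the SDK recommends.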

We extracted from Application Insights (AI) how many traces of that GET we have in our environments. This is the Kusto query we used:

union isfuzzy=true dependencies
| where timestamp > ago(365d)
| where type in ("Azure DocumentDB")
| where name in ("GET")
| order by timestamp desc
| summarize count() by bin(timestamp, 1d)
| render timechart 

These graphs are the results from our three development environments: [three timecharts, one per development environment]

Our stage environment (we deploy every two weeks; the last deployment was on 02/12, and only today did we start seeing this many GET traces): [timechart for the stage environment]

This is from the production environment: [timechart for the production environment]

Could you help us understand the root cause of these traces, why they differ between environments, and how to configure our environments to prevent the endless GET traces?

About this issue

  • State: open
  • Created 3 years ago
  • Comments: 19 (9 by maintainers)

Most upvoted comments

@ealsur Thank you for the detailed answer. You answered my question. I was wondering if we could configure the log level for event traces. Thanks 😊

@YohanSciubukgian which logs are you referring to? The SDK produces Event Traces with different log levels, and users can configure their Event Listeners to listen for a particular level. AI tracing/logging HTTP GET operations might be a different thing: these are not logs but background requests that are expected to happen. If AI is logging them, I don't know whether it allows filtering them; my guess is that logging all HTTP requests is by-design behavior.
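
For readers who want to suppress these entries on the Application Insights side, dependency telemetry can be dropped with a custom ITelemetryProcessor. Below is a minimal sketch that assumes the dependencies show up with type "Azure DocumentDB" and name "GET", as in the Kusto query above; whether dropping them is desirable is a separate decision, since failed requests in that category can be worth keeping (the sketch therefore only drops non-failed ones):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CosmosMetadataGetFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public CosmosMetadataGetFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Drop Cosmos DB GET dependencies (the background metadata /
        // address-resolution calls) unless they failed; everything else
        // continues down the telemetry pipeline.
        if (item is DependencyTelemetry dependency
            && dependency.Type == "Azure DocumentDB"
            && dependency.Name == "GET"
            && dependency.Success != false)
        {
            return;
        }

        _next.Process(item);
    }
}

In ASP.NET Core the processor can be registered with services.AddApplicationInsightsTelemetryProcessor<CosmosMetadataGetFilter>().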