SqlClient: SqlDataReader.Close blocks in SNIReadSyncOverAsync after early breaking from reading large dataset

SqlDataReader takes a disproportionately long time to Dispose if you break out of the enumeration early while reading a large data set.

I can observe an order-of-magnitude difference in Dispose time depending on the number in the TOP clause of the SELECT statement, regardless of the number of rows actually consumed before breaking.

Consider the following code:

    public static void Test()
    {
        // Time both enumerations; only the TOP count differs, the client reads ~100 rows in both cases
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        IEnumerable<int> en1 = EnumerateLargeData(1000);
        foreach (int i in en1)
        { }
        Console.WriteLine($"First case time: {stopwatch.Elapsed}");

        stopwatch.Restart();
        IEnumerable<int> en2 = EnumerateLargeData(2000000000);
        foreach (int i in en2)
        { }
        Console.WriteLine($"Second case time: {stopwatch.Elapsed}");
    }

    public static IEnumerable<int> EnumerateLargeData(int maxCount)
    {
        using (SqlConnection sqlConnection = new SqlConnection(connectionString))
        {
            sqlConnection.Open();
            using (SqlCommand getDataCommand = new SqlCommand("[dbo].[GetLargeData_SP]", sqlConnection))
            {
                getDataCommand.CommandTimeout = 0;
                getDataCommand.CommandType = CommandType.StoredProcedure;
                getDataCommand.Parameters.AddWithValue("@MaxCount", maxCount);

                using (var reader = getDataCommand.ExecuteReader())
                {
                    int recordsCount = 0;
                    while (reader.Read())
                    {
                        yield return 1;
                        if (recordsCount++ > 100)
                        {
                            // Early break: when the using block disposes the reader below, the delay happens there
                            break;
                        }
                    }
                }
            }
        }
    }

where the stored procedure selects data from a very large table:

    CREATE procedure [dbo].[GetLargeData_SP]
        @MaxCount INT
    AS
    BEGIN
        SELECT TOP (@MaxCount)
            [DataPoint]
        FROM
            [dbo].[LargeDataTable] WITH(NOLOCK)
    END

You will see a very large difference in Dispose time depending on the maxCount argument, especially when pulling data over a slower network. In my scenario I’m done fetching data in a few hundred milliseconds but then stuck in Dispose for 2 minutes, in exactly this stack:

[Stack trace screenshot: SqlDataReader.Close waiting in SNIReadSyncOverAsync under TryReadNetworkPacket]

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 17 (8 by maintainers)

Most upvoted comments


FWIW the same happens in Npgsql, and I’d expect it to be somewhat standard behavior across database drivers.

@divega we have #113 to track implementation of the new async APIs. However, while async close/dispose of the reader could help avoid blocking the thread while the results are consumed, it would still let the query run to completion, possibly using server and network resources, which isn’t ideal. The only way around that would be to trigger cancellation when the reader is disposed, but as @David-Engel wrote, it’s not right for the driver to do this since it doesn’t know what’s running and whether it’s safe to cancel.

My general response to this would be to not request a huge number of results if one isn’t sure they’re going to be needed. SQL paging, cursors and similar mechanisms can be used to fetch results in chunks, providing natural “exit” points. Otherwise the application can trigger cancellation itself as mentioned above.
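As an illustration of the paging approach, here is a minimal sketch (the table name, ordering column, element type and page size are assumptions, not taken from the repro) that fetches rows in fixed-size pages with OFFSET/FETCH; each page’s reader is fully consumed, so stopping between pages never leaves a large result set for Dispose to drain:

    // Minimal paging sketch (assumed schema: [dbo].[LargeDataTable] with an ordered [Id] and an int [DataPoint]).
    public static IEnumerable<int> EnumerateInPages(string connectionString, int pageSize = 10000)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            for (int offset = 0; ; offset += pageSize)
            {
                using (var command = new SqlCommand(
                    "SELECT [DataPoint] FROM [dbo].[LargeDataTable] " +
                    "ORDER BY [Id] OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY",
                    connection))
                {
                    command.Parameters.AddWithValue("@Offset", offset);
                    command.Parameters.AddWithValue("@PageSize", pageSize);

                    int rowsInPage = 0;
                    using (var reader = command.ExecuteReader())
                    {
                        // The page is always read to the end, so disposing the reader is cheap.
                        while (reader.Read())
                        {
                            rowsInPage++;
                            yield return reader.GetInt32(0);
                        }
                    }

                    if (rowsInPage < pageSize)
                        yield break; // last page reached - a natural exit point
                }
            }
        }
    }

The caller can stop iterating after any page boundary without paying the cost of draining a huge pending result set.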

@David-Engel new CloseAsync and DisposeAsync APIs are being added to the ADO.NET provider model in .NET Core 3.0. I think these could help avoid blocking in a future version of SqlClient, when you are able to target .NET Standard 2.1. Perhaps having a general issue in the backlog for implementing the new async surface would be a good idea (unless this already exists).

cc @roji
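For illustration, once a driver implements that surface, the consuming side could look roughly like this sketch (it assumes a runtime/driver where the ADO.NET CloseAsync/DisposeAsync from .NET Core 3.0 are available; the connection string is a placeholder). Note that this only avoids blocking the calling thread; the remaining results are still consumed unless the command is cancelled:

    // Sketch only: relies on the async dispose surface added to ADO.NET in .NET Core 3.0.
    public static async Task<int> ReadFirstRowsAsync(string connectionString)
    {
        await using var sqlConnection = new SqlConnection(connectionString);
        await sqlConnection.OpenAsync();

        await using var getDataCommand = new SqlCommand("[dbo].[GetLargeData_SP]", sqlConnection)
        {
            CommandType = CommandType.StoredProcedure,
            CommandTimeout = 0
        };
        getDataCommand.Parameters.AddWithValue("@MaxCount", 2000000000);

        int recordsCount = 0;
        await using (var reader = await getDataCommand.ExecuteReaderAsync())
        {
            while (await reader.ReadAsync())
            {
                if (recordsCount++ > 100)
                    break; // DisposeAsync still drains the remaining rows, just without blocking the thread
            }
        }
        return recordsCount;
    }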

This is actually by design. The correct behavior of a client talking to SQL Server over TDS is to either Cancel the command or consume the results. In the repro, the driver is cleaning up for the application by consuming the results, which is the safe option since it does not know what the executed statement is doing. Failing to consume the results is unsafe and can result in inconsistent application behavior. Calling Cancel would be a choice the application has to make, if it is safe to do so.

This old blog post talks about failing to consume results from SQL Server with example scenarios: https://blogs.msdn.microsoft.com/psssql/2008/07/23/how-it-works-attention-attention-or-should-i-say-cancel-the-query-and-be-sure-to-process-your-results/
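If the application knows it is safe to abandon the statement, it can make that choice explicitly by calling SqlCommand.Cancel before closing the reader; the driver then sends an attention to the server instead of consuming the remaining rows during Dispose. A minimal variant of the repro’s inner loop (same names as in the snippet above):

    using (var reader = getDataCommand.ExecuteReader())
    {
        int recordsCount = 0;
        while (reader.Read())
        {
            yield return 1;
            if (recordsCount++ > 100)
            {
                // Cancel the command so that disposing the reader does not have to
                // drain the rest of the result set. Only do this when the application
                // knows the executing statement is safe to cancel.
                getDataCommand.Cancel();
                break;
            }
        }
    }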

@jakrivan @WillFM

@AfsanehR the delay happens in the Dispose code (so after breaking from the enumeration) - the stack trace at the end of the first post here shows the exact stack where Dispose consumes the majority of the time.

All that’s needed is a remote SQL Server, or traffic forwarded through some remote box to your devbox (e.g. via ‘netsh interface portproxy add v4tov4 listenport=1433 connectport=1433 connectaddress=<devbox_ip>’ on the remote server), and you’ll be able to repro.

Hi @AfsanehR

So I tried on .NET Core 2.1 with System.Data.SqlClient 4.5.0-rc against Microsoft SQL Server 2016 (SP1-CU8) (KB4077064) – 13.0.4474.0 (X64) on a remote server with a network round-trip latency of about 55 ms.

Here is the info output:

    >dotnet --info
    .NET Command Line Tools (2.1.200)

    Product Information:
     Version:            2.1.200
     Commit SHA-1 hash:  2edba8d7f1

    Runtime Environment:
     OS Name:     Windows
     OS Version:  6.3.9600
     OS Platform: Windows
     RID:         win81-x64
     Base Path:   C:\Program Files\dotnet\sdk\2.1.200\

    Microsoft .NET Core Shared Framework Host

      Version : 2.0.7
      Build   : 2d61d0b043915bc948ebf98836fefe9ba942be11

I can still see a significant difference in disposal time based on the TOP count:

    C:\Program Files\dotnet>dotnet run --project myApp
    First case time:  00:00:00.9159470
    Second case time: 00:00:07.9673436

It’s fully in line with my original measurements on .NET Framework 4.6.2 (available here).

I ran the code that I linked above (you can simply copy-paste it; nothing needs tweaking except the connection string).

Thanks Jan

EDIT: The network latency is likely one of the crucial factors for the repro. I tried with a local SQL Server and didn’t get a repro:

    C:\Program Files\dotnet>dotnet run --project myApp
    First case time:  00:00:00.8501625
    Second case time: 00:00:00.9905979

So it seems that with some nontrivial network latency, buffers (on the SQL Server side?) build up and the dispose code tries to traverse through them instead of simply throwing them away (guessing from the stack trace, especially functions like ‘TryReadNetworkPacket’). Please use a more distant SQL Server or simply forward your traffic via some proxy server.

Hi @AfsanehR

Sorry for the confusion on my side - the repro is from .NET Framework (4.6.2); I will also try on .NET Core 2.1 RC and will let you know. If it is reproducible only on 4.7.2, I’ll move this item to the dotnet/standard repo.

    >dotnet --info
    .NET Command Line Tools (1.0.3)

    Product Information:
     Version:            1.0.3
     Commit SHA-1 hash:  37224c9917

    Runtime Environment:
     OS Name:     Windows
     OS Version:  6.3.9600
     OS Platform: Windows
     RID:         win81-x64
     Base Path:   C:\Program Files\dotnet\sdk\1.0.3

Thanks Jan

Hi @AfsanehR,

I’ve posted the full repro sample here: https://jankrivanek456274688.wordpress.com/2018/05/14/beware-of-early-breaking-from-sqldatareader-reading-over-large-dataset/ (I tried to post it to my MSDN blog, but somehow it’s stuck in the ‘Pending’ state and I don’t have publish rights any more).

The number of originally requested rows and the latency between server and client are the main factors influencing this behavior (from the reader code published on GitHub, it seems it continues reading through all the remaining rows). So you may be able to repro with your code - just increase the TOP row count at least 100-fold.

Please let me know if you are still not able to repro.

Thanks Jan