runtime: TimeSpan.FromSeconds does not return expected value after recent change to precision

Following dotnet/coreclr#24279, which improves the precision of TimeSpan.FromMilliseconds, I'm running into compatibility issues with .NET Framework and, more importantly, odd inconsistencies in test code:

.NET Framework 4.7.2

Console.WriteLine(TimeSpan.FromSeconds(78043.43));
Console.WriteLine(TimeSpan.FromMilliseconds(78043430));
Console.WriteLine(TimeSpan.FromSeconds(78043.43) == TimeSpan.FromMilliseconds(78043430));

outputs:

21:40:43.4300000
21:40:43.4300000
True

.NET Core 3.0 preview 7:

Console.WriteLine(TimeSpan.FromSeconds(78043.43));
Console.WriteLine(TimeSpan.FromMilliseconds(78043430));
Console.WriteLine(TimeSpan.FromSeconds(78043.43) == TimeSpan.FromMilliseconds(78043430));

outputs:

21:40:43.4299999
21:40:43.4300000
False

I don't really mind that the behavior differs between netfx and .NET Core (loss of precision vs. not), but the main issue for me is that on 3.0-preview7, TimeSpan.FromSeconds(CONSTANT) and TimeSpan.FromMilliseconds(CONSTANT * 1000) do not return the same value (different Ticks).

This caused a lot of failing unit tests along the lines of Assert.That(..., Is.EqualTo(TimeSpan.FromSeconds(123.45))), which pass on netfx but fail on .NET Core because of this behavior.
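
As a stopgap in the tests themselves, comparing with an explicit tolerance avoids relying on tick-exact conversion. This is only a hedged workaround sketch (NUnit-style, matching the assertion above; the test name and values are made up), not a fix for the underlying issue:

using System;
using NUnit.Framework;

[TestFixture]
public class TimeSpanToleranceTests
{
    [Test]
    public void FromSeconds_agrees_with_FromMilliseconds_within_one_tick()
    {
        TimeSpan fromSeconds = TimeSpan.FromSeconds(78043.43);
        TimeSpan fromMilliseconds = TimeSpan.FromMilliseconds(78043430);

        // Tick-exact equality fails on 3.0-preview7; a one-tick tolerance on
        // the Ticks values keeps the test green on both netfx and .NET Core.
        Assert.That(fromSeconds.Ticks, Is.EqualTo(fromMilliseconds.Ticks).Within(1));
    }
}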

Looking at the implementation:

  • TimeSpan.FromSeconds(78043.43) calls TimeSpan.Interval(78043.43, 10000000), which computes double millis = 78043.43 * 10000000. The product is 780434299999.99988, which is then truncated to a long, producing 780434299999 ticks, i.e. off by one.
  • TimeSpan.FromMilliseconds(78043430) calls TimeSpan.Interval(78043430, 10000), so millis is 780434300000 exactly, and truncating it to a long does not change it.

Remark: maybe the variable should now be renamed to ‘ticks’ instead of ‘millis’ here?

Maybe the +0.5/-0.5 rounding should still be applied, but on the ticks before truncating?
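
To make the suggestion concrete, here is a rough sketch of the three variants: the old millisecond-level rounding, the current direct-to-ticks truncation, and the same truncation with the +0.5/-0.5 rounding applied to the ticks first. This is only an approximation, not the actual runtime source; the method names and signatures are made up.

using System;

static class IntervalSketch
{
    const long TicksPerMillisecond = 10000;

    // Approximation of the pre-3.0 behavior: round to whole milliseconds,
    // then scale to ticks (sub-millisecond precision is lost).
    static long OldStyle(double value, int millisPerUnit)
    {
        double millis = value * millisPerUnit + (value >= 0 ? 0.5 : -0.5);
        return (long)millis * TicksPerMillisecond;
    }

    // Approximation of the 3.0-preview7 behavior: scale straight to ticks
    // and truncate, turning 780434299999.99988 into 780434299999.
    static long CurrentStyle(double value, double ticksPerUnit)
    {
        return (long)(value * ticksPerUnit);
    }

    // The suggestion above: keep the tick-level precision, but round to the
    // nearest tick before truncating.
    static long Proposed(double value, double ticksPerUnit)
    {
        double ticks = value * ticksPerUnit + (value >= 0 ? 0.5 : -0.5);
        return (long)ticks;
    }

    static void Main()
    {
        Console.WriteLine(OldStyle(78043.43, 1000));      // 780434300000
        Console.WriteLine(CurrentStyle(78043.43, 1e7));   // 780434299999
        Console.WriteLine(Proposed(78043.43, 1e7));       // 780434300000
    }
}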


Most upvoted comments

This has to do with double holding the closest representable value to the exact result, and with the gap between representable values increasing as the magnitude increases.

There are some tricks that could be done to keep it accurate without incurring the rounding loss, but they may come at a perf cost.
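
A quick way to see that gap is to look at the spacing between adjacent doubles at the magnitude involved here. A small sketch (the exact printed digits may vary slightly by runtime, and Math.BitIncrement needs .NET Core 3.0 or later):

using System;

class UlpDemo
{
    static void Main()
    {
        // 78043.43 is not exactly representable as a double, and the rounded
        // product lands just below the mathematically intended 780434300000.
        double product = 78043.43 * 10000000.0;
        Console.WriteLine(product.ToString("R"));       // ~780434299999.99988
        Console.WriteLine((long)product);               // 780434299999 (truncation)
        Console.WriteLine((long)Math.Round(product));   // 780434300000

        // At this magnitude adjacent doubles are about 0.000122 apart, so a
        // result that should be an integer can easily sit one step below it.
        Console.WriteLine(Math.BitIncrement(product) - product);
    }
}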

.NET Framework has “many” bugs around floating-point parsing and formatting that result in lost precision and incorrect rounding. These bugs cannot be fixed due to the very high compat bar that .NET Framework has.

.NET Core, since 3.0, has fixed these bugs and documented the bug fix (breaking change). There was a blog post that went out at the time describing the change, the why, and some of the problematic scenarios that could be hit: https://devblogs.microsoft.com/dotnet/floating-point-parsing-and-formatting-improvements-in-net-core-3-0/

The simplest example is that .NET Framework does not correctly handle double.Parse(Math.PI.ToString()). The returned value does not equate to Math.PI, which results in “silent” truncation and loss of precision and is not compliant with the IEEE 754 standard.

.NET Core 3.0 and later ensure that the value round-trips by default, so there is no loss of precision in the displayed/formatted string. The actual computation for 1000000.0d * 1.005d hasn’t changed; only the value that gets displayed when you convert it to a string has.
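
For reference, a minimal repro of the parse/format round-trip difference; the result depends on which runtime it runs on:

using System;

class RoundtripDemo
{
    static void Main()
    {
        // .NET Framework's default double formatting drops digits, so parsing
        // the string back does not recover the original bits. .NET Core 3.0+
        // formats a shortest round-trippable string by default.
        string text = Math.PI.ToString();
        double parsed = double.Parse(text);

        Console.WriteLine(text);
        Console.WriteLine(parsed == Math.PI);  // False on netfx, True on .NET Core 3.0+
    }
}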

It was strictly better for some inputs and worse for others. We should not “fix” this by putting the still-broken behavior back in.

We should fix this properly and ensure that it works for both large and small values, which I gave a description of how to do in my last comment.

@tannergooding The old way was strictly better.

The rounding used in the old .NET runtime resulted in an average rounding error of 0.25 ticks when converting from double to ticks, with no systematic bias. The truncation used in the current version results in an average rounding error of 0.5 ticks, twice as high, and it also introduces a systematic bias towards zero.

Here’s a demo of that systematic bias caused by truncation.

using System;

static class Program
{
	static void Main( string[] args )
	{
		TimeSpan initial = TimeSpan.FromMinutes( 1.23456 );
		TimeSpan ts = initial;
		for( int i = 0; i < 1000000; i++ )
		{
			// Scale up and back down by the same constant; mathematically a no-op.
			ts = TimeSpan.FromSeconds( ts.TotalSeconds * Math.PI );
			ts = TimeSpan.FromSeconds( ts.TotalSeconds / Math.PI );
		}

		// With round-to-nearest the per-conversion errors cancel out on average;
		// with truncation every conversion rounds towards zero, so the error accumulates.
		TimeSpan diff = initial - ts;
		Console.WriteLine( "Difference: {0} ticks, {1} ms", diff.Ticks, diff.TotalMilliseconds );
	}
}

The old runtime prints 0 ticks, 0 ms; the .NET 6 runtime prints 564209 ticks, 56.4209 ms.

The correct answer is zero: the body of the loop first multiplies the timespan by a constant, then divides by the same constant.