runtime: Math.Sin(double) returning incorrect result on .NET Standard 2.0
I have put together a test case as proof. It passes on .NET Framework 4.5.1 and .NET Standard 1.5 (with a .NET Core 1.0 client), but fails on .NET Standard 2.0 (with a .NET Core 2.0 client). The test requires NUnit (I am using 3.7.1), but I have ruled out NUnit as the cause:
```csharp
public static readonly double DEGREES_TO_RADIANS = Math.PI / 180;
public static readonly double RADIANS_TO_DEGREES = 1 / DEGREES_TO_RADIANS;

[Test]
public void CalcDistanceIncorrectOnNetStandard2_0()
{
    double lat2 = 1.5707963267948966;
    double dist1 = DistHaversineRAD(0, 0, lat2, 0);
    double dist2 = ToDegrees(dist1);
    Assert.AreEqual(90, dist2, 0);
}

public static double DistHaversineRAD(double lat1, double lon1, double lat2, double lon2)
{
    // Check for same position
    if (lat1 == lat2 && lon1 == lon2)
        return 0.0;

    double hsinX = Math.Sin((lon1 - lon2) * 0.5);
    double hsinY = Math.Sin((lat1 - lat2) * 0.5);
    double h = hsinY * hsinY +
               (Math.Cos(lat1) * Math.Cos(lat2) * hsinX * hsinX);
    return 2 * Math.Atan2(Math.Sqrt(h), Math.Sqrt(1 - h));
}

public static double ToDegrees(double radians)
{
    return radians * RADIANS_TO_DEGREES;
}
```
The result of this line

```csharp
double hsinY = Math.Sin((lat1 - lat2) * 0.5);
```

is `-0.70710678118654746` on .NET Framework 4.5.1 and .NET Standard 1.5, but `-0.70710678118654757` on .NET Standard 2.0. I have confirmed that the result of `(lat1 - lat2) * 0.5` is `-0.78539816339744828` in both cases (the value being passed into `Math.Sin(double)`).
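For what it's worth, the two results are adjacent representable doubles, i.e. they differ by exactly one unit in the last place (ulp). A minimal sketch (not part of the original test) to confirm that:

```csharp
using System;

class UlpCheck
{
    static void Main()
    {
        double netFx   = -0.70710678118654746; // .NET Framework 4.5.1 / .NET Core 1.0
        double netCore = -0.70710678118654757; // .NET Core 2.0

        long bitsA = BitConverter.DoubleToInt64Bits(netFx);
        long bitsB = BitConverter.DoubleToInt64Bits(netCore);

        // For two finite doubles of the same sign, adjacent values have
        // adjacent bit patterns, so a difference of 1 means one ulp apart.
        Console.WriteLine(Math.Abs(bitsA - bitsB)); // prints 1
    }
}
```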
Note also that the program is being ported from Java; a similar test passes there with the same input and expected output.
Environment
```
.NET Command Line Tools (2.0.0)

Product Information:
 Version:            2.0.0
 Commit SHA-1 hash:  cdcd1928c9

Runtime Environment:
 OS Name:     Windows
 OS Version:  10.0.15063
 OS Platform: Windows
 RID:         win10-x64
 Base Path:   C:\Program Files\dotnet\sdk\2.0.0\

Microsoft .NET Core Shared Framework Host

 Version : 2.0.0
 Build   : e8b8861ac7faf042c87a5c2f9f2d04c98b69f28d
```
Microsoft has historically sided with providing a high degree of backward compatibility over "new and improved". OK, so maybe this doesn't have to be the default calculation, but it should at least be available, so it is possible to port applications from Java and to support multi-targeted applications across .NET Framework and .NET Core without getting different results. Can we put in some kind of configuration switch (`Math.BackwardCompatibleMode = true` or `System.Compatibility.Math.Sin(double value)`) so we don't have to have ugly branching code just to get the same result as .NET Core 1.0, .NET Framework, and Java? If not, can you at least point me to the implementation you had in .NET Core 1.0 so I can copy it? I am not so much interested in speed and "accuracy" as I am concerned that my result is exactly 90 degrees so the test passes.
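To illustrate the kind of branching that is otherwise needed in a multi-targeted test project (a sketch only: `NETCOREAPP2_0` is the standard SDK-defined compilation symbol, the `1e-12` delta is an illustrative choice, and the test relies on the `DistHaversineRAD`/`ToDegrees` methods from the snippet above):

```csharp
// Sketch of the per-target branching required when runtimes disagree by one ulp.
[Test]
public void CalcDistance()
{
    double dist2 = ToDegrees(DistHaversineRAD(0, 0, 1.5707963267948966, 0));
#if NETCOREAPP2_0
    // .NET Core 2.0's Math.Sin differs in the last place, so an exact
    // comparison fails; a small delta is needed on this target.
    Assert.AreEqual(90, dist2, 1e-12);
#else
    Assert.AreEqual(90, dist2, 0);
#endif
}
```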
My guess would be that the author only added a delta where they were forced to at the time. If that's the case, they would have added a delta in every assert had they been targeting .NET Standard 2.0.
Actually, rather than spending time recycling the floating-point documents, which last time I looked were completely adequate to explain how it works, it would make more sense to write a short blog post explaining why the implementation has changed since the last version. This issue is not about calling the math into question (which I concur is well within tolerance), but about a change in behavior, which in the Microsoft world happens less than once in a blue moon. In fact, I did a Google search before opening this, and had there been such a post explaining why `Math.Sin()`'s result is slightly different in .NET Core 2.0, that would probably have been enough to sway me from opening this issue in the first place. There are likely to be others on the receiving end of software with poorly written tests who will jump to the wrong conclusion like I did. After all, a test that has been passing for years suddenly failing sounds like cause for concern. And getting from "the author of the test had to have intended that" to "the author couldn't possibly have intended that" is not always obvious.
IMO it would be helpful if the .NET docs introducing floating-point math contained a short theoretical introduction to numerical analysis covering (i) the representation of reals in computers, (ii) the nature and magnitude of representation errors, (iii) calculation with floats, (iv) how errors accumulate during calculations with floats, (v) the stability of numerical algorithms, and (vi) strategies for mathematically sound testing of numerical calculations with floats (a sketch of the last item follows).
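As one sketch of (vi), tests commonly compare with a combined absolute/relative tolerance instead of exact equality; the tolerance values below are illustrative assumptions, not recommendations:

```csharp
using System;

// Minimal sketch of a tolerance-based comparison for testing float results.
static class FloatTesting
{
    public static bool NearlyEqual(double a, double b,
                                   double absTol = 1e-12, double relTol = 1e-9)
    {
        double diff = Math.Abs(a - b);
        // The absolute tolerance handles values near zero; the relative
        // tolerance scales with the magnitude of the operands.
        return diff <= absTol ||
               diff <= relTol * Math.Max(Math.Abs(a), Math.Abs(b));
    }
}
```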
@NightOwl888 The following references should help in understanding the problems we are discussing:
Goldberg, David. “What every computer scientist should know about floating-point arithmetic.” ACM Computing Surveys (CSUR) 23.1 (1991): 5-48.
Higham, Nicholas J. Accuracy and stability of numerical algorithms. Society for Industrial and Applied Mathematics, 2002.
Checking your test code: representation errors come from every function that operates on floats. Every operation on a float with error e accumulates error according to the approximate formula k * e, where k is the number of operations and e is the error expressed in the number of digits or bits used in the number representation. Therefore a tolerance of 0 as the error level for the above tests is clearly incorrect and should be adjusted to account for the errors accumulated in the calls to `ctx.makeRectangle`, `SpatialArgs.calcDistanceFromErrPct`, `90 * DEP`, and `180 * DEP`. I am convinced that once you adjust the test tolerances according to the rules from the above-mentioned publications, the tests will pass and the software will retain its functionality. As far as I can read the code snippet, DEP is an error term of roughly 0.5 degree, or Pi/360, so the range of allowable calculation error is quite large in comparison to the representation used in the code.
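As a concrete sketch of the k * e rule (the operation count here is a rough assumption, not from the original test): with doubles, machine epsilon is about 2.22e-16, so a result near 90 computed in roughly ten floating-point operations can be expected to be off by on the order of 2e-13, which makes a defensible assert delta:

```csharp
using System;

// Sketch only: estimating an assert tolerance from the k * e rule.
// k is a rough manual count of the float ops in DistHaversineRAD + ToDegrees;
// the exact value matters little since this is an order-of-magnitude guide.
class ToleranceEstimate
{
    static void Main()
    {
        const int k = 10;               // approximate number of float operations
        double eps = Math.Pow(2, -52);  // double machine epsilon, ~2.22e-16
        double expected = 90.0;         // magnitude of the result, in degrees

        double delta = k * eps * Math.Abs(expected);
        Console.WriteLine(delta);       // ~2.0e-13, a safe delta for the assert
    }
}
```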