runtime: Decimal type not able to parse scientific notation
The problem is shown in the following code:
Double doubleVal = 0.0;
if (Double.TryParse("1E-05", out doubleVal))
{
    // This branch is reached: Double can parse scientific notation.
}

Decimal decVal = 0m;
if (Decimal.TryParse("1E-05", out decVal))
{
    // This branch is never reached: Decimal cannot parse scientific notation.
}
The solution should be straightforward (i.e., a working version already exists in a similar type). Additionally, note that I am somewhat familiar with this part of the code, since I am currently involved in another Decimal parsing issue (https://github.com/dotnet/coreclr/issues/2285).
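For context, the asymmetry comes from the default `NumberStyles` used by the simple overloads: `Double.TryParse(string, out double)` defaults to `NumberStyles.Float | NumberStyles.AllowThousands` (which includes `AllowExponent`), while `Decimal.TryParse(string, out decimal)` defaults to `NumberStyles.Number` (which does not). A minimal sketch that passes the styles explicitly makes this visible:

```csharp
using System;
using System.Globalization;

class StyleDefaults
{
    static void Main()
    {
        // Decimal.TryParse(string, out decimal) defaults to NumberStyles.Number,
        // which does NOT include NumberStyles.AllowExponent.
        bool decDefault = Decimal.TryParse("1E-05", out decimal d1);

        // Supplying NumberStyles.Float (which includes AllowExponent) succeeds.
        bool decFloat = Decimal.TryParse("1E-05", NumberStyles.Float,
                                         CultureInfo.InvariantCulture, out decimal d2);

        // Conversely, forcing NumberStyles.Number onto Double makes it fail too,
        // showing the difference lies in the default style, not the type.
        bool dblNumber = Double.TryParse("1E-05", NumberStyles.Number,
                                         CultureInfo.InvariantCulture, out double v);

        Console.WriteLine($"{decDefault} {decFloat} {dblNumber}"); // False True False
    }
}
```

So the "working version in a similar type" is really a different default style; the parsing machinery behind `Decimal` already understands exponents when asked.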
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Comments: 23 (16 by maintainers)
@varocarbas Please understand that this is not a one-way street where you can decide where the conversation and discussion goes. If you don’t like the way this discussion goes, please refrain from further responding. Even when you raised this issue it does not mean it is your issue alone.
In a project the size of the CLR, far more than technical expertise is required for proper management. I was reminded of a quote recently.
Which is where backwards-compatibility-breaking fixes fall.
Even in an OSS environment, you need to keep a business mind about it.
You can argue that, at a technical level, the correct way would be to have consistent behaviour between Decimal and Double, but if implementing that change breaks your "customers'" applications then it's not a change you can make. Even if you can argue at a technical level that those customers are idiots for relying on broken behaviour, breaking their apps isn't something that'll go down well unless you've got an incredibly strong justification. One ill-considered small "improvement" can cause millions of developers headaches, and potentially cost businesses and people a lot of money.
As an incredibly simplistic example, imagine trying to transfer 135 in your Internet banking, and you accidentally enter 1e5. The current behaviour would return false, with your change the user would be looking at a fairly sizeable overdraft request.
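The banking scenario above can be checked in a few lines (a sketch using the amounts from the example): today the scientific-notation input is simply rejected, whereas Double-style parsing silently turns it into one hundred thousand.

```csharp
using System;
using System.Globalization;

class OverdraftExample
{
    static void Main()
    {
        string input = "1e5"; // the user meant to type 135

        // Current behaviour: Decimal.TryParse rejects the input,
        // so the UI can ask the user to re-enter the amount.
        bool ok = Decimal.TryParse(input, out decimal amount);
        Console.WriteLine(ok); // False

        // Double-style parsing (what the proposed change would mimic)
        // accepts it as 100000.
        double asDouble = Double.Parse(input, NumberStyles.Float,
                                       CultureInfo.InvariantCulture);
        Console.WriteLine(asDouble); // 100000
    }
}
```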
@RobThree Sorry, but I will try to stick to my original intention of simply not replying to certain comments, because all this stopped making sense a long time ago. You keep talking in general terms and repeating abstract ideas without wanting to properly understand the specific situation (and, what is more important, without respecting my numerous requests of "please, discuss all you want; but don't involve me / try to convince me").
You keep saying "really little impact" and we keep trying to tell you that the impact is not as "little" as you think / pretend. Maybe the impact on the .NET Framework itself is "little" (probably even non-existent), but you need to be reminded that there are literally thousands and thousands of applications on millions and millions of computers, with millions and millions of users, relying on this framework. Even if only .01% of those applications (knowingly or unknowingly) rely on this behavior, that would mean a shitstorm of bug reports (or planes falling from the sky, or…) just because someone thought "hey, let's change this behaviour for no actual benefit at all, just because I think this parsing thingy should be more strict". That's not how this works; end of story.
What you do in your own applications is all up to you. Go nuts. It's your customer base. But (especially large, in userbase terms) libraries / frameworks don't. work. that. way. (at least, if you're a responsible developer).
@Havvy Thanks for your inputs. I do get what you mean, but you should understand that I cannot answer every single comment here (there are lots of them), especially since this wasn't my intention in the first place.
You have to bear in mind that this specific suggestion has a really small impact, since the behaviour is already supported (by Double, and by various overloads of Decimal); that's why I don't think the general don't-break-old-code arguments are fully applicable here. Anyway, and as said, please understand that I don't want to discuss certain issues.
@RobThree If you want to systematically rely on these ideas ("better not correct anything, because it might make even the craziest code crash"), you would never modify old functionality. Such an attitude would impose serious restrictions on the adaptability of the final product (which theoretically aims to be as adaptable as possible). It would represent an extremely static attitude whose main goal wouldn't be to evolve/adapt/improve, but simply to prove that the errors are actually not so bad. In fact, this has happened already in the recent past (i.e., VB6 and .NET): relying on not-too-good (or plainly wrong) approaches for as long as possible, focusing on maximising your monopolistic position (i.e., Windows), and then moving to a different (objectively better) format only when strictly required.

Do you want to continue like this? Who am I to correct you? You are extremely big and your product is certainly good. Although I might wonder about the exact essence of your "open source" attitude: the code is there (and I have it and I will certainly use it for my personal joy; thanks again, Microsoft, you have made me really happy 😃), but is the product really open source if no changes can be performed? Or, no need to go so far: I might simply decide that the main guidelines of this project do not meet my expectations and not want to participate in it.
Is there a real backwards-compatibility problem with what I am suggesting here? Saying that there would not be even a single problem would be lying; nobody can tell that for sure. On the other hand, you have to draw the line at some point, and not supporting bad coding practices (or user misbehaviours) seems like a good place to start. In the example you are proposing, a user inputs "1e5" in a textbox where he is expected to input a number, and the application crashes; would this user rightfully feel bad about it? In a generic piece of software which is not too adaptable and has tons of limitations? I don't think so. In a highly adaptable program perfectly understanding any user input? Users would certainly expect those applications to work fine anyway; but in that scenario, the given programmers would have taken care of this and many other issues already (i.e., you cannot build a highly adaptable piece of software by blindly trusting what the given programming language delivers, mainly when dealing with not-too-clear functionalities).
The proposed modification has a very small impact at all levels and represents the kind of logical, intuitive and consistent behaviour which the .NET languages are expected to deliver. Having two types like Double and Decimal with different overloads, where only some of them support a certain format (scientific notation), doesn't make any sense. Not within an intuitive and reliable enough framework.

In any case, and as explained in my other thread (the one you are linking above), I am certainly not interested in getting into this kind of discussion. You are free to discuss as much as you want about all these issues and to make your final decision on any of them. But please don't think that you can convince me of something about which my ideas are crystal clear (i.e., my most basic convictions regarding how this kind of problem should be faced); I am not trying to convince you either (just to understand whether you are willing to deliver what I am willing to accept or not). Think carefully about all the involved issues, discuss for as long as you wish (but try not to involve me in your discussions except when dealing with strictly technical issues), make whatever decision, and let me know about it.
What you’re not understanding is that your proposed changes are breaking changes. People, competent or not, are relying on some behaviours knowingly, or sometimes maybe even unknowingly.
As a simple (maybe a bit contrived, but stick with me) example: imagine someone programming a textbox that expects an input from 0 to 999. They set the MaxLength property to 3 and are done with it; letters and other characters will fail and be handled accordingly. Then they pass the user’s input through double.Parse, since they need a double a few lines down (so going through int.Parse and then casting to double is the long way around), and are quite confident the input is anything from 0 to 999. Now you come along and change this behaviour, and suddenly a user can input 1e9. The ramifications can be anything from a shoe size now being way off the charts, to having opened the gates of hell and introduced a security issue (similar to, for example, a buffer overflow).

Yes, the programmer using such an approach should have done a more rigorous and decent job, and yes, there is a lot wrong with this example. And yet, maybe that program has actually sold millions of copies, and now suddenly millions of people are affected just because someone decided to change the behavior. The (your) intentions are (or were) good; the side effects, however, are bad.
Such changes require, as proposed, shims: maybe a new overload of the (Try)Parse methods with an extra argument that lets you specify the desired behavior, or maybe even a whole new method (e.g. (Try)ParseScientific(...) or something similar).

Nobody is saying that the current behavior is absolutely perfect, and nobody is saying there is nothing wrong with it. They are, however, saying that such changes can’t be made on a whim or happen overnight. It needs careful consideration and deliberation.
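A sketch of that opt-in direction; TryParseScientific is the hypothetical name floated in the comment above, not a BCL API. It simply routes through the existing overload that accepts a NumberStyles argument, leaving the default (Try)Parse behaviour untouched:

```csharp
using System;
using System.Globalization;

// Hypothetical helper (illustrative only): opt-in scientific-notation
// parsing for decimal, without changing the default (Try)Parse behaviour.
static class DecimalScientific
{
    public static bool TryParseScientific(string s, out decimal result) =>
        Decimal.TryParse(s, NumberStyles.Float,
                         CultureInfo.InvariantCulture, out result);
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(DecimalScientific.TryParseScientific("1E-05", out decimal d)); // True
        Console.WriteLine(d); // 0.00001
    }
}
```

Because existing callers never see the new method, old code keeps its current (strict) behaviour; only code that explicitly opts in gets exponent support.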
Thanks for your contributions, but seriously this is not what I want.
I have started some threads which I consider completely clear. If I am wrong and this community doesn’t think like me, I would not implement anything (and, most likely, wouldn’t contribute further).
I like objectivity-driven communities where all their members have similar knowledge and expectations and where subjectivity is not favoured. The open .NET seems a perfect excuse for objective-correctness-focused discussions where everyone would win (Microsoft by getting a beyond-excellent product and the contributors by working on so worthy resources; seriously, after taking a quick look at CoreFX & CoreCLR I am speechless); this situation is certainly very appealing to me. Participating in random chats with random people (no offense) about random issues is not what I want.
Please, if you have solid enough expertise related to what is being discussed here (e.g., efficient algorithm building; deep .NET knowledge, mainly regarding this specific implementation; being a local decision-maker; etc.) and you want to share anything with me (a question, suggestion, request, etc.) from a completely technical and objective perspective, please feel free to contact me. Otherwise, I will not answer you. As said, no disrespect intended; I am just trying to avoid everyone wasting their time.
@varocarbas I think the point you are missing is the issues we raise against your ideas ARE objective by any rational definition of the word.
It’s a nice idea, but unfortunately we really don’t have the luxury of making breaking changes like this on a whim. We have a more nuanced policy than “just fix it” when there are billions of PCs with Windows installed which depend on the consistent behavior of low-level APIs, and on those not changing underneath them (yes, even in a way that “makes more sense” than before).
If we want to make changes like this, we have to put some sort of compatibility shim in place so that older consumers get the old behavior and new consumers get the new behavior. I think we usually reserve that sort of thing for very important bug fixes.
1.2 is a double literal in the context of a language like C#. The idea of a “double literal” makes no sense in the context of BCL’s parsing.
Why is this difference arbitrary? decimal and floating point types are very different in terms of functionality and use. If you go to a bank and write 1E5 on a check you’ll probably get some strange looks.
@akoeplinger But those persons deserve to get an error for having implemented such an illogical approach. LOL.
This is expected, see here: http://stackoverflow.com/questions/3879463/parse-a-number-from-exponential-notation
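As the linked Stack Overflow answer explains, the behaviour is by design, and the documented workaround is to pass NumberStyles.Float (or NumberStyles.Any) to the existing overload:

```csharp
using System;
using System.Globalization;

class Workaround
{
    static void Main()
    {
        // NumberStyles.Float includes AllowExponent, so this existing
        // overload accepts scientific notation without any runtime change.
        if (Decimal.TryParse("1E-05", NumberStyles.Float,
                             CultureInfo.InvariantCulture, out decimal value))
        {
            Console.WriteLine(value); // 0.00001
        }
    }
}
```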