rust-decimal: I want arithmetic ops which return an error instead of losing precision (was: I want arithmetic ops which never lose precision)

I use rust-decimal for monetary calculations. rust-decimal sometimes loses precision, and I don’t like that. For example, 1.0000000000000000000000000001 * 1.0000000000000000000000000001 evaluates to 1.0000000000000000000000000002 under rust-decimal, which is mathematically incorrect. (I used version 1.23.1.) Even checked_mul gives the same answer instead of returning an error. I want a function (say, exact_mul, with similar names for addition and subtraction) which returns Ok (or Some) when the result can be represented exactly and Err (or None) when it cannot.

In my program (assuming my program is bug-free) such precision loss should never happen. But as we all know, we can never be sure there are no bugs! So I want this exact_mul. I would replace my checked_mul with exact_mul (plus unwrap), and if I ever lose precision, I will discover it, because I will see a panic.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 17 (8 by maintainers)

Most upvoted comments

This is actually the same issue as #511 which got raised the other day too.

Effectively, what we’re talking about here is underflow handling. By default, rust_decimal will attempt to round away the underflow if it can - this is typically a useful feature; for example, 1 / 3 is difficult to represent (without storing ratios), so we instead try to “round off” the underflow to “make it fit”.
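This rounding-to-fit behaviour can be illustrated without rust_decimal at all. A minimal sketch (not rust_decimal's actual internals), using an i128 mantissa plus a power-of-ten scale as a stand-in for the real representation:

```rust
// Hypothetical sketch: represent a decimal as an integer mantissa plus a
// power-of-ten scale, and check whether a division is exact at that scale
// or whether the underflow must be rounded off to "make it fit".
fn div_at_scale(num: i128, den: i128, scale: u32) -> (i128, bool) {
    // Scale the numerator up by 10^scale, then divide; the result is
    // exact at this scale only when there is no remainder.
    let scaled = num * 10i128.pow(scale);
    (scaled / den, scaled % den == 0)
}

fn main() {
    // 1 / 4 = 0.25 is exact at scale 2.
    assert_eq!(div_at_scale(1, 4, 2), (25, true));
    // 1 / 3 is inexact at scale 28 (or any finite scale): the remainder
    // gets rounded off, which is the "underflow" being discussed here.
    let (mantissa, exact) = div_at_scale(1, 3, 28);
    assert!(!exact);
    assert_eq!(mantissa, 3_333_333_333_333_333_333_333_333_333);
}
```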

Furthermore, multiplication is a bit different to other operations. While it’s true that we reserve 96 bits for the mantissa representation within a standard decimal, we effectively reserve 192 bits for the product. This is because multiplication can naturally increase the scale - e.g. 1.1 * 1.1 = 1.21. To convert it back to 96 bits we need to (if we can) scale back and/or round to make the underflow fit.
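The scale-doubling effect can be seen with plain scaled integers (a sketch, with i128 standing in for the 96-bit mantissa): multiplying two fixed-point numbers multiplies the mantissas and adds the scales, which is why the raw product needs roughly twice the bits.

```rust
// Sketch: a fixed-point number as (mantissa, scale). The raw product of
// two scale-s values has scale 2s, so a 96-bit mantissa can need up to
// 192 bits before it is scaled back down.
fn raw_mul(a: (i128, u32), b: (i128, u32)) -> (i128, u32) {
    (a.0 * b.0, a.1 + b.1) // mantissas multiply, scales add
}

fn main() {
    // 1.1 * 1.1: mantissa 11 at scale 1, squared -> mantissa 121 at
    // scale 2, i.e. 1.21 -- the product genuinely needs the larger scale.
    assert_eq!(raw_mul((11, 1), (11, 1)), (121, 2));
}
```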

The checked_ functions currently handle the overflow cases. This is by design, since underflow isn’t typically an error case. That being said, I can understand the desire to maintain underflow precision and to know when a result cannot be represented without rounding/scaling.

There are a couple of ways of going about this. The first is to modify the bitwise functions to make underflow handling optional. This could then be exposed either via a feature flag or an explicit function. My concern with this approach is that it is very limited in scope - it’s relatively easy to underflow, and sometimes in ways that make no meaningful difference (e.g. 1.00000000000000000000 * 2.00000000000000000000 technically underflows).
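For illustration, the first approach could look roughly like this sketch. Here exact_mul is the name proposed in this issue, not an existing rust_decimal function, and the (i128, u32) pair stands in for the real 96-bit mantissa plus scale: multiply, rescale back under the maximum scale by stripping trailing zeros only, and report failure if a significant digit would have to be rounded away.

```rust
const MAX_SCALE: u32 = 28; // rust_decimal's maximum scale

// Hypothetical exact_mul: None means the product has no exact
// representation within MAX_SCALE digits of fraction.
fn exact_mul(a: (i128, u32), b: (i128, u32)) -> Option<(i128, u32)> {
    let (mut mantissa, mut scale) = (a.0 * b.0, a.1 + b.1);
    while scale > MAX_SCALE {
        if mantissa % 10 != 0 {
            return None; // a significant digit would be lost: real underflow
        }
        mantissa /= 10; // trailing zero: rescaling is lossless
        scale -= 1;
    }
    Some((mantissa, scale))
}

fn main() {
    // 1.000000000000000 * 2.000000000000000 (scale 15 each): the raw
    // scale of 30 exceeds 28 only in trailing zeros, so it is exact.
    let one = (10i128.pow(15), 15);
    let two = (2 * 10i128.pow(15), 15);
    assert_eq!(exact_mul(one, two), Some((2 * 10i128.pow(28), 28)));
    // 1.000000000000001 squared needs scale 30 and ends in a non-zero
    // digit, so there is no exact 28-scale representation.
    let a = (10i128.pow(15) + 1, 15);
    assert_eq!(exact_mul(a, a), None);
}
```

Note how the "technically underflows but harmlessly" case from above falls out naturally: trailing zeros rescale losslessly, and only genuinely significant digits trigger the error.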

The second is to provide “delayed boxing” functionality. That is, keeping the number in its raw Buf24 state (or similar) until it is time to “evaluate” the Decimal. This would allow some operations that would otherwise underflow to be recovered (e.g. via a round). The reason I like this approach slightly more is that there are times when you want to maintain high precision until the very end (e.g. powd).
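A sketch of that delayed-evaluation idea (the names Wide and evaluate are illustrative, not rust_decimal's actual API): carry the wide raw mantissa through a chain of multiplications and rescale/round only once, when the final value is requested.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Wide {
    mantissa: i128, // stand-in for the 192-bit intermediate state
    scale: u32,
}

impl Wide {
    // Intermediate multiplications keep full precision: no rounding here.
    fn mul(self, other: Wide) -> Wide {
        Wide {
            mantissa: self.mantissa * other.mantissa,
            scale: self.scale + other.scale,
        }
    }

    // Round (half-up) back to `target` scale only at evaluation time, so
    // a chain like powd loses nothing in the intermediate steps.
    fn evaluate(self, target: u32) -> (i128, u32) {
        if self.scale <= target {
            return (self.mantissa, self.scale);
        }
        let divisor = 10i128.pow(self.scale - target);
        ((self.mantissa + divisor / 2) / divisor, target)
    }
}

fn main() {
    // 1.1 * 1.1 * 1.1 = 1.331 exactly in the wide form...
    let x = Wide { mantissa: 11, scale: 1 };
    let product = x.mul(x).mul(x);
    assert_eq!((product.mantissa, product.scale), (1331, 3));
    // ...and rounding to scale 2 happens only once, at the end: 1.33.
    assert_eq!(product.evaluate(2), (133, 2));
}
```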

Anyway, all this to say - there isn’t a “quick fix” to this right now - it’s currently working by design. It’s on my todo list to take a look at the fundamentals of rust_decimal in prep for v2 (i.e. alternative storage formats as well as some of the features we talked about) however that’s still a while away.

I will go on to say that bigdecimal.rs may be more appropriate for your current use case if performance isn’t a concern since that effectively uses a BigInt behind the scenes and allows for a much higher scale.

In this case this is the wrong library for you. rust-decimal is not an infinite-precision library. It is fixed-precision, only with a large number of significant digits (I believe up to 96 bits). However, you will run out of precision.

An arbitrary-precision numeric library is what you want - but you need to be careful. Some numbers cannot be represented in a finite string of decimal digits - even rational ones like 1/3, never mind irrationals like PI. And computers have finite memory. Therefore you physically CANNOT have zero loss of precision. If you think so, you’re deluding yourself.

Therefore, the conclusion is: if you’re going to lose precision anyway, what you really need is to restrict the amount of precision loss and make sure your loss is small enough to be tolerated.

Just to reemphasize: THERE IS NO “NEVER LOSE PRECISION”. It is impossible.

You can only get: Lose very very very little precision, so small that it doesn’t matter to anybody.

You mean https://docs.rs/bigdecimal ?

Yep, that’s the one I was referring to!

I simply wanted to know whether this feature will be added in a reasonable time or whether I should just write my own lib. So I will go write my own lib. You may close the bug.

Adding this feature is definitely on the roadmap, as it has come up a few times; when, however, is still open for discussion. I’d like to take a look at this (and surrounding issues) this month, however it all depends on how my work schedule pans out! I’ll keep this issue open in the meantime as it also helps me gauge demand.

If you did want to have a go at adding the feature instead of writing a new lib, then the branching logic for mul underflow is here:

https://github.com/paupino/rust-decimal/blob/master/src/ops/mul.rs#L132

The rescale function that does the actual rescaling/rounding is here:

https://github.com/paupino/rust-decimal/blob/master/src/ops/common.rs#L337

Either way: good luck and thanks for creating an issue!

Okay, I removed one zero. The result is the same, i.e. the following code runs to completion instead of panicking.

fn main() {
    use rust_decimal::Decimal;
    let a: Decimal = Decimal::from_str_exact("1.000000000000000000000000001").unwrap();
    // checked_mul returns Some with the rounded product, not None:
    let b = a.checked_mul(a);
    let c = b.unwrap();
    // the true square ends in ...0000000002...0001 but gets rounded:
    assert_eq!(c, Decimal::from_str_exact("1.000000000000000000000000002").unwrap());
}

rust-decimal is not an infinite-precision library

@schungx . I want a fixed-precision library, not an infinite-precision one. I don’t want to always retain full precision (edit: I don’t want arbitrarily large precision). I simply want the library to report an error when it would lose precision. (I just changed the bug title to reflect this.) Infinite-precision libs are too slow for me.

make sure your loss is small enough to be tolerated

I think I was clear enough in the bug description. My application has zero tolerance for precision loss. My application deals with money. Its only purpose is to verify that two given monetary amounts are exactly equal. But I can tolerate panics, assertion failures, etc. They are okay. If I encounter a panic, I will simply dig into the data, try to understand why it happened, and do something about it. Panics are better than silent precision loss I will never know about.