multiprecision: variable precision arithmetic between complex and rational yields incorrect precision

I’m using Boost 1.75, installed via Homebrew. I have a failing test in my code, because the precision of the result of mixed-type arithmetic (multiplication at least) between complex numbers and rational numbers is incorrect. Here is a minimal failing test:

#include <boost/test/unit_test.hpp>
#include <boost/multiprecision/gmp.hpp>  // gmp_rational backend
#include <boost/multiprecision/mpc.hpp>


BOOST_AUTO_TEST_SUITE(boost_multiprecision)
BOOST_AUTO_TEST_CASE(precision_complex_rational_mul)
{
	using mpq_rational = boost::multiprecision::number<boost::multiprecision::backends::gmp_rational, boost::multiprecision::et_on>;
	using mpc_complex = boost::multiprecision::number<boost::multiprecision::backends::mpc_complex_backend<0>, boost::multiprecision::et_on>;

	mpc_complex::default_precision(30);

	mpq_rational a(1,2);
	mpc_complex b(0,1);

	mpc_complex c = a*b;

	BOOST_CHECK_EQUAL(c.precision(),30);
}


BOOST_AUTO_TEST_SUITE_END() // boost_multiprecision tests

The precision of c should be 30, but it comes back as 646392383.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 17 (1 by maintainers)

Most upvoted comments

Well… I never said the constructor was a good solution: like you say, it’s not generic.

I’m contemplating, in addition to a ::default_precision() member, adding some member flags that control this behaviour. The alternative is a non-member cast operator, but that would be Boost.Multiprecision specific, whereas a flag would be set at the same time as you set the precision, before any generic code is called. As a stopgap, if you have control of the generic portion of the code, there’s nothing to stop you from writing a cast operator now: the default version would do nothing, and you would then overload it for variable-precision Multiprecision types.
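The stopgap above can be sketched as follows. `precision_cast` is a hypothetical name, and the variable-precision type is a stand-in struct so the pattern stays self-contained; in real code the overload would target e.g. boost::multiprecision::mpc_complex.

```cpp
// Hypothetical customisation point the generic code would call before
// mixed-type arithmetic.  The default version is a no-op pass-through.
template <class T>
T precision_cast(T v)
{
    return v;  // fixed-precision / non-multiprecision types: nothing to do
}

// Stand-in for a variable-precision number type (illustrative only).
struct variable_precision_number
{
    unsigned precision_;
    static unsigned default_precision() { return 30; }
    unsigned precision() const { return precision_; }
    void precision(unsigned p) { precision_ = p; }
};

// The overload one would add for variable-precision types: force the value
// to the current default precision before it enters generic code.
variable_precision_number precision_cast(variable_precision_number v)
{
    v.precision(variable_precision_number::default_precision());
    return v;
}
```

Generic code that calls `precision_cast` on its inputs then behaves identically for fixed- and variable-precision types, which is the point of the stopgap.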

Confirmed.

I have a partial fix, but further testing reveals a whole swathe of situations where the current behaviour breaks down.

To recap: assignment preserves the precision of the source, and this should all be working now for same-type operations.

However, things get screwy with integer and rational types. The question is basically this: if I assign an integer to a variable precision floating point (or complex) type, then what is the precision of the result? Options are:

Variable precision integers:

  1. Sufficient precision to hold all the bits set in the integer.
  2. As (1), but never less than current default precision.
  3. Current default precision.

Option (1) is a non-starter, as this fails catastrophically:

float_type one(1), ten(10);
float_type point_one = one / ten;  // one decimal digit precision!!

We currently do (2).
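Option (2) can be sketched as below. This is a simplified model, not the library's actual code: precision is measured here in bits for convenience, whereas the library works in decimal digits.

```cpp
#include <cstdint>

// Number of bits needed to represent an unsigned value (zero needs one bit).
unsigned bits_needed(std::uint64_t v)
{
    unsigned n = 1;
    while (v >>= 1)
        ++n;
    return n;
}

// Option (2): the precision adopted when assigning an integer to a
// variable-precision float is the number of bits the integer needs,
// but never less than the current default precision.
unsigned assignment_precision(std::uint64_t v, unsigned default_precision_bits)
{
    unsigned needed = bits_needed(v);
    return needed > default_precision_bits ? needed : default_precision_bits;
}
```

Clamping to the default is what rescues the `one / ten` example: the operands carry at least default precision, so the quotient does too.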

For fixed precision integers:

  1. Sufficient precision to hold all the bits set in the integer.
  2. The precision (digits10) of the integer.
  3. As (1), but never less than current default precision.
  4. As (2), but never less than current default precision.
  5. Current default precision.

Options (1) and (2) are non-starters as before; we’re currently somewhere around (4), modulo any bugs. For consistency, I’m considering (3).

Rationals: It’s not at all clear to me what we do here, except possibly to consider the larger of the precisions (however we measure them, as above) of the two components, plus the current default. There is probably a stronger case for sticking to the current default precision here, given that the result may well be irrational anyway.
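The "larger of the two components, but never less than the default" rule could be sketched like this. This is a hypothetical model only: real code would query the mpq numerator and denominator, and the library measures precision in decimal digits rather than bits.

```cpp
#include <algorithm>
#include <cstdint>

// Number of bits needed to represent an unsigned value (zero needs one bit).
unsigned bits_needed(std::uint64_t v)
{
    unsigned n = 1;
    while (v >>= 1)
        ++n;
    return n;
}

// Candidate rule for rationals: take the larger of the numerator's and
// denominator's bit counts, then clamp to the current default precision.
unsigned rational_assignment_precision(std::uint64_t num, std::uint64_t den,
                                       unsigned default_precision_bits)
{
    unsigned component = std::max(bits_needed(num), bits_needed(den));
    return std::max(component, default_precision_bits);
}
```

For the 1/2 from the original test this yields exactly the default precision, which is what the failing BOOST_CHECK_EQUAL expects.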