geometry: Incorrect result of the function bg::intersection()

Compiled with MSVC 2017, Boost 1.82. The result of bg::intersection() is wrong: the poly12 obtained by the following code is incorrect.

#include <list>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>
#include <boost/assign/list_of.hpp>

namespace bg = boost::geometry;

void fun()
{
	typedef bg::model::d2::point_xy<double> DPoint;
	typedef bg::model::polygon<DPoint, false> DPolygon;	// counterclockwise
	typedef bg::model::multi_polygon<DPolygon> MPolygon;
	DPolygon poly1;
	DPolygon poly2;
	MPolygon poly12, poly21;
	std::list<DPoint> lstOf = boost::assign::list_of
	(DPoint(193.57019478631327, 132.59017250377539))
		(DPoint(207.72957099999999, 125.50837199999999))
		(DPoint(227.94565200000000, 129.20828800000001))
		(DPoint(253.72333800000001, 133.70304100000001))
		(DPoint(273.04274400000003, 142.37692500000000))
		(DPoint(237.02923500000000, 141.16036299999999))
		(DPoint(235.71143835000001, 140.78686934999999))
		(DPoint(235.71143834104575, 140.78686934746216))
		(DPoint(235.16649600000000, 140.57998100000000))
		(DPoint(234.62243732380651, 140.42779924762760))
		(DPoint(221.02097041896951, 136.62325543831733))
		(DPoint(216.04524400000000, 135.21302100000000))
		(DPoint(212.46779402999999, 134.76212731000001))
		(DPoint(212.46779340840936, 134.76212723165614))
		(DPoint(202.14415900000000, 133.40320600000001))
		(DPoint(200.49203299713540, 133.25272953098235))
		(DPoint(200.49203291000001, 133.25272952000000))
		(DPoint(198.35886600000001, 132.98387000000000))
		(DPoint(194.25756100000001, 132.65008200000000))
		(DPoint(193.57019478631327, 132.59017250377539));
	poly1.outer().assign(lstOf.begin(), lstOf.end());
	lstOf = boost::assign::list_of
	(DPoint(230.01056645925340, 165.85545400000001))
		(DPoint(230.01056645925340, 129.56833895494114))
		(DPoint(240.65447920262758, 131.42427585847156))
		(DPoint(240.65447920262758, 165.85545400000001))
		(DPoint(230.01056645925340, 165.85545400000001));
	poly2.outer().assign(lstOf.begin(), lstOf.end());
	bg::correct(poly1);
	bg::correct(poly2);
	bg::intersection(poly1, poly2, poly12);
	double Area1 = bg::area(poly1);
	double Area2 = bg::area(poly2);
	double Area12 = bg::area(poly12);	// incorrect result
}

About this issue

  • State: open
  • Created a year ago
  • Comments: 16 (1 by maintainers)

Most upvoted comments

I’m still not that familiar with the overlay code. If I understand it correctly, the segment_ratio is computed in cartesian_segments::unified(…) via Cramer’s rule and later used for sorting via segment_ratio::operator<(…) either based on the approximation or segment_ratio::less.
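For illustration only (this is not the Boost.Geometry code, just the underlying math): the segment_ratio corresponds to the Cramer's-rule solution for the intersection parameter of two segments, along these lines. The function name and types here are made up for the sketch.

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };

double cross(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }

// Solve p1 + t*r = q1 + u*s (with r = p2-p1, s = q2-q1) for t by
// Cramer's rule: t = cross(q1 - p1, s) / cross(r, s). The division is
// where the exact rational value gets rounded to a double. Parallel
// segments (denominator 0) are not handled in this sketch.
double segment_ratio_approx(Pt p1, Pt p2, Pt q1, Pt q2) {
    Pt r{p2.x - p1.x, p2.y - p1.y};
    Pt s{q2.x - q1.x, q2.y - q1.y};
    Pt qp{q1.x - p1.x, q1.y - p1.y};
    return cross(qp, s) / cross(r, s);
}
```

The rounding of this division, and of the determinants feeding it, is what makes the subsequent ratio comparisons unreliable near degenerate configurations.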

I think my approach to guarantee correct (and thereby consistent) behaviour for all cases would be as follows:

  • At the initial segment_ratio computation, we need to
    • Compute rigorous error bounds for the segment_ratio approximation and store them in some double err such that the true ratio (which may not be representable in double) is within [approximation - err, approximation + err].
    • Store some reference or point to the original segments.
  • In the comparison, we need to
    • Replace close_to by checking whether the err-radius intervals around the segment_ratio approximations overlap. They should overlap only in degenerate cases; if they do not, comparing the approximations is sufficient.
    • If they do overlap, the ratio inequality with the original points can be rearranged so that all fractions disappear, and then evaluated in a numerically robust manner. This doesn’t need to be done with full precision right away: the precision can be increased incrementally as needed, and there are probably some shortcuts worth checking first.
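The comparison scheme in the bullets above could look roughly like this. Everything here is a sketch under assumed names — BoundedRatio, ratio_less etc. are not existing Boost.Geometry types, and the exact fallback is only hinted at with one fma-based error-free product step; a production version would need full expansion arithmetic (Shewchuk) or rational numbers.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical type, not Boost.Geometry API: a segment_ratio approximation
// together with a rigorous error bound err, so that the true ratio lies in
// [approx - err, approx + err], plus the original numerator/denominator.
struct BoundedRatio {
    double approx;
    double err;
    double num, den;   // Cramer's-rule values; den > 0 assumed here
};

// The err-radius intervals overlap only in (near-)degenerate cases.
bool intervals_overlap(const BoundedRatio& a, const BoundedRatio& b) {
    return std::fabs(a.approx - b.approx) <= a.err + b.err;
}

// Fraction-free fallback: n1/d1 < n2/d2  <=>  n1*d2 < n2*d1 for positive
// denominators. std::fma recovers the rounding error of each product;
// making this exact in all cases would require expansion arithmetic or
// rationals, which is omitted in this sketch.
bool exact_less(const BoundedRatio& a, const BoundedRatio& b) {
    double p1 = a.num * b.den, e1 = std::fma(a.num, b.den, -p1);
    double p2 = b.num * a.den, e2 = std::fma(b.num, a.den, -p2);
    if (p1 != p2) return p1 < p2;
    return e1 < e2;
}

// Replacement for the close_to-based comparison: the approximation decides
// whenever the error intervals are disjoint; otherwise fall back.
bool ratio_less(const BoundedRatio& a, const BoundedRatio& b) {
    if (!intervals_overlap(a, b)) return a.approx < b.approx;
    return exact_less(a, b);
}
```

The fast path (disjoint intervals) is a single subtraction and comparison, which is why no penalty is expected for the general case.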

This would require some time to implement. I cannot do it right now.

I expect no performance penalty for the general case, possibly even a speedup if these code paths are critical and the number of branches can be reduced. In degenerate cases, there could be a performance penalty for inputs that require the evaluation of all digits, but this penalty would only apply in cases that might currently be computed incorrectly.

The behaviour would be correct with respect to the input coordinates, but I lack the overall view of the overlay implementation that would be required to rule out new inconsistencies between such exact segment_ratio comparisons and other parts of the algorithm that still deliberately use tolerances, e.g. the computation of clusters.