opencv: Very large image not supported (area > 2^32)

  • OpenCV => 3.3.1
  • Operating system => Windows 10 Pro (64bit)

Related forum link: http://answers.opencv.org/question/178840/very-very-large-image-causes-issues-with-function-int-type-related/

Detailed description Large images with an area greater than 2^32 pixels cause crashes or incorrect results in some functions. A simple example is cv::threshold with the Triangle method flag: the operation size.width *= size.height; (in imgproc/src/thresh.cpp, line 1099) overflows the "small" 32-bit int for such images.

Moreover, images with an area greater than 2^30 pixels cannot be opened at all (see imgcodecs/src/loadsave.cpp, line 65). As a workaround, I changed that line to #define CV_IO_MAX_IMAGE_PIXELS ((uint64)1 << 40), but this can't be a long-term fix.

Suggestions Using wider types (such as int64 or size_t) for size-related (and possibly index-related) code might be a good idea, perhaps enabled through a preprocessor directive.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 15 (7 by maintainers)

Most upvoted comments

It will be impossible to fix across the entire OpenCV library.

OpenCV can be seen as having several layers of “business value”.

Among them, one layer is the definition of cv::Mat and cv::Mat_<T> (and the related cv::Matx<T, M, N>). This specifies a common multi-dimensional array type for in-memory objects and is used for all data exchange between OpenCV API functions.

Another layer is the vast collection of image processing functions, all of which operate on cv::Mat or cv::Mat_<T>.

The assumption that memory-address arithmetic (sizes, lengths, offset differences) never exceeds 32 bits affects all layers. So even if you fix the issue in cv::Mat_<T> and cv::Mat, all of the image processing functions will still fail; you would have to fix them one by one.

Thus, if it is a job necessity, the first priority is to identify the minimum subset of the OpenCV API that needs to be fixed, and then either reimplement those functionalities by taking the code from OpenCV and modifying it, or reimplement them from scratch.

A typical approach is to use tiling. This involves defining your own large-matrix data type, say your_cv::BigMat_<T>, which mirrors cv::Mat_<T> except that large matrix dimensions and pointer offsets are supported. Then define functions that copy and paste sub-rectangles between your BigMat and an OpenCV Mat. These copy and paste operations only ever operate on sub-rectangles smaller than the OpenCV size limit.

Years ago I implemented a tiling approach to work around a limitation in the OpenCV function cv::remap, which is used for image rotation. That API function contains SSE2 instructions with a limit of 32768 bytes per row of pixels. The approach involves copying and pasting sub-rectangles while staying within the OpenCV limit. The code is at https://gist.github.com/kinchungwong/141bfa2d996cb5ae8f42

OpenCV has not yet adopted a library-wide framework for very large or tiled matrices. Usually, it is up to the users of OpenCV to implement such a framework.

done! see #11505

@FlorentTomi I see; that's why newer versions have assertion statements to avoid the unintentional crash. The data types definitely need to be updated to handle large images. Images considered "large" today won't be in five years, as technology advances and users, including scientists, push past these limits.