xtensor-python: Changing dtype in example causes "Internal error"

First of all, it was really easy to get started with xtensor-python using your cookiecutter template. Thanks!

I tried modifying one of the examples in a seemingly trivial way: I replaced double with uint8_t. And now it crashes! Any idea what’s going on?

Here’s the condensed example, adapted from example2():

#include "pybind11/pybind11.h"
#include "xtensor/xarray.hpp"
#include <cstdint>

#define FORCE_IMPORT_ARRAY
#include "xtensor-python/pyarray.hpp"
#include "xtensor-python/pyvectorize.hpp"

namespace py = pybind11;
using std::uint8_t;

// I'm using uint8_t here instead of double
inline xt::pyarray<uint8_t> example2(xt::pyarray<uint8_t> &m)
{
    return m + 2;
}

PYBIND11_PLUGIN(example_module)
{
    using namespace pybind11::literals;
    xt::import_numpy();
    py::module m("example_module", "");
    m.def("example2", example2);
    return m.ptr();
}

I tried to run this test program:

import numpy as np
from example_module import example2

a = np.zeros((10,), np.uint8)
b = example2(a)

And it crashes with the following error:

Traceback (most recent call last):
  File "test2.py", line 5, in <module>
    b = example2(a)
RuntimeError: Precondition violation!
Internal error: trivial_assigner called with unequal types.
  /miniforge/envs/flyem-forge/include/xtensor/xassign.hpp(377)

I tried other dtypes, too:

Crashes: uint8_t, uint16_t, int8_t, int16_t
No crash: uint32_t, uint64_t, int32_t, int64_t, float, double

Setup details: I’m using OSX and clang with libc++, with the following conda packages:

numpy                     1.13.1                   py36_0
python                    3.6.2                         0
xtensor                   0.12.1                        0    conda-forge
xtensor-python            0.14.0                        0    conda-forge
pybind11                  2.1.1                    py36_0    conda-forge

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 20 (12 by maintainers)

Most upvoted comments

Hi @stuarteberg, thanks again for the report. I found the issue, and I am working on a fix. I hope I’ll be able to push a fix today.

Cheers,

Wolf