superpoint_transformer: Fail to install FRNN

Hi Damien, thanks for the brand new v2! I am trying to run install.sh to install all the dependencies, but there seems to be an error during the FRNN installation. The build first emits this warning:

/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/utils/cpp_extension.py:425: UserWarning: There are no g++ version bounds defined for CUDA version 11.8
  warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')

and here is part of the error output:

/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h: In instantiation of ‘void torch::nn::ConvNdImpl<D, Derived>::reset_parameters() [with long unsigned int D = 1; Derived = torch::nn::ConvTranspose1dImpl]’:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:101:5:   required from ‘void torch::nn::ConvNdImpl<D, Derived>::reset() [with long unsigned int D = 1; Derived = torch::nn::ConvTranspose1dImpl]’
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:33:8:   required from here
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:105:27: error: cannot convert ‘const torch::enumtype::kFanIn’ to ‘int’
  105 |     init::kaiming_uniform_(
      |     ~~~~~~~~~~~~~~~~~~~~~~^
      |                           |
      |                           const torch::enumtype::kFanIn
  106 |         weight,
      |         ~~~~~~~            
  107 |         /*a=*/std::sqrt(5)); // NOLINT(cppcoreguidelines-avoid-magic-numbers)
      |         ~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:5,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/init.h:99:17: note:   initializing argument 3 of ‘at::Tensor torch::nn::init::kaiming_uniform_(at::Tensor, double, int, int)’
   99 |     FanModeType mode = torch::kFanIn,
      |     ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules.h:20,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:7,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h: In instantiation of ‘void torch::nn::ConvNdImpl<D, Derived>::reset_parameters() [with long unsigned int D = 3; Derived = torch::nn::Conv3dImpl]’:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:101:5:   required from ‘void torch::nn::ConvNdImpl<D, Derived>::reset() [with long unsigned int D = 3; Derived = torch::nn::Conv3dImpl]’
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:33:8:   required from here
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:105:27: error: cannot convert ‘const torch::enumtype::kFanIn’ to ‘int’
  105 |     init::kaiming_uniform_(
      |     ~~~~~~~~~~~~~~~~~~~~~~^
      |                           |
      |                           const torch::enumtype::kFanIn
  106 |         weight,
      |         ~~~~~~~            
  107 |         /*a=*/std::sqrt(5)); // NOLINT(cppcoreguidelines-avoid-magic-numbers)
      |         ~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:5,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/init.h:99:17: note:   initializing argument 3 of ‘at::Tensor torch::nn::init::kaiming_uniform_(at::Tensor, double, int, int)’
   99 |     FanModeType mode = torch::kFanIn,
      |     ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules.h:20,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:7,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h: In instantiation of ‘void torch::nn::ConvNdImpl<D, Derived>::reset_parameters() [with long unsigned int D = 2; Derived = torch::nn::Conv2dImpl]’:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:101:5:   required from ‘void torch::nn::ConvNdImpl<D, Derived>::reset() [with long unsigned int D = 2; Derived = torch::nn::Conv2dImpl]’
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:33:8:   required from here
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:105:27: error: cannot convert ‘const torch::enumtype::kFanIn’ to ‘int’
  105 |     init::kaiming_uniform_(
      |     ~~~~~~~~~~~~~~~~~~~~~~^
      |                           |
      |                           const torch::enumtype::kFanIn
  106 |         weight,
      |         ~~~~~~~            
  107 |         /*a=*/std::sqrt(5)); // NOLINT(cppcoreguidelines-avoid-magic-numbers)
      |         ~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:5,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/init.h:99:17: note:   initializing argument 3 of ‘at::Tensor torch::nn::init::kaiming_uniform_(at::Tensor, double, int, int)’
   99 |     FanModeType mode = torch::kFanIn,
      |     ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules.h:20,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:7,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h: In instantiation of ‘void torch::nn::ConvNdImpl<D, Derived>::reset_parameters() [with long unsigned int D = 1; Derived = torch::nn::Conv1dImpl]’:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:101:5:   required from ‘void torch::nn::ConvNdImpl<D, Derived>::reset() [with long unsigned int D = 1; Derived = torch::nn::Conv1dImpl]’
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:33:8:   required from here
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/modules/conv.h:105:27: error: cannot convert ‘const torch::enumtype::kFanIn’ to ‘int’
  105 |     init::kaiming_uniform_(
      |     ~~~~~~~~~~~~~~~~~~~~~~^
      |                           |
      |                           const torch::enumtype::kFanIn
  106 |         weight,
      |         ~~~~~~~            
  107 |         /*a=*/std::sqrt(5)); // NOLINT(cppcoreguidelines-avoid-magic-numbers)
      |         ~~~~~~~~~~~~~~~~~~~
In file included from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:5,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16,
                 from /root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/extension.h:5,
                 from /root/autodl-tmp/superpoint_transformer/src/dependencies/FRNN/frnn/csrc/bruteforce/bruteforce_cpu.cpp:2:
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/init.h:99:17: note:   initializing argument 3 of ‘at::Tensor torch::nn::init::kaiming_uniform_(at::Tensor, double, int, int)’
   99 |     FanModeType mode = torch::kFanIn,
      |     ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/gcc' failed with exit code 1

About this issue

  • State: open
  • Created 4 months ago
  • Comments: 17 (7 by maintainers)

Most upvoted comments

@better1593 It works when I change PyG from version 2.5.0 to 2.3.0. Thanks for your reply!

Hi, @QingXia1994! I had the same problem when using torch_geometric 2.5.0. I resolved it by downgrading to version 2.3.0. You may want to check out my previous comment.
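
For anyone hitting the same thing later: the downgrade is just a version pin, so something along these lines should do it (the exact command is a suggestion, not taken from the repo's install.sh):

pip install torch_geometric==2.3.0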

Hi Damien, I tried changing the torch version from 2.3.0 to 2.0.0 and it worked. But when I tried to run the command python src/train.py experiment=semantic/dales, I got a prompt saying I hadn't installed the torch_geometric module. So I installed torch_geometric directly with pip install torch_geometric and tried to run python src/train.py experiment=semantic/dales again. This time, I encountered the following error:

Processing...
/root/miniconda3/envs/spt/lib/python3.8/site-packages/torch_geometric/deprecation.py:26: UserWarning: 'makedirs' is deprecated, use 'os.makedirs(path, exist_ok=True)' instead
  warnings.warn(out)

  0%|                                                                                                                                           | 0/261 [00:05<?, ?it/s]
[2024-03-02 12:56:24,511][src.utils.utils][ERROR] -
Traceback (most recent call last):
  ...
  File "/root/autodl-tmp/superpoint_transformer/src/data/data.py", line 150, in v_edge_keys
    return [k for k in self.keys if k.startswith('v_edge_')]
TypeError: 'method' object is not iterable

I suspect it’s due to the version of torch_geometric, so I rolled back to version 2.3.0 and the problem was resolved.
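
For context, this looks like the PyG 2.4+ change where Data.keys went from a property to a method, so under 2.5.0 the line in the traceback iterates over a bound method. A version-agnostic workaround for src/data/data.py could look roughly like the sketch below (an illustration, not code from the repo):

# Sketch: tolerate both PyG <= 2.3 (keys is a property) and PyG >= 2.4 (keys is a method)
keys = self.keys() if callable(self.keys) else self.keys
return [k for k in keys if k.startswith('v_edge_')]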

I had the same issue with a custom dataset, and setting compile to False resolved it. Thanks @drprojects for the great work.
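
For reference, with the Hydra-style command used earlier in this thread, the flag can presumably be passed as a command-line override (assuming compile is exposed as a top-level config key, as the comments here suggest):

python src/train.py experiment=semantic/dales compile=False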

So right now, which versions do you have for the following libraries ?

Right now I'm using:

  • torch 2.0.0
  • torch geometric 2.3.0
  • FRNN 0.0.0

And as you and @noisyneighbour said, I can train normally after I set compile=False.

  1. Torch compilation failed due to [Bug] #error C++17 or later compatible compiler is required to use PyTorch open-mmlab/mmdeploy#2529: To solve this in src/dependencies/FRNN I had to change extra_compile_args to: extra_compile_args = {"cxx": ["-std=c++17"]}

Can you please share where you set extra_compile_args = {"cxx": ["-std=c++17"]} so that I can look into improving the installation process ?

Here is the line in FRNN setup.py
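
For anyone patching this by hand, the change is roughly the sketch below. It assumes FRNN builds its extension through torch.utils.cpp_extension.CUDAExtension; the extension name shown here is a placeholder, and only the source file from the error log above is listed:

# Sketch of the relevant part of src/dependencies/FRNN/setup.py (placeholder details).
from torch.utils.cpp_extension import CUDAExtension

# Recent PyTorch headers require C++17, so the host-compiler flag is bumped accordingly.
extra_compile_args = {"cxx": ["-std=c++17"]}  # previously an older -std setting

ext_modules = [
    CUDAExtension(
        name="frnn._C",  # placeholder extension name
        sources=["frnn/csrc/bruteforce/bruteforce_cpu.cpp"],  # plus the other .cpp/.cu files
        extra_compile_args=extra_compile_args,
    )
]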

  1. torch_geometric was not installed, and "pip install torch_geometric" installed version 2.5.0, which caused the same error as shown above (TypeError: 'method' object is not iterable)

Which PyG version did you install in the end ? 2.3.0 ?

Yes, 2.3.0 was the version that worked for me. I used CUDA 12.1 on an AWS EC2 G5 instance to build the environment.

I encountered the same issues as @better1593 when trying to set up the environment using the v2 version of the code.

  1. Torch compilation failed due to this issue. To solve this, in src/dependencies/FRNN I had to change extra_compile_args to: extra_compile_args = {"cxx": ["-std=c++17"]}
  2. torch_geometric was not installed, and "pip install torch_geometric" installed version 2.5.0, which caused the same error as shown above (TypeError: 'method' object is not iterable)
  3. Training with compile=True failed with the dynamo error shown above. After setting compile=False, I am able to train normally.