intel-extension-for-pytorch: Failed to compile from sources

Environment:

  • System: Ubuntu 20.04
  • gcc: 9.3.0
  • python: 3.8
  • pytorch: 1.9.1+cpu
  • ipex: master

I followed the README instructions to compile ipex from source, but the build failed. I also tried building ipex against torch-1.10+cpu, which failed as well. The error message is as follows:


/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.cpp: In function ‘at::Tensor torch_ipex::autocast::frozen_batch_norm(const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&)’:
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.cpp:326:66: error: ‘AutocastCPU’ is not a member of ‘c10::DispatchKey’; did you mean ‘AutocastCUDA’?
  326 |   c10::impl::ExcludeDispatchKeyGuard no_autocastCPU(DispatchKey::AutocastCPU);
      |                                                                  ^~~~~~~~~~~
      |                                                                  AutocastCUDA
In file included from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/runtime/operator.h:13,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/ir/ir.h:7,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/api/method.h:5,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/api/object.h:6,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/jit/frontend/tracer.h:9,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/generated/variable_factories.h:12,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:7,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /root/anaconda3/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                 from /root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.cpp:1:
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.cpp: At global scope:
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/cpu/BatchNorm.cpp:336:32: error: ‘AutocastCPU’ is not a member of ‘c10::DispatchKey’; did you mean ‘AutocastCUDA’?
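For context, IPEX master registers its autocast kernels under the CPU autocast dispatch key, which is missing from torch 1.9.1 (it is present in PyTorch 1.10 and on master); torch 1.9.1 only knows AutocastCUDA, hence the diagnostic above. Below is a minimal sketch of the pattern involved (not the actual IPEX source) that only compiles against a PyTorch build defining that key:

// Minimal sketch, not IPEX code: this only compiles when
// c10::DispatchKey::AutocastCPU exists (PyTorch >= 1.10), which is exactly
// what the compiler complains about when building against torch 1.9.1.
#include <c10/core/DispatchKey.h>
#include <c10/core/impl/LocalDispatchKeySet.h>

void call_without_cpu_autocast() {
  // Exclude the CPU autocast key for the duration of this scope so the
  // wrapped op is not re-dispatched through autocast.
  c10::impl::ExcludeDispatchKeyGuard no_autocast_cpu(c10::DispatchKey::AutocastCPU);
  // ... invoke the underlying aten op here ...
}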

For PyTorch 1.10, the error message is:

/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: In function ‘void torch_ipex::autocast::TORCH_LIBRARY_IMPL_init_aten_AutocastCPU_84(torch::Library&)’:
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:164:72: error: could not convert template argument ‘& at::linalg_matrix_rank’ from ‘<unresolved overloaded function type>’ to ‘at::Tensor (*)(const at::Tensor&, double, bool)’
  164 |         &CPU_WrapFunction<DtypeCastPolicy::CAST_POLICY, SIG, SIG, &FUNC>:: \
      |                                                                        ^
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:445:1: note: in expansion of macro ‘MAKE_REGISTER_FUNC’
  445 | MAKE_REGISTER_FUNC(
      | ^~~~~~~~~~~~~~~~~~
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:165:13: error: ‘<expression error>::type’ has not been declared
  165 |             type::call);                                                   \
      |             ^~~~
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:165:13: note: in definition of macro ‘MAKE_REGISTER_FUNC’
  165 |             type::call);                                                   \
      |             ^~~~
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp: At global scope:
/root/downloads/intel-extension-for-pytorch/torch_ipex/csrc/autocast_mode.cpp:168:15: error: template-id ‘get_op_name<at::Tensor(const at::Tensor&, double, bool), at::linalg_matrix_rank>’ for ‘std::string torch_ipex::autocast::get_op_name()’ does not match any template declaration
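The PyTorch 1.10 failure is a different problem: an overload-resolution error rather than a missing symbol. IPEX master tracks PyTorch master, where the signature of at::linalg_matrix_rank has changed, so the function-pointer type hard-coded in the MAKE_REGISTER_FUNC call no longer matches any overload present in torch 1.10, and &at::linalg_matrix_rank is left as an "unresolved overloaded function type". A standalone toy example (not IPEX code, and not the real linalg_matrix_rank signatures) reproduces the same class of error:

// Toy illustration: taking the address of an overloaded function as a
// template argument only works if the expected function-pointer type
// matches exactly one overload.
#include <string>

std::string op(int x) { return "int overload"; }
std::string op(double tol, bool hermitian) { return "double/bool overload"; }

// The signature a registration template might expect.
using ExpectedSig = std::string (*)(const std::string&);

template <ExpectedSig Fn>
struct RegisterOp {};

// Uncommenting the next line yields the same family of diagnostic as above:
//   could not convert template argument '& op' from
//   '<unresolved overloaded function type>' to 'std::string (*)(const std::string&)'
// RegisterOp<&op> reg;

int main() { return 0; }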


About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 24 (12 by maintainers)

Most upvoted comments

Hi @morgan-bc, I created a branch for the IPEX 1.10 release (release/1.10); it sits on top of PyTorch 1.10. You can install PyTorch 1.10 and then build IPEX 1.10 from that branch to fix the error. As for the error itself, the root cause is that PyTorch master changed the signatures of some operators.
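For anyone hitting the same mismatch, the suggested fix roughly amounts to the following commands (a sketch; take the exact wheel index and build flags from the README on the release/1.10 branch):

# Install a CPU build of PyTorch 1.10, then build the matching IPEX branch.
pip install torch==1.10.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/intel/intel-extension-for-pytorch.git
cd intel-extension-for-pytorch
git checkout release/1.10
git submodule sync && git submodule update --init --recursive
python setup.py install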