iree: error: pattern listener tracker fail; transform dialect interpreter failed
What happened?
The transform-dialect-based lowering seems to be hitting an issue with this test case.
Run with no_attention2.mlir from here:

```shell
iree-compile --iree-hal-target-backends=cuda --iree-input-type=mhlo --iree-hal-cuda-llvm-target-arch=sm_80 no_attention2.mlir
```
Sorry for the large test case. The failing dispatch looks like this, but for some reason isolating it into a small repro doesn’t reproduce the failure for me locally.
```mlir
builtin.module {
  func.func @_main_dispatch_342_generic_2048x2048_f32(%arg0: !flow.dispatch.tensor<readonly:tensor<2048x2048xf32>>, %arg1: !flow.dispatch.tensor<readonly:tensor<2048xf32>>, %arg2: !flow.dispatch.tensor<writeonly:tensor<2048xf32>>) {
    %cst = arith.constant 0.000000e+00 : f32
    %cst_0 = arith.constant 2.048000e+03 : f32
    %0 = flow.dispatch.tensor.load %arg0, offsets = [0, 0], sizes = [2048, 2048], strides = [1, 1] : !flow.dispatch.tensor<readonly:tensor<2048x2048xf32>> -> tensor<2048x2048xf32>
    %1 = flow.dispatch.tensor.load %arg1, offsets = [0], sizes = [2048], strides = [1] : !flow.dispatch.tensor<readonly:tensor<2048xf32>> -> tensor<2048xf32>
    %2 = tensor.empty() : tensor<2048xf32>
    %3 = linalg.fill ins(%cst : f32) outs(%2 : tensor<2048xf32>) -> tensor<2048xf32>
    %4 = linalg.generic {indexing_maps = [#map4, #map8], iterator_types = ["parallel", "reduction"]} ins(%0 : tensor<2048x2048xf32>) outs(%3 : tensor<2048xf32>) {
    ^bb0(%in: f32, %out: f32):
      %6 = arith.negf %in : f32
      %7 = arith.addf %out, %6 : f32
      linalg.yield %7 : f32
    } -> tensor<2048xf32>
    %5 = linalg.generic {indexing_maps = [#map, #map, #map, #map], iterator_types = ["parallel"]} ins(%4, %1, %3 : tensor<2048xf32>, tensor<2048xf32>, tensor<2048xf32>) outs(%2 : tensor<2048xf32>) {
    ^bb0(%in: f32, %in_1: f32, %in_2: f32, %out: f32):
      %6 = arith.addf %in, %in_1 : f32
      %7 = arith.divf %6, %cst_0 : f32
      %8 = arith.addf %in_2, %7 : f32
      linalg.yield %8 : f32
    } -> tensor<2048xf32>
    flow.dispatch.tensor.store %5, %arg2, offsets = [0], sizes = [2048], strides = [1] : tensor<2048xf32> -> !flow.dispatch.tensor<writeonly:tensor<2048xf32>>
    return
  }
}
```
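For readability, here is a NumPy sketch of what this dispatch computes, assuming the elided affine map attributes (`#map4`, `#map8`, `#map`) are the usual row-reduction and identity maps (they are not shown in the snippet, so this is an assumption):

```python
import numpy as np

# Illustrative model of the dispatch above; NOT IREE code.
# Assumes #map4/#map8 reduce over the second dimension and #map is identity.
rng = np.random.default_rng(0)
x = rng.standard_normal((2048, 2048)).astype(np.float32)  # %0
b = rng.standard_normal(2048).astype(np.float32)          # %1

# First linalg.generic: out += negf(in), starting from the zero fill (%3).
neg_row_sum = -x.sum(axis=1)

# Second linalg.generic: ((%4 + %1) / %cst_0) + %3, where %3 is all zeros.
result = (neg_row_sum + b) / 2048.0
```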
Full error log link
Steps to reproduce your issue
See above
What component(s) does this issue relate to?
Compiler
Version information
iree.git @ ab37989652aed11f7f46498c09b9ac515c83eaa3
Additional context
No response
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 31 (25 by maintainers)
Commits related to this issue
- Update LLVM to https://github.com/llvm/llvm-project/commit/33da608ecc0fddbb38b01415d32464db1d867df1 * Update LLVM to https://github.com/llvm/llvm-project/commit/33da608ecc0fddbb38b01415d32464db1d86... — committed to iree-org/iree by vmurali a year ago
- Update LLVM to llvm/llvm-project@33da608 (#13666) * Update LLVM to llvm/llvm-project@33da608 * Cherry-picked llvm/llvm-project@aa90948 * Updated HLO to tensorflow/mlir-hlo@65eb2d4 (includes CMak... — committed to iree-org/iree by vmurali a year ago
- Update LLVM to llvm/llvm-project@33da608 (#13666) * Update LLVM to llvm/llvm-project@33da608 * Cherry-picked llvm/llvm-project@aa90948 * Updated HLO to tensorflow/mlir-hlo@65eb2d4 (includes CMak... — committed to NatashaKnk/iree by vmurali a year ago
- Update LLVM to llvm/llvm-project@33da608 (#13666) * Update LLVM to llvm/llvm-project@33da608 * Cherry-picked llvm/llvm-project@aa90948 * Updated HLO to tensorflow/mlir-hlo@65eb2d4 (includes CMak... — committed to plaidml/iree by vmurali a year ago
https://github.com/openxla/iree/tree/main/build_tools/scripts/integrate
That is probably fine (~1 day left in Europe for the week anyway)
Yes, I was able to reproduce that. It’s a real issue (see https://github.com/openxla/iree/issues/13419#issuecomment-1537142315) that needs an upstream fix and some design thinking. I suspect the upstream behavior of ignoring an operation that is replaced with itself is motivated by exactly what happens here when that behavior is disabled. I’m OOO right now and will get back to this when I’m back next week.
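To make that last point concrete, here is a tiny hypothetical Python model (NOT MLIR/IREE code; all names are made up) of a rewrite listener that tracks op replacements, illustrating why ignoring a self-replacement may be deliberate upstream:

```python
# Toy model of a pattern-application listener that tracks replacements.
# If the no-op "replaced with itself" event is not dropped, the tracker
# hits an inconsistent state, roughly the failure named in the issue title.
class ReplacementTracker:
    def __init__(self, ignore_self_replacement=True):
        self.ignore_self_replacement = ignore_self_replacement
        self.replacements = {}  # old op -> new op

    def notify_replaced(self, old_op, new_op):
        if old_op == new_op:
            if self.ignore_self_replacement:
                return  # upstream behavior: silently drop the no-op event
            raise RuntimeError("pattern listener tracker fail")
        self.replacements[old_op] = new_op

tracker = ReplacementTracker()
tracker.notify_replaced("linalg.generic#1", "linalg.generic#2")
tracker.notify_replaced("linalg.fill", "linalg.fill")  # ignored, no error
```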