onnxruntime: [Web/js/webgpu] `error: cannot assign to value expression of type 'u32'`

Describe the issue

I modified the transformers.js demo site to test the webgpu backend, and it logged many warnings and errors. I’ve removed duplicates and included them below:

Tint WGSL reader failure: :69:14 error: cannot assign to value expression of type 'u32'
      indices[2] -= sizeInConcatAxis[inputIndex - 1u];
             ^

:69:7 note: 'let' variables are immutable
      indices[2] -= sizeInConcatAxis[inputIndex - 1u];
      ^^^^^^^

:65:9 note: let 'indices' declared here
    let indices = o2i_output(global_idx);
        ^^^^^^^


 - While validating [ShaderModuleDescriptor]
 - While calling [Device].CreateShaderModule([ShaderModuleDescriptor]).
[Invalid ShaderModule] is invalid.
 - While validating compute stage ([Invalid ShaderModule], entryPoint: main).
 - While calling [Device].CreateComputePipeline([ComputePipelineDescriptor]).
[Invalid ComputePipeline] is invalid.
 - While Validating GetBindGroupLayout (0) on [Invalid ComputePipeline]
Tint WGSL reader failure: :74:14 error: cannot assign to value expression of type 'u32'
      indices[1] -= sizeInConcatAxis[inputIndex - 1u];
             ^

:74:7 note: 'let' variables are immutable
      indices[1] -= sizeInConcatAxis[inputIndex - 1u];
      ^^^^^^^

:70:9 note: let 'indices' declared here
    let indices = o2i_output(global_idx);
        ^^^^^^^


 - While validating [ShaderModuleDescriptor]
 - While calling [Device].CreateShaderModule([ShaderModuleDescriptor]).
An uncaught WebGPU validation error was raised: [Invalid BindGroupLayout] is invalid.
 - While validating [BindGroupDescriptor] against [Invalid BindGroupLayout]
 - While calling [Device].CreateBindGroup([BindGroupDescriptor]).
An uncaught WebGPU validation error was raised: [Invalid CommandBuffer] is invalid.
 - While calling [Queue].Submit([[Invalid CommandBuffer]])

The output is also incorrect: “Hello, how are you?” -> “Bonjour, comment :”; it should be “Bonjour, comment êtes-vous?” (which is actually quite close, so not bad!)

To reproduce

  1. Using yesterday’s CI build: https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1104462&view=artifacts&pathAsName=false&type=publishedArtifacts

  2. You can replace the import in the demo site (source code available here), but the error should be thrown when running any example app.

  3. Models tested: t5-small and whisper.tiny.en

Urgency

blocks webgpu release

ONNX Runtime Installation

Other / Unknown

ONNX Runtime Version or Commit ID

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1104462&view=artifacts&pathAsName=false&type=publishedArtifacts

Execution Provider

‘webgpu’ (WebGPU)

About this issue

  • State: closed
  • Created a year ago
  • Comments: 18 (13 by maintainers)

Most upvoted comments

Speaking of tensor ops, I had to extend the class and add these: https://github.com/dakenf/stable-diffusion-webgpu-minimal/blob/main/src/lib/Tensor.ts
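The linked Tensor.ts extends the library’s tensor class with extra element-wise ops. A minimal standalone sketch of the idea (`SimpleTensor` and its methods are illustrative names only, not the actual onnxruntime-web or transformers.js API):

```typescript
// Hypothetical sketch: a thin tensor wrapper extended with element-wise
// helper ops that the base library does not provide.
class SimpleTensor {
  constructor(public data: Float32Array, public dims: number[]) {}

  // Element-wise multiply by a scalar, returning a new tensor.
  mulScalar(s: number): SimpleTensor {
    const out = new Float32Array(this.data.length);
    for (let i = 0; i < this.data.length; i++) out[i] = this.data[i] * s;
    return new SimpleTensor(out, [...this.dims]);
  }

  // Element-wise add of two same-shaped tensors.
  add(other: SimpleTensor): SimpleTensor {
    if (this.data.length !== other.data.length) {
      throw new Error("shape mismatch");
    }
    const out = new Float32Array(this.data.length);
    for (let i = 0; i < this.data.length; i++) {
      out[i] = this.data[i] + other.data[i];
    }
    return new SimpleTensor(out, [...this.dims]);
  }
}

const a = new SimpleTensor(new Float32Array([1, 2, 3]), [3]);
const b = a.mulScalar(2).add(a);
console.log(Array.from(b.data)); // -> [3, 6, 9]
```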

Yeah, I can take a look. There was a larger change in the index helper which might have caused a regression. I can run a good number of the transformers.js models in an automated way … just running it now to see if there is more.

I opened 2 new issues for the perf and tensor-object topics. Could we close this one?

Sorry, dropped the ball for a little while. Back on this one today.

Hm, I see both t5-small and whisper.tiny running much faster, but I test the encoder and decoder separately and use fp32 models. Let me take a look.

Yes. It can be fixed by changing `let indices` to `var indices` in concat.ts (I forgot to make a PR for that).
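For reference, a minimal sketch of what that change does to the generated shader, using the lines from the Tint error above (in WGSL, `let` declares an immutable binding, while `var` declares a mutable local, so the in-place `-=` only compiles with `var`):

```wgsl
// Before (rejected by Tint: 'let' variables are immutable):
//   let indices = o2i_output(global_idx);
//   indices[2] -= sizeInConcatAxis[inputIndex - 1u];

// After (compiles: 'var' declares a mutable local):
var indices = o2i_output(global_idx);
indices[2] -= sizeInConcatAxis[inputIndex - 1u];
```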