wgpu: [glsl-out] Incorrect rounding for max-float constant expression
In v0.12 (prior to gfx-rs/naga#2266), this WGSL constant
const f32max = 0x1.fffffep+127;
would inline in GLSL as 3.4028234663852886e38.
However, in v0.13 this results in the GLSL constant
const float f32max = 3.4028235e38;
When testing a provided max-float value (read from a texture, in my case) for >= f32max, this now yields false in a WebGL setup.
Interestingly, this scenario works fine on Metal, where the constant comes out as
constant float f32max = 340282350000000000000000000000000000000.0;
I suspect that GLSL applies different rounding here.
Other backends and situations might be affected as well, but so far I have only been able to verify this for glsl-out and to rule it out for msl-out. It would be worth exploring the effect on other backends and finding out whether this causes any other accuracy problems.
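To illustrate where the two spellings likely come from, here is a minimal Rust sketch; the mechanism is my assumption, not naga's actual writer code. Shortest round-trip formatting of f32::MAX at f32 width produces the v0.13 spelling, while widening to f64 first produces the v0.12 one:

```rust
fn main() {
    // Shortest decimal that round-trips through f32. Parsed back at f32
    // precision this is f32::MAX again, but parsed at f64 precision it
    // lands slightly above the true f32::MAX.
    println!("{:e}", f32::MAX); // prints: 3.4028235e38

    // Widening to f64 first yields the longer spelling, which parses
    // back to f32::MAX at either precision.
    println!("{:e}", f32::MAX as f64); // prints: 3.4028234663852886e38
}
```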
About this issue
- Original URL
- State: open
- Created a year ago
- Comments: 19 (19 by maintainers)
Commits related to this issue
- Workaround for https://github.com/gfx-rs/naga/issues/2436 — committed to rerun-io/rerun by Wumpf a year ago
- Update to wgpu 0.17 & latest egui (#2980) ### What * Depends on https://github.com/emilk/egui/pull/3253 * allows for a bit nicer callback handling on our side. Still not amazing but quite a bit ... — committed to rerun-io/rerun by Wumpf 10 months ago
- [glsl-out] Use 'as f64' when writing out f32 literals Fixes #2436. — committed to fornwall/naga by fornwall 10 months ago
This is surprising to me. The GLSL spec (3.20.6) spells out what a constant expression can be, and that wording seems to date all the way back to GLSL ES 1.0.
In a WebGL2 shader, it seems that indeed
3.4028235e38 != 3.4028234663852886e38
with precision highp float: https://fornwall.net/webgl2-float-comparison (at least on all browser and OS setups I’ve tested with).
As said above, this is in contrast to treating the numbers as 32-bit floats, for instance in Rust:
assert_eq!(3.4028235e38_f32, 3.4028234663852886e38_f32);
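The effect can be emulated in plain Rust; this sketch assumes the GLSL compilers above evaluate float literals at double precision:

```rust
fn main() {
    // At f64 precision the two spellings denote different values, which
    // is what the WebGL2 test page above observes:
    let a: f64 = "3.4028235e38".parse().unwrap();
    let b: f64 = "3.4028234663852886e38".parse().unwrap();
    assert_ne!(a, b);

    // At f32 precision both spellings round to f32::MAX and compare
    // equal, matching the assert_eq! above:
    let a: f32 = "3.4028235e38".parse().unwrap();
    let b: f32 = "3.4028234663852886e38".parse().unwrap();
    assert_eq!(a, b);
}
```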
So the effective behaviour is as if the shader used f64 in its computations, which causes
value_from_texture >= f32max
to be false.

Using f64 literals would indeed solve this problem. Are there any potential drawbacks? Let’s create a PR to raise visibility and perhaps get more eyes on it (I can do it later today).
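A minimal sketch of what such a fix could look like; write_f32_literal is a hypothetical helper for illustration, not the actual naga writer code:

```rust
/// Hypothetical helper mirroring the commit
/// "[glsl-out] Use 'as f64' when writing out f32 literals":
/// format an f32 literal via f64, so the printed decimal parses back
/// to the same f32 even if the consumer evaluates it at double precision.
fn write_f32_literal(value: f32) -> String {
    // Every f32 is exactly representable as an f64, so the shortest
    // round-trip decimal of `value as f64` recovers `value` exactly
    // at either parsing precision.
    format!("{:e}", value as f64)
}

fn main() {
    assert_eq!(write_f32_literal(f32::MAX), "3.4028234663852886e38");
}
```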