wslg: GPU acceleration not working with GUI applications in WSLg

Environment

Windows build number: Windows 11 (build 22000.194)
Your Distribution version: Ubuntu 20.04
Your WSLg version: 1.0.26

I’ve also installed the latest NVIDIA graphics driver (510.06, found here). Running uname -a gives the following Linux kernel version:

5.10.43.3-microsoft-standard-WSL2 #1 SMP Wed Jun 16 23:47:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

I can also see a listing for /dev/dxg, and checking glxinfo | grep OpenGL shows that the Mesa 21 drivers are in place and my MX150 GPU is being recognized:

OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (NVIDIA GeForce MX150)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 21.0.3
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.1 Mesa 21.0.3
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 21.0.3
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:

Also, looking at /usr/lib/x86_64-linux-gnu/dri I can see the d3d12_dri.so shared lib:

(screenshot: directory listing showing d3d12_dri.so)
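
For reference, both checks can be run from the shell (paths assume a default Ubuntu Mesa install):

ls -l /dev/dxg                                    # the virtual GPU device exposed to WSL2
ls /usr/lib/x86_64-linux-gnu/dri/ | grep d3d12    # should list d3d12_dri.so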

Steps to reproduce

Run glxgears in Ubuntu terminal.
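
A minimal way to reproduce and observe this (assuming mesa-utils and nvtop are installed from the Ubuntu repositories):

sudo apt install mesa-utils nvtop    # if not already installed
glxgears                             # terminal 1: run the demo
nvtop                                # terminal 2: watch for GPU utilization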

Expected behavior

I expect to see GPU usage in nvtop.

Actual behavior

When looking at the output from nvtop I see that nothing is being processed/rendered by my GPU:

(screenshot: nvtop shows no GPU utilization)

And looking at htop I see the following:

(screenshot: htop shows the load falling on the CPU)

So it looks like everything is still being done by the CPU. Is there anything I’m missing on my side? I’d love to have WSLg working correctly so that smooth GUI rendering is possible.

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 2
  • Comments: 19 (4 by maintainers)

Most upvoted comments

Okay this is funny:

amir@DESKTOP-E5QUC34:~$ LIBGL_ALWAYS_SOFTWARE=1 glxgears
5769 frames in 5.0 seconds = 1153.781 FPS
5935 frames in 5.0 seconds = 1186.876 FPS
5893 frames in 5.0 seconds = 1178.485 FPS
amir@DESKTOP-E5QUC34:~$ glxgears
873 frames in 5.0 seconds = 174.479 FPS
808 frames in 5.0 seconds = 161.454 FPS
634 frames in 5.0 seconds = 126.783 FPS

Hi @JeffR1992, the lower frame rate with the GPU is due to the read back from GPU memory to CPU memory needed to share the rendered image between Linux and the Windows host over the current remote desktop protocol. Because glxgears renders a very simple scene that even software rendering handles easily, the overhead of that copy appears significant compared to the full CPU rendering stack. For more complex 3D rendering, the benefit of GPU rendering outweighs the overhead of the copy, so it can achieve a better frame rate with the GPU. We have a plan to share the Linux container’s GPU memory directly with the Windows host with zero copy in a future release. Thanks!
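
For a heavier workload where the GPU path should come out ahead despite the copy, a comparison along these lines can be tried (glmark2 is only an example benchmark, assumed installed from the Ubuntu repositories):

sudo apt install glmark2
LIBGL_ALWAYS_SOFTWARE=1 glmark2    # llvmpipe software rendering
glmark2                            # D3D12/GPU rendering; heavier scenes should benefit despite the copy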

LIBGL_ALWAYS_SOFTWARE=1 glxgears = 1300-ish FPS

glxgears = 100-ish FPS

Definitely something is messed up. 🏆

This is Intel UHD Graphics 630. There’s a dedicated Nvidia GeForce GTX 1650 which “shouldn’t” be in use, but who really knows, right?

On some laptops, e.g. the Lenovo T15g, the NVIDIA adapter is activated when an external monitor is connected. You have to set the environment variable MESA_D3D12_DEFAULT_ADAPTER_NAME to nvidia to force OpenGL to use the NVIDIA graphics card. See GPU selection in WSLg.

export MESA_D3D12_DEFAULT_ADAPTER_NAME=nvidia
glxgears
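
To double-check which adapter Mesa ended up using, the renderer string can be inspected again (glxinfo from mesa-utils assumed):

MESA_D3D12_DEFAULT_ADAPTER_NAME=nvidia glxinfo -B | grep "OpenGL renderer"
# should now report D3D12 (NVIDIA GeForce ...) rather than the Intel iGPU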

@westonNexben, actually the performance hit can be more significant in a small demo: the cost of the copy is fixed by the size of the window, while the 3D rendering in a simple demo doesn’t gain much from the GPU (software rendering can still achieve decent performance there). Unfortunately, even on a unified-memory GPU there is still a cost to copying from the GPU render target to linear system memory. The current copy path maps GPU memory to the CPU, which involves un-swizzling on most GPUs and flushing the GPU pipeline, and the copy itself is done by the CPU (see https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/gallium/drivers/d3d12/d3d12_screen.cpp#L745); it also copies the entire buffer without tracking dirty regions from the previous frame. There might be ways to improve the performance of this copy path, but our goal is to eventually share GPU memory directly with the Windows host. Please refer to this comment: https://github.com/microsoft/wslg/issues/387#issuecomment-887731973, thanks!
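
As a rough back-of-the-envelope illustration of why the copy cost tracks window size rather than scene complexity (illustrative numbers, not measurements):

1920 x 1080 pixels x 4 bytes (RGBA) ≈ 8 MB copied per frame
8 MB/frame x 60 frames/s ≈ 0.5 GB/s of CPU copy work, regardless of how simple the scene is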

So essentially the test case on an NVIDIA 3080, with mpv: it cannot quite push 120 FPS at 1080p, and manages only 28 FPS at 2160p. That’s rather unusable.

(Windows 10
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.19045.2604

NVIDIA 3080 w/ driver 528.24)

Frame timings are also pretty wacky. mpv complains about an indirect context to boot, but it is allegedly accelerated. (Software rendering cannot handle the shader so one gets a black window for the video.)
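
For context, the kind of comparison being described looks roughly like the following; the video file name is only a placeholder, not from the original report:

mpv --no-config sample_2160p.mkv                            # GPU path via the D3D12 driver
LIBGL_ALWAYS_SOFTWARE=1 mpv --no-config sample_2160p.mkv    # software GL; the shader is too heavy, so the video window stays black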

I can confirm the glxgears behavior reported above! It happens in my WSL2 instance too, and not only with glxgears but with a GUI program I need to run called xcrysden. If I open it normally it is so sluggish that it cannot be used. By using the LIBGL_ALWAYS_SOFTWARE=1 option, I get back the expected performance.
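
The workaround described here amounts to forcing Mesa’s software rasterizer for that one application, e.g.:

LIBGL_ALWAYS_SOFTWARE=1 xcrysden    # fall back to llvmpipe for this program only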

The only time it worked without my needing to do anything was when I installed Windows 11 with the default NVIDIA driver that Windows detected, not the driver from the laptop manufacturer (Asus ROG Zephyrus) or from the NVIDIA site. This lasted until the next Windows update, when the driver was also updated and messed up WSL2 again.

I think there is a problem with the video driver that is not being investigated. Sadly I lack the knowledge to do so myself; I can only observe it.

@JeffR1992, I talked with an NVIDIA engineer about this nvtop behavior; they can observe the same issue and will look into it. To disable the GPU, you can set LIBGL_ALWAYS_SOFTWARE to 1. See https://github.com/microsoft/wslg/issues/445#issuecomment-917244282 for details. Thanks!
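
To apply that fallback beyond a single command, the variable can be exported for the current shell or persisted across sessions (bash assumed):

export LIBGL_ALWAYS_SOFTWARE=1                        # current shell session only
echo 'export LIBGL_ALWAYS_SOFTWARE=1' >> ~/.bashrc    # persist for future shells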