frigate: 0.11.0 release - FFmpeg hardware acceleration not working inside Docker container on i7 11th-gen Rocket Lake
Describe the problem you are having
Hardware acceleration works on the host, just not inside the container in the new release 0.11.0.
If I roll back to v10, all is fine.
I know the FFmpeg build has changed in this release, and this HWACCEL issue has been a nightmare for many projects ever since Intel split its VA-API drivers into the legacy i965 and the newer iHD driver.
I've spent a few hours on this and just can't get it working, so I've gone back to v10 for now.
The error I get when attempting hardware acceleration inside the Frigate container via hwaccel_args of '-hwaccel auto':
[h264 @ 0x55c00073de80] Failed to end picture decode issue: 23 (internal decoding error).
[h264 @ 0x55c00073de80] hardware accelerator failed to decode picture
Error while decoding stream #0:0: Input/output error
The error I get when attempting acceleration inside the Frigate container via hwaccel_args of '-hwaccel_output_format qsv -c:v h264_qsv -hwaccel_device /dev/dri/renderD128':
[h264_qsv @ 0x55fe09463f40] Error during QSV decoding.: device failed (-17)
Error while decoding stream #0:0: Input/output error
The output of vainfo inside the container
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
VAProfileHEVCMain : VAEntrypointEncSliceLP
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointEncSlice
VAProfileHEVCMain10 : VAEntrypointEncSliceLP
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile1 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
VAProfileVP9Profile3 : VAEntrypointVLD
VAProfileHEVCMain12 : VAEntrypointVLD
VAProfileHEVCMain12 : VAEntrypointEncSlice
VAProfileHEVCMain422_10 : VAEntrypointVLD
VAProfileHEVCMain422_10 : VAEntrypointEncSlice
VAProfileHEVCMain422_12 : VAEntrypointVLD
VAProfileHEVCMain422_12 : VAEntrypointEncSlice
VAProfileHEVCMain444 : VAEntrypointVLD
VAProfileHEVCMain444 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_10 : VAEntrypointVLD
VAProfileHEVCMain444_10 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_12 : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointEncSliceLP
VAProfileHEVCSccMain10 : VAEntrypointVLD
VAProfileHEVCSccMain10 : VAEntrypointEncSliceLP
VAProfileHEVCSccMain444 : VAEntrypointVLD
VAProfileHEVCSccMain444 : VAEntrypointEncSliceLP
VAProfileAV1Profile0 : VAEntrypointVLD
VAProfileHEVCSccMain444_10 : VAEntrypointVLD
VAProfileHEVCSccMain444_10 : VAEntrypointEncSliceLP
The output of vainfo on the host
error: can't connect to X server!
libva info: VA-API version 1.14.0
libva info: Trying to open /usr/lib64/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.14 (libva 2.14.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.4.3 (74f40ee)
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointFEI
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointFEI
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointFEI
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointFEI
VAProfileHEVCMain : VAEntrypointEncSliceLP
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointEncSlice
VAProfileHEVCMain10 : VAEntrypointEncSliceLP
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile1 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
VAProfileVP9Profile3 : VAEntrypointVLD
VAProfileHEVCMain12 : VAEntrypointVLD
VAProfileHEVCMain12 : VAEntrypointEncSlice
VAProfileHEVCMain422_10 : VAEntrypointVLD
VAProfileHEVCMain422_10 : VAEntrypointEncSlice
VAProfileHEVCMain422_12 : VAEntrypointVLD
VAProfileHEVCMain422_12 : VAEntrypointEncSlice
VAProfileHEVCMain444 : VAEntrypointVLD
VAProfileHEVCMain444 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_10 : VAEntrypointVLD
VAProfileHEVCMain444_10 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_12 : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointEncSliceLP
VAProfileHEVCSccMain10 : VAEntrypointVLD
VAProfileHEVCSccMain10 : VAEntrypointEncSliceLP
VAProfileHEVCSccMain444 : VAEntrypointVLD
VAProfileHEVCSccMain444 : VAEntrypointEncSliceLP
VAProfileAV1Profile0 : VAEntrypointVLD
VAProfileHEVCSccMain444_10 : VAEntrypointVLD
VAProfileHEVCSccMain444_10 : VAEntrypointEncSliceLP
intel_gpu_top works fine and shows the card as active inside the container, and when running ffmpeg with acceleration on the host, intel_gpu_top shows the appropriate hardware-acceleration activity.
It seems the FFmpeg build included in the 0.11.0 release of Frigate doesn't use Intel's iHD media driver for hardware acceleration, though this is just a guess…
Version
0.11.0
Frigate config file
hwaccel_args: -hwaccel_output_format qsv -c:v h264_qsv -qsv_device /dev/dri/renderD128
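For context, this is the global ffmpeg section of the Frigate config; a minimal sketch of where the line sits in a full config (the camera name and RTSP URL below are placeholders, not from my setup):

```yaml
ffmpeg:
  hwaccel_args: -hwaccel_output_format qsv -c:v h264_qsv -qsv_device /dev/dri/renderD128

cameras:
  example_cam:            # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream   # placeholder URL
          roles:
            - detect
```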
Relevant log output
FFmpeg hardware acceleration failures (see above).
FFprobe output from your camera
n/a
Frigate stats
n/a
Operating system
Other Linux
Install method
Docker Compose
Coral version
PCIe
Network connection
Wired
Camera make and model
n/a
Any other information that may be helpful
No response
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 122 (3 by maintainers)
I have this exact same issue on an Intel 12th-gen CPU.
Thanks @NickM-27 - driver fix resolves hardware issues on newer Alder Lake NUCs.
Looking into using a newer driver. Jellyfin was used in RC1 and it had good results even with hosts that had older drivers, so 🤞 a newer driver will mean better backwards compatibility with hosts that have older drivers installed.
Unfortunately I have no Intel devices to test, but my AMD APU works fine with the newer driver (it also never had issues, even though my Unraid host's driver was newer than what is currently included). AMD seems less picky than Intel in this case.
Definitely don’t want this to be an issue long term, but nice that there is a manual intermediary fix.
Just wanted to say I also have this issue with an i7-11700K running on unRAID 6.10.2.
0.11 r1 worked but everything since fails using “hwaccel_args: -c:v h264_qsv”.
I tried "hwaccel_args: -hwaccel qsv -qsv_device - /dev/dri/renderD128" and it doesn't fail; however, the video load on my CPU is 0%.
I did the following
Now using “hwaccel_args: -c:v h264_qsv” everything works as expected using the stable version of 0.11.0 and I see video load on my cpu.
new-intel-driver yielded the same error as posted in 3941#issuecomment-1311092042, but testing-driver works fine for me.
I'll give this a go on my 12th-gen Intel setup this evening and feed back.
My takeaway from all this is that if your host distro matches the container distro, and both are up to date, hardware acceleration should work as long as the driver versions match.
So run vainfo inside the container shell, and run vainfo again on the host; you should see the iHD driver detected in both, with matching versions.
However, if the host driver is upgraded, or the host distro ships a different iHD driver version, Intel hardware acceleration fails on the newer 'Lake' CPUs. Intel split the drivers up a while back: the old i965 driver is for older CPUs, the newer iHD driver is for recent ones.
Also, as the very helpful @NickM-27 has noted a few times, sometimes you may not even be hardware-accelerating: FFmpeg will silently fall back to CPU decoding, and with only a handful of cameras you won't notice it on the newer-generation chips. This is why, as suggested numerous times, you should start an instance of intel_gpu_top and confirm the 'Video' engine actually shows a percentage equivalent to the decode load.
Actually, changing to:
ffmpeg:
  hwaccel_args: -c:v h264_qsv
fixed it for me. I don't think I was passing the device to the container before, so it was never doing acceleration anyway; once I fixed that and changed the above, I re-upgraded and it's working…
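For anyone else hitting the "device was never passed through" part: the render node has to be mapped into the container. A minimal docker-compose sketch (service name and image tag are illustrative, not a definitive setup):

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable   # illustrative tag
    devices:
      - /dev/dri/renderD128                 # Intel VA-API/QSV render node
```

Without that devices entry, hwaccel_args silently does nothing and FFmpeg falls back to CPU decoding.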
intel-gpu-top: Intel Tigerlake (Gen12) @ /dev/dri/card0 - 75/ 74 MHz 0% RC6; 98 irqs/s
Same issue on an 11th-gen NUC. Is there a way to implement this workaround, or should I roll back to v10?