core: Glances integration not working with 2023.5.0
### The problem
I’m getting the following error with the Glances integration since upgrading to 2023.5.0:
Config entry 'oakhurst-backup-server' for glances integration not ready yet: 'usage'; Retrying in background
### What version of Home Assistant Core has the issue?
core-2023.5.0
### What was the last working version of Home Assistant Core?
core-2023.5.5
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Glances
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/glances/
### Diagnostics information
_No response_
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
Related Log Messages:
Config entry 'oakhurst-backup-server' for glances integration not ready yet: 'usage'; Retrying in background
Unexpected error fetching glances - 100.73.226.95 data: 'usage'
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/homeassistant/helpers/update_coordinator.py", line 258, in _async_refresh
self.data = await self._async_update_data()
File "/usr/lib/python3.10/site-packages/homeassistant/components/glances/coordinator.py", line 39, in _async_update_data
return await self.api.get_ha_sensor_data()
File "/usr/lib/python3.10/site-packages/glances_api/__init__.py", line 149, in get_ha_sensor_data
mem_use += container["memory"]["usage"]
KeyError: 'usage'
### Additional information
_No response_
### About this issue
- Original URL
- State: closed
- Created a year ago
- Reactions: 2
- Comments: 39 (9 by maintainers)
### Commits related to this issue
- Update manifest.json Update API version to fix #92455 — committed to freeDom-/core by freeDom- a year ago
- Update manifest.json Update API version to fix #92455 — committed to freeDom-/core by freeDom- a year ago
- Update manifest.json Update API version to fix #92455 — committed to freeDom-/core by freeDom- a year ago
I was having a look into this issue. It is not a problem with the Docker container, but with the integration when it creates sensors for Docker containers' memory usage, which is not exposed by Glances in this case. If the information is not available, error handling is required here.
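For illustration, the kind of guard this would need looks roughly like the sketch below. This is hypothetical code, not the actual `glances_api` implementation: containers that do not report a memory `usage` value are simply skipped instead of raising a `KeyError`.

```python
# Hypothetical sketch of defensive aggregation, not the actual glances_api code:
# skip containers that do not expose memory usage instead of raising KeyError.
def sum_container_memory(containers: list[dict]) -> float:
    """Sum memory usage across containers, ignoring ones without the data."""
    mem_use = 0.0
    for container in containers:
        memory = container.get("memory") or {}
        usage = memory.get("usage")
        if usage is not None:
            mem_use += usage
    return mem_use
```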
A workaround can be to remove the container information from Glances entirely, either by using the Docker container mentioned above or by running Glances with the following command:
`glances -w --disable-webui --disable-plugin docker`
If I have time later I might have a look at the integration code to implement the error handling.
@apmillen @robinostlund I was having a similar issue running Glances in a Docker container on a Raspberry Pi 4 with `nicolargo/glances:alpine-latest-full`. It turned out that the cgroup memory controller was disabled on the Pi and Glances was not reporting any memory data. More information can be found here.
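If you want to confirm that this is the cause on your machine, one way is to check the `enabled` column of the `memory` line in `/proc/cgroups`. The snippet below is only a small sketch assuming a standard `/proc` layout:

```python
# Check whether the cgroup memory controller is enabled by parsing /proc/cgroups.
# Columns are: subsys_name  hierarchy  num_cgroups  enabled
with open("/proc/cgroups") as f:
    for line in f:
        if line.startswith("memory"):
            enabled = line.split()[3] == "1"
            print("memory controller enabled:", enabled)
            break
    else:
        print("memory controller not listed")
```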
I had to add `cgroup_enable=memory cgroup_memory=1` to `/boot/cmdline.txt` and reboot the Pi. Glances then started reporting memory data. Then reload the Glances integration in Home Assistant. Might be of some help to you.
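To double-check what Glances is actually reporting after the reboot, you can query its REST API directly. The snippet below is only a sketch: the host and port are placeholders, and the plugin endpoint is `/api/3/docker` on older Glances releases and `/api/3/containers` on newer ones, so adjust the URL to your setup.

```python
# Dump the raw container payload from the Glances REST API so you can see
# whether per-container memory stats are present. Host, port and endpoint
# below are assumptions; adjust them for your installation.
import json
import urllib.request

URL = "http://192.168.1.10:61208/api/3/containers"  # hypothetical address

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```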
The first PR is merged, but I did not have time to look at the other issue related to the raid plugin yet. Unfortunately I am not using the raid plugin myself and am unable to reproduce and test it; I need to set up a test environment first.
Looks like that API needs even more polishing… When I find the time I might also have a look into this issue and create another PR, so that a fix might find its way into one of the next releases…
My Python isn’t the best, but it looks like commit 8cbe394 (2 months ago), "Use get_ha_sensor_data method to update glances sensors (https://github.com/home-assistant/core/pull/83983)", changes the way HA pulls the data from the return value of python-glances-api. Perhaps it wasn’t constructed in a way that allows for the extra step of enumerating the arrays.
@anomandarisdragnipurake That was the final piece of the puzzle!
Added `cgroup_enable=memory cgroup_memory=1` to the end of the line in `/boot/cmdline.txt`, rebooted the Pi, reloaded the Glances integration, and all is working again.
Note as well that, as @robinostlund said, you don’t need the `/proc/meminfo` line either in the Docker container. Another note is that both the `alpine-latest-full` and `latest-full` Docker containers work. Thank you all for your help. Hopefully this will help some others with their broken integrations.