wiserHomeAssistantPlatform: The unit of sensor.wiser_lts [...] (°C) cannot be converted to the unit of previously compiled statistics (None).

For a while now I’ve been getting warnings like this in the HA log, usually once a day or so.

The unit of sensor.wiser_lts_target_temperature_main_bedroom (°C) cannot be converted to the unit of previously compiled statistics (None). Generation of long term statistics will be suppressed unless the unit changes back to None or a compatible unit. Go to https://my.home-assistant.io/redirect/developer_statistics to fix this

I’m not sure if it’s always the same sensor or if it keeps changing. I usually just follow the instructions and apply the “fix” as instructed, but that only solves the issue temporarily, until the warning shows up again. I’m pretty sure the unit keeps getting changed from None to °C and back again, which repeatedly triggers the warning even after it has been fixed.

This led me to believe there is an actual issue with the integration. It’s been happening for weeks, if not months, and has persisted across updates.

Integration v3.4.2. HA:

  • Core: 2024.1.6
  • Supervisor: 2023.12.1
  • Operating System: 11.4
  • Frontend: 20240104.0

HubR with a single channel, on a combi boiler without OpenTherm. Firmware 3.14.0.

About this issue

  • Original URL
  • State: open
  • Created 5 months ago
  • Comments: 60

Most upvoted comments

See, I almost knew that would happen!!! So, as schedules are very long, I thought they might be chunked too, but maybe not. Give it about 10 mins, then change the same entry in manifest.json to 1.5.9 and try again.

If you still get these errors (or are back to the previous error) then I think I’ll have to rework some sort of retry routine, as this is a very inconsistent issue (i.e. it works, then it doesn’t, then it does, then it doesn’t… etc.)

EDIT: The good news, however, is that it has got beyond where it was erroring before!

You can leave the changes as is and I will include them in the next release. I am leaving the issue open until it is released.

OK, I can see the issue happening in the log, but I can’t see an error logged either. This makes no sense!

Rather than me releasing yet another version, can you also modify an integration file for me, so we can see if we can capture something?

In custom_components/wiser there is a file called coordinator.py.

At the very bottom of this file, you will see:

        except Exception as ex:  # pylint: disable=broad-except
            raise UpdateFailed(ex) from ex

Can you change it to:

        except Exception as ex:  # pylint: disable=broad-except
            _LOGGER.error(
                "Unknown error fetching wiser (%s) data. %s.  Please report this error to the integration owner",
                f"{DOMAIN}-{self.config_entry.data.get(CONF_NAME)}",
                ex,
            )

Make sure to keep the right indentation.
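
For context, this is roughly where that block sits; a hedged sketch assuming the standard DataUpdateCoordinator pattern (the class name, method body and api call are my guesses, not copied from the integration):

    # Sketch only - the surrounding class and try body are assumed; just the
    # except block matches the edit above. _LOGGER, DOMAIN and CONF_NAME are
    # already defined/imported near the top of coordinator.py.
    from homeassistant.helpers.update_coordinator import DataUpdateCoordinator

    class WiserUpdateCoordinator(DataUpdateCoordinator):  # hypothetical name
        async def _async_update_data(self):
            try:
                # hypothetical call that pulls fresh data from the hub
                return await self.api.read_hub_data()
            except Exception as ex:  # pylint: disable=broad-except
                _LOGGER.error(
                    "Unknown error fetching wiser (%s) data. %s.  Please report this error to the integration owner",
                    f"{DOMAIN}-{self.config_entry.data.get(CONF_NAME)}",
                    ex,
                )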

I think this may stop them going unavailable, while still logging an error saying what has happened. I think it is maybe somehow related to these updates that don’t complete (with either success or failure).

You will need to restart HA for the change to take effect.

So, I’ve spent more time looking at this, and I don’t think it is that, but rather a hub connection timeout.

I have included timeouts in the retry logic, so if it does time out, it will try again up to 5 times. I have also increased the http timeout from 10s to 20s, so if your hub has dropped off wifi for a short time, it has 100s to come back before it is classed as an update failure. I did notice with my mesh wifi, though, that after the hub has been off wifi for 30s or so, a request comes back within 3s with a connect fail (presumably the hub has dropped out of the wifi device registry and it gets a no-route-to-host type error) and does not wait the full 20s. If yours does this, it may fail the update much sooner.

I have also added some more debug logging in the api to see what is going on and how long each request takes when it succeeds.
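
As a rough illustration of the retry-with-timeout and timing-log idea (names and exact shape here are assumptions; the real logic lives in the aiowiserheatapi library):

    # A minimal sketch, assuming aiohttp: retry a hub request up to 5 times
    # with a 20s timeout per attempt, and debug-log how long each one takes.
    import asyncio
    import logging
    import time

    import aiohttp

    _LOGGER = logging.getLogger(__name__)

    HTTP_TIMEOUT = 20  # raised from 10s to 20s
    MAX_RETRIES = 5    # 5 attempts x 20s -> up to 100s before an update fails

    async def fetch_hub_data(url: str) -> str:
        timeout = aiohttp.ClientTimeout(total=HTTP_TIMEOUT)
        last_exc = None
        for attempt in range(1, MAX_RETRIES + 1):
            start = time.monotonic()
            try:
                async with aiohttp.ClientSession(timeout=timeout) as session:
                    async with session.get(url) as resp:
                        body = await resp.text()
                _LOGGER.debug(
                    "hub request attempt %s took %.2fs",
                    attempt,
                    time.monotonic() - start,
                )
                return body
            except (asyncio.TimeoutError, aiohttp.ClientError) as exc:
                last_exc = exc  # hub may be off wifi for a moment; retry
        raise last_exc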

And I have updated the integration to stop entities going unavailable if the update fails. I’m not 100% sure about that, as it seems to be standard HA functionality to do that when an update fails, but I’m just too much of a crowd pleaser! It will still raise a warning in the logs, as I think it should; otherwise we are masking issues and it may not be easy to diagnose other symptoms.
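
A minimal sketch of one way to do that, assuming the entities are CoordinatorEntity subclasses (whether the integration does exactly this is my guess):

    # By default a CoordinatorEntity reports unavailable when the last
    # coordinator update failed; overriding `available` keeps it shown.
    from homeassistant.helpers.update_coordinator import CoordinatorEntity

    class WiserEntityExample(CoordinatorEntity):  # hypothetical name
        @property
        def available(self) -> bool:
            # default is `self.coordinator.last_update_success`
            return True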

This will all be in v3.4.6, which I will release later. That will then be the last release for a couple of months, so let me know how you get on.

Ok. Hold off on resetting your hub for a bit. I have a theory that the timeouts are not an http timeout, but the async job that runs the update being cancelled because the update took too long, and it is actually that which is timing out. With the more robust retry method, it could be exceeding some overall timeout.
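
To illustrate the theory (a toy example, not the integration’s code): an outer job timeout can cancel the whole update even though no single request ever hit its own http timeout:

    import asyncio

    async def slow_update():
        for attempt in range(5):     # a robust retry loop...
            await asyncio.sleep(30)  # ...stand-in for slow http requests

    async def main():
        try:
            # the async job that runs the update, with its own overall limit
            await asyncio.wait_for(slow_update(), timeout=60)
        except asyncio.TimeoutError:
            print("cancelled by the outer job timeout, not the http timeout")

    asyncio.run(main())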

So, they all match a timeout but one, so I focussed on that. If you look carefully, it never finished the update, but there is no error of any kind (no attempt to get status and no “update successful” with time taken). It then took another 5s before it went unavailable. That’s very odd - it’s like it just gave up. I wonder if it did time out but the logging doesn’t show this. I’ll have a look at that.

That leaves the question of why it times out so often when your wifi doesn’t show it disconnected. I’m wondering if it does disconnect for a short time and your wifi doesn’t show it if it’s brief. On your hub wifi signal sensor, what does your uptime look like?

The json decode errors are fine. I am initially querying the hub on http/1.0, and when the hub does not honour this request properly and sends a partial chunk, you will see it retry on http/1.1 and succeed. It will do this up to 5 times before it fails, and it never gets that far, apart from the outright timeouts.
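
The fallback looks something like this (a simplified sketch; the real retry logic is in the aiowiserheatapi library and may differ):

    import json

    import aiohttp

    async def get_hub_json(url: str) -> dict:
        # first try http/1.0; if the body is a truncated chunk and fails to
        # parse, retry on http/1.1, which supports chunked responses
        # (the real code retries up to 5 times)
        for version in (aiohttp.HttpVersion10, aiohttp.HttpVersion11):
            async with aiohttp.ClientSession(version=version) as session:
                async with session.get(url) as resp:
                    text = await resp.text()
            try:
                return json.loads(text)
            except json.JSONDecodeError:
                continue
        raise ValueError("invalid json from hub on both http versions")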

Ok, that’s great, but I don’t understand why all your sensors are going unavailable. I would have understood it when updates were failing, but now they are not, so that makes no sense. I’ll release this update, as it has a lot of other fixes too, and let’s see how we get on.

I have noticed on mine that the performance of some actions is slower now, and this enhancement may need some more work to improve that, but I think we have the foundation to work with. Thanks for testing.

Right, I think we have got to the bottom of the issue - whether we have got the fix is another question! 😃

So, it seems your and @dpgh947’s issues are caused by your hubs sending the response chunked. Previous work to fix issues with changes in aiohttp forced the use of http/1.0, which does not support chunked responses. As such, you only get a partial response, which is not valid json (and also not enough to be of use anyway).

So the answer is to try using http/1.1 (which supports chunking) and see if that fixes it out of the box, or whether it needs something more to gather all these chunks and put them together.

My hub is not doing this (it seems only a very few are), so I would appreciate it if you could try something for me, to test whether we have fixed it or need to do more work.

In your config/custom_components/wiser directory you have a manifest.json file. In there, you will see a reference to aiowiserheatapi with a version of 1.5.7. Can you change this to 1.5.8, save the file and restart HA? Then tell me whether you are still getting these errors.
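
If it helps, the same edit can also be done programmatically. This is just a sketch - the layout of the manifest.json keys and the exact pin syntax ("==") are assumptions, so check your file first:

    import json
    from pathlib import Path

    manifest = Path("config/custom_components/wiser/manifest.json")
    data = json.loads(manifest.read_text())

    # bump any aiowiserheatapi pin to the test version (pin syntax assumed)
    data["requirements"] = [
        "aiowiserheatapi==1.5.8" if r.startswith("aiowiserheatapi") else r
        for r in data.get("requirements", [])
    ]
    manifest.write_text(json.dumps(data, indent=2))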

If this fixes it, I will release this update to everybody as it may be affecting more people than are reporting it.

Thanks

OK, good (he says!), that’s what I wanted to capture if it happened again. I’ll look at this data and see what’s wrong and whether we can overcome it.

OK, this is clearly your hub sending some erroneous data, which is the root cause of these issues. I need to think about how we capture this to see what it is, or whether I can validate the data better and accommodate it. Give me a few days.