core: Unifi Integration: Clients stuck Away
The problem
After upgrading past 105.5, my clients randomly move to the “away” state and are stuck there until the client actually disconnects or I restart Home Assistant. I have tried every version up to 107.7. Some are worse than others, but the issue remains in all versions after 105.5.
I suspect clients roaming/disconnecting and reconnecting very quickly as the source of the issue. I see several UniFi controller log events where the client first shows disconnected and then shows roamed to another access point, without a “connect” event. This behavior is also present while running 105.5, but there it does not appear to cause trouble.
Environment
| Key | Value |
|---|---|
| arch | x86_64 |
| dev | false |
| docker | true |
| hassio | false |
| os_name | Linux |
| os_version | 5.3.0-18-generic |
| python_version | 3.7.7 |
| version | 0.107.7 |
| virtualenv | false |
| frontend_version | 20200318.1 - latest |
UniFi controller: version 5.12.66 (Build: atag_5.12.66_13102)
- Home Assistant release with the issue: 106.0 -> 107.7
- Last working Home Assistant release (if known): 105.5
- Operating environment (Hass.io/Docker/Windows/etc.): Docker
- Integration causing this issue: UniFi
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/unifi
Problem-relevant configuration.yaml
Using the integration via the web page setup, no YAML config for Unifi.
Traceback/Error logs
Additional information
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 39 (32 by maintainers)
OK - I guess I’ve found the cause: in device_tracker there is a section of code that filters out clients whose SSID is not in the SSID filter list. For some reason, on calls to is_connected, the filter sometimes triggers.
So - I don’t have an in-depth understanding of why/how the ESSID for the client is being cleared after a disconnect, but this piece of code is the cause of the “short away” triggers for the clients.
@Kane610: any ideas why/how to fix it?

```python
if (
    not self.is_wired
    and self.controller.option_ssid_filter
    and self.client.essid not in self.controller.option_ssid_filter
):
    LOGGER.debug(
        "Updating UniFi tracked device %s:%s filtered!",
        self.entity_id,
        self.client.essid,
    )
    return False
```
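For context, here is a minimal standalone sketch of why a cleared ESSID trips this filter (the `is_filtered` helper and the SSID value are hypothetical, not the integration’s actual code): if the controller briefly reports an empty `essid` during a roam/disconnect, the membership test fails even though the client never left the network.

```python
# Hypothetical sketch: a cleared ESSID fails the membership test above.
option_ssid_filter = {"HomeSSID"}  # assumed example SSID filter

def is_filtered(is_wired: bool, essid: str) -> bool:
    """Mirror of the condition above: True means the client is treated as away."""
    return not is_wired and bool(option_ssid_filter) and essid not in option_ssid_filter

print(is_filtered(False, "HomeSSID"))  # False: client tracked normally
print(is_filtered(False, ""))          # True: cleared ESSID -> "filtered!" -> stuck away
```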
Resulting events:

```
2020-04-17 10:11:48 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.mike_s_pixel disconnect scheduled
2020-04-17 10:14:13 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.libbys_iphone: filtered!
2020-04-17 10:14:13 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.libbys_iphone: filtered!
2020-04-17 10:14:18 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.libbys_iphone disconnect scheduled
2020-04-17 10:14:43 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.libbys_iphone disconnect scheduled
2020-04-17 10:14:46 DEBUG (MainThread) [homeassistant.components.unifi.device_tracker] Updating UniFi tracked device device_tracker.libbys_iphone: filtered!
```
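The “disconnect scheduled” lines suggest the tracker does not mark a client away immediately: it starts a grace timer that is cancelled if the client reappears. A minimal sketch of that pattern, with hypothetical names (`TrackedClient`, `seen`, `schedule_disconnect`) that are not the integration’s actual API:

```python
import asyncio
from typing import Optional

class TrackedClient:
    """Hypothetical illustration of the 'disconnect scheduled' pattern."""

    def __init__(self, name: str, grace: float = 30.0) -> None:
        self.name = name
        self.grace = grace  # seconds to wait before marking the client away
        self.connected = True
        self._disconnect_task: Optional[asyncio.Task] = None

    def seen(self) -> None:
        """Client reported by the controller: cancel any pending disconnect."""
        if self._disconnect_task is not None:
            self._disconnect_task.cancel()
            self._disconnect_task = None
        self.connected = True

    def schedule_disconnect(self) -> None:
        """Client missing or filtered: mark away only after the grace period."""
        if self._disconnect_task is None:
            self._disconnect_task = asyncio.ensure_future(self._expire())

    async def _expire(self) -> None:
        await asyncio.sleep(self.grace)
        self.connected = False
        print(f"{self.name} marked away")
```

In the log above, the repeated “filtered!” lines keep the client looking disconnected, so the scheduled disconnect eventually fires and the client sticks in away.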
@Kane610 just some quick feedback: I’ve been running #33942 for three days, as well as #34067 for the past 24 hours. No issues so far - device tracking seems to work perfectly again. No false away markings when roaming or on non-UniFi APs.
The fix is now part of the HASS dev branch. Don’t expect it to be included prior to 0.109. Please try it out.
One more note: aiounifi/events.py contains:

```python
WIRELESS_CLIENT_ROAM = "EVT_WC_Roam"
```

The log file, however, shows the event as:

```
EVT_WU_Roam
```

I had also changed this in order to get the whole thing working more consistently.
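To make the mismatch concrete, here is a hypothetical sketch (the `dispatch` function and its return strings are illustrative, not aiounifi’s actual code) of how a wrong event key means roam events are silently dropped:

```python
# Constant as found in aiounifi/events.py (per the comment above)
WIRELESS_CLIENT_ROAM = "EVT_WC_Roam"

def dispatch(event_key: str) -> str:
    """Hypothetical dispatcher: unknown keys fall through and are ignored."""
    if event_key == WIRELESS_CLIENT_ROAM:
        return "refresh client state"
    return "ignored"

print(dispatch("EVT_WC_Roam"))  # "refresh client state"
print(dispatch("EVT_WU_Roam"))  # "ignored" - the key actually seen in the log
```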
@kylehendricks it does indeed look like the same issue mkmer mentions: clients roaming between APs. This will be fixed in 0.109.
I changed the whole backend from polling all data to push over websocket, but there have been so many issues that I will revert to a polling implementation, which has worked well for a long time.
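For illustration, a minimal sketch of the polling model being reverted to, with hypothetical `fetch` and `handle` callables rather than the integration’s real API:

```python
import asyncio
from typing import Awaitable, Callable, List

async def poll_clients(
    fetch: Callable[[], Awaitable[List[dict]]],
    handle: Callable[[List[dict]], None],
    interval: float = 15.0,  # hypothetical update interval in seconds
) -> None:
    """Polling model: refresh the full client list on a fixed interval.

    Simpler and more robust than websocket push, at the cost of update
    latency and extra load on the controller.
    """
    while True:
        handle(await fetch())          # pull the complete state every cycle
        await asyncio.sleep(interval)
```

With websocket push, by contrast, the controller sends individual events (connect, disconnect, roam) as they happen, so any missed or misnamed event, like the roam key discussed above, leaves tracked state stale.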