unifi-protect-backup: Interrupted event not being backed up

I did notice one peculiarity.

While upgrading, I (accidentally) stopped the previous version in the middle of backing up an event, then upgraded and restarted (in debug mode to make sure the db was connecting OK). You can see the sequence here:

```
Dec  5 09:32:31 emp75 unifi_protect: 2022-12-05 09:32:31 [INFO] unifi_protect_backup.unifi_protect_backup: Backing up event: 638d2dfc0139bf03e4034612
Dec  5 09:32:42 emp75 unifi_protect: 2022-12-05 09:32:42 [INFO] unifi_protect_backup.unifi_protect_backup: Backed up successfully!
Dec  5 09:33:19 emp75 unifi_protect: 2022-12-05 09:33:19 [INFO] unifi_protect_backup.unifi_protect_backup: Backing up event: 638d2e03031dbf03e403461b
Dec  5 09:35:00 emp75 unifi_protect: 2022-12-05 09:35:00 [DEBUG] unifi_protect_backup.unifi_protect_backup: Config:
Dec  5 09:35:00 emp75 unifi_protect: 2022-12-05 09:35:00 [DEBUG] unifi_protect_backup.unifi_protect_backup:   address='192.168.1.18'
```

The event that was being backed up (638d2e03031dbf03e403461b) when the service was stopped has not been uploaded to OneDrive, and is not being picked up as 'missed'.

I've not tested whether subsequent restarts in the middle of an action result in the same miss.

_Originally posted by @Swallowtail23 in https://github.com/ep1cman/unifi-protect-backup/issues/50#issuecomment-1336558749_

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 36 (13 by maintainers)

Most upvoted comments

good catch, fixed

v0.8.3 is now running through the CI and should address all the issues discussed in this thread once it's published.

Am I correct in understanding that, with the change in buffer size, you are now able to back up all the events you expected for the full retention period, or is there still an underlying issue when using the container?

To answer your questions

  1. It's just however many "v"s you need for your desired logging level. So if you want to disable it, you just need: `-e VERBOSITY=""`

  2. I would consider the solution to this out of scope for this application, since it applies to all containers, but I do understand the concern. A quick Google search suggests you can limit podman's log size like so: https://stackoverflow.com/questions/64411977/how-to-control-podman-container-log-behaviour-ctr-log

  3. The downloads always go to an in-memory buffer and are then passed to rclone via stdin using its rcat command. I didn't want this application creating a tonne of disk wear, so unless rclone's destination is set to a disk, nothing will ever be written to disk. Hence not wanting to make the download buffer too big.
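The streaming pattern described in point 3 can be sketched roughly as below. This is a minimal illustration, not the tool's actual code: `stream_upload` and the `remote:unifi/video.mp4` destination are hypothetical names, and the real application's internals may differ.

```python
import io
import subprocess


def stream_upload(buffer: io.BytesIO, argv: list[str]) -> int:
    """Feed an in-memory buffer to a subprocess via stdin; returns its exit code.

    For rclone the argv would look like:
        ["rclone", "rcat", "remote:unifi/video.mp4"]   # hypothetical remote path
    Because the bytes flow straight from memory into the child's stdin,
    nothing is ever written to local disk (avoiding disk wear), at the cost
    of holding the whole download in RAM.
    """
    proc = subprocess.run(argv, input=buffer.getvalue())
    return proc.returncode
```

The trade-off mentioned above follows directly from this design: the buffer holds the entire event in memory, so its size caps the largest video that can be backed up.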

For multiple destinations, please follow #56; it was one of the main reasons for the v0.8 rewrite. While the feature hasn't been added yet, the application is now architected to support it in an upcoming release. It will require some thought about how to set up the config for this situation. I'm thinking of adding an alternative way to configure the application via YAML, which would allow much more complex destination rules while still keeping the existing CLI interface as an option.

Thanks for sharing, there is definitely something borked there. I will look into it ASAP.