watchtower: .NET Core containers do not stop gracefully on Synology
Describe the bug
I'm following the directions to use environment variables (REPO_USER, REPO_PASS) for pulling an image from a private repo, because the config.json approach did not pick up my credentials. Watchtower recognises the credentials and pulls the new image successfully, and it shuts down the container, but then … a PANIC happens … and the container is DELETED!
To Reproduce
Steps to reproduce the behavior:
sudo docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock --env REPO_USER=gtaranti --env REPO_PASS=my-secret-password v2tec/watchtower -i 60 --debug --cleanup fnttapi
Environment
- Platform : Synology NAS
- Architecture : x64
- Docker version : 17.05.0-ce
Logs from running watchtower with the --debug option
time="2019-08-02T11:47:59Z" level=debug msg="Retrieving running containers"
time="2019-08-02T11:47:59Z" level=info msg="First run: 2019-08-02 11:48:59 +0000 UTC"
time="2019-08-02T11:48:59Z" level=debug msg="Checking containers for updated images"
time="2019-08-02T11:48:59Z" level=debug msg="Retrieving running containers"
time="2019-08-02T11:48:59Z" level=debug msg="Pulling gtaranti/fntapi:latest for /fnttapi"
time="2019-08-02T11:48:59Z" level=debug msg="Loaded auth credentials {gtaranti my-secret-password } for gtaranti/fntapi:latest"
time="2019-08-02T11:49:50Z" level=info msg="Found new gtaranti/fntapi:latest image (sha256:43d51995ad2288ae2a8034078922ea9b557ebb49b7324ba84e563be3263dd42c)"
time="2019-08-02T11:49:50Z" level=info msg="Stopping /fnttapi (e6eae80f10f36610e37e27b8c0720c6bb71e98bcdb374d486fe20a2cb9673614) with SIGTERM"
time="2019-08-02T11:49:51Z" level=debug msg="Removing container e6eae80f10f36610e37e27b8c0720c6bb71e98bcdb374d486fe20a2cb9673614"
2019/08/02 11:49:52 cron: panic running job: assignment to entry in nil map
goroutine 41 [running]:
github.com/v2tec/watchtower/vendor/github.com/robfig/cron.(*Cron).runWithRecovery.func1(0xc42000cf50)
/go/src/github.com/v2tec/watchtower/vendor/github.com/robfig/cron/cron.go:154 +0xbc
panic(0x894d00, 0xc4202bb140)
/usr/local/go/src/runtime/panic.go:458 +0x243
github.com/v2tec/watchtower/container.Container.runtimeConfig(0x1, 0xc4203f6ff0, 0xc4203bd180, 0x0)
/go/src/github.com/v2tec/watchtower/container/container.go:164 +0x3df
github.com/v2tec/watchtower/container.dockerClient.StartContainer(0xc420234d80, 0x1, 0xc4203f6f01, 0xc4203f6ff0, 0xc4203bd180, 0x0, 0x0)
/go/src/github.com/v2tec/watchtower/container/client.go:135 +0x6b
github.com/v2tec/watchtower/container.(*dockerClient).StartContainer(0xc4202bb770, 0xc4203ad701, 0xc4203f6ff0, 0xc4203bd180, 0xbd7440, 0x0)
<autogenerated>:19 +0x7f
github.com/v2tec/watchtower/actions.Update(0xb8c400, 0xc4202bb770, 0xc4200660b0, 0x1, 0x1, 0xc420010001, 0x685968, 0xc42001cf10)
/go/src/github.com/v2tec/watchtower/actions/update.go:91 +0x4fb
main.start.func1()
/go/src/github.com/v2tec/watchtower/main.go:212 +0x12c
github.com/v2tec/watchtower/vendor/github.com/robfig/cron.FuncJob.Run(0xc4203f6840)
/go/src/github.com/v2tec/watchtower/vendor/github.com/robfig/cron/cron.go:94 +0x19
github.com/v2tec/watchtower/vendor/github.com/robfig/cron.(*Cron).runWithRecovery(0xc42000cf50, 0xb857a0, 0xc4203f6840)
/go/src/github.com/v2tec/watchtower/vendor/github.com/robfig/cron/cron.go:158 +0x57
created by github.com/v2tec/watchtower/vendor/github.com/robfig/cron.(*Cron).run
/go/src/github.com/v2tec/watchtower/vendor/github.com/robfig/cron/cron.go:192 +0x48e
time="2019-08-02T11:49:59Z" level=debug msg="Checking containers for updated images"
time="2019-08-02T11:49:59Z" level=debug msg="Retrieving running containers"
time="2019-08-02T11:49:59Z" level=debug msg="Scheduled next run: 2019-08-02 11:50:59 +0000 UTC"
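For context on the panic message itself (a generic Go illustration, not watchtower's actual runtimeConfig code): "assignment to entry in nil map" is the panic Go raises when a program writes into a map that was declared but never initialized with make, which is what the stack trace suggests happens while the container's runtime configuration is being rebuilt for the restart.

package main

func main() {
	// A map variable declared without make() is nil. Reading from a nil map
	// returns zero values, but writing to it panics with
	// "assignment to entry in nil map".
	var labels map[string]string
	labels["com.centurylinklabs.watchtower.stop-signal"] = "SIGHUP" // panics here

	// Initializing the map first avoids the panic:
	// labels = make(map[string]string)
	// labels["com.centurylinklabs.watchtower.stop-signal"] = "SIGHUP"
}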
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 26 (12 by maintainers)
@gtaranti @piksel @simskij I found a solution to the problem. Thanks to @simskij for pointing me to the documentation, I've confirmed that labelling the image with the SIGHUP stop signal lets watchtower gracefully shut down and recreate the container.
@gtaranti This should solve the problem with your container too. Just add
LABEL com.centurylinklabs.watchtower.stop-signal="SIGHUP"
to your Dockerfile.
@simskij For me, this issue can be closed.
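For reference, a minimal Dockerfile sketch showing where that label goes (the base image, paths and entrypoint below are placeholders, not the actual fntapi build):

# Placeholder base image; substitute the .NET Core runtime image your app actually uses
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY ./publish/ .
# Tell watchtower to stop this container with SIGHUP instead of the default SIGTERM
LABEL com.centurylinklabs.watchtower.stop-signal="SIGHUP"
ENTRYPOINT ["dotnet", "fntapi.dll"]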
While I thank you for the idea @gtaranti, I agree with @JoachimVeulemans on this one. Changing the signal should be a per-container option one resorts to if nothing else works, not a default option for an entire stack.
@gtaranti I think this is not a good idea. Every container should be stopped as gracefully as possible. Only the .NET containers cause some trouble, so they need a little bit more force. There is no need to stop other containers with the same level of force; that might corrupt data in some instances when a container doesn't shut down properly… and that is not what you want.
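A per-container alternative (a sketch only, assuming the container is started manually with docker run; whatever other options your container needs are omitted here) is to attach the same watchtower label at run time, so only the troublesome .NET Core container gets the stronger stop signal while everything else keeps the default SIGTERM:

sudo docker run -d --name fnttapi --label com.centurylinklabs.watchtower.stop-signal=SIGHUP gtaranti/fntapi:latest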
@simskij do you think otherwise? Otherwise, problem solved, issue closed 😄
I presume you based your images on this guide?