nginx-proxy-manager: 2.10.0 unable to start on clean install
Checklist
- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image?
  - Yes ~~/ No~~
- Are you sure you're not using someone else's docker image?
  - Yes ~~/ No~~
- Have you searched for similar issues (both open and closed)?
  - Yes ~~/ No~~
Describe the bug
The :latest and 2.10.0 images fail to start, either with an existing configuration or with a clean install.
Nginx Proxy Manager Version
2.10.0
To Reproduce
Steps to reproduce the behavior:
- Start a container
- Watch it fail
Expected behavior
The container should start
Screenshots
```
➜ lb-pi003 docker compose up -d && docker compose logs -f app
[+] Running 3/3
⠿ Network lb-pi003_default Created 0.8s
⠿ Container lb-pi003-db-1 Started 27.7s
⠿ Container lb-pi003-app-1 Started 18.7s
lb-pi003-app-1 | s6-rc: info: service s6rc-oneshot-runner: starting
lb-pi003-app-1 | s6-rc: info: service s6rc-oneshot-runner successfully started
lb-pi003-app-1 | s6-rc: info: service fix-attrs: starting
lb-pi003-app-1 | s6-rc: info: service fix-attrs successfully started
lb-pi003-app-1 | s6-rc: info: service legacy-cont-init: starting
lb-pi003-app-1 | s6-rc: info: service legacy-cont-init successfully started
lb-pi003-app-1 | s6-rc: info: service prepare: starting
lb-pi003-app-1 | ❯ Configuring npmuser ...
lb-pi003-app-1 | id: 'npmuser': no such user
lb-pi003-app-1 | ❯ Checking paths ...
lb-pi003-app-1 | ❯ Setting ownership ...
lb-pi003-app-1 | s6-rc: fatal: timed out
lb-pi003-app-1 | s6-sudoc: fatal: unable to get exit status from server: Operation timed out
lb-pi003-app-1 | /run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
```
Operating System
Rpi
Additional context
About this issue
- State: open
- Created a year ago
- Reactions: 26
- Comments: 135 (19 by maintainers)
Commits related to this issue
- attempt to workaround npmuser error (https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2753) — committed to deftdawg/homeassistant-addons by deftdawg a year ago
- Add Bunny.net DNS patched version of nginx proxy manager HA add-on. Includes workarounds for npm user error (https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2753) — committed to deftdawg/homeassistant-addons by deftdawg a year ago
Same issue here. Ubuntu 22.04 LTS (docker). Confirmed fix on rollback to 2.9.22
I can confirm that this is exactly the same issue I've been seeing on my side as well. Also running TrueNAS Scale; broken since 2.10.x.

Can confirm, I have the same issue on TrueNAS Scale, with the container getting stuck at `chown -R xx:xx /opt/certbot`, exactly as in https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2753#issuecomment-1556126836. Very annoying.

I ended up bind-mounting /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh to a script I created on the host, with all of the same contents but with the chown line that causes the issue modified to run in the background:
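For reference, a minimal sketch of what such a host-side override could look like. The stock script's exact contents are not reproduced in this thread; the 1000:1000 ownership and the paths are taken from the logs above and may need adjusting:

```bash
#!/bin/sh
# Hypothetical replacement for /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh.
# Same idea as the stock ownership step, but the recursive chown on
# /opt/certbot (the step that hangs on some filesystems) is pushed into
# the background so the s6 "prepare" stage is not blocked by it.
PUID=${PUID:-1000}
PGID=${PGID:-1000}

echo "Setting ownership ..."
chown -R "$PUID:$PGID" /data /etc/letsencrypt
chown -R "$PUID:$PGID" /opt/certbot &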
When I do a Portainer recreate including "re-pull image", I'm getting the error:

I'm running on jc21/nginx-proxy-manager:2.

Back to 2.9.22 "solves" the problem for now 😃
Hello, I have a strange issue. My npm gets stuck at setting ownership, exactly on `chown -R 1000:1000 /opt/certbot`. The OS is TrueNAS Scale, but I use the plain docker :latest image (not the ix-sys version).
In fact, if I run that command from the shell, it never ends.
If I run these commands quickly after deploying, npm starts fine:
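The exact commands weren't captured in this scrape; one plausible equivalent, assuming the blocker is the recursive chown, is to kick that chown off detached yourself right after the container comes up (the container name is an assumption):

```bash
# Run the slow chown detached so the s6 prepare step is not left waiting on it.
docker exec -d lb-pi003-app-1 chown -R 1000:1000 /opt/certbot
```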
EDIT: the chown gets through about one file per second, so the deploy hits the timeout.
Same on an ARMv7. Back to 2.9.22.
After updating from 2.9.22 to 2.10.0 on my Synology DS it failed to start:
I did a fresh new install with minimal configuration and got the error:
Rolling back to 2.9.22 fixed the issue.
2.10.0 works on my laptop (Pop OS). Synology OS has no user with ID 1000. Maybe that’s a hint.
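For anyone following along, a minimal sketch of the rollback several commenters describe, assuming a compose service named app:

```bash
# Pin the last known-good tag, then recreate the service.
docker pull jc21/nginx-proxy-manager:2.9.22
# In docker-compose.yml, change the image line to:
#   image: 'jc21/nginx-proxy-manager:2.9.22'
docker compose up -d --force-recreate app
```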
@befantasy I’ve been away on my honeymoon and it was great thanks for asking.
FWIW this was meant to be fixed and is fixed on all the architectures that I have access to. I do not have access to an OrangePi or Synology setup however which makes things very difficult.
As for the S6 scripts, they don't have a timeout set by default or by me; that error message is incorrect and misleading. I doubt disk access is a contributing factor, but that ownership script can be heavy depending on the filesystem, so I can't rule it out.
I’ve created a docker image that has verbose output of the s6 scripts so we can work out exactly where things are failing.
For those affected, please use this docker image and post the previous 10 lines or so prior to the error:
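The verbose debug image tag itself isn't quoted in this scrape; once it is running, the log tail around the failure can be grabbed with standard commands (the service and container names below follow the compose output earlier in this issue):

```bash
# Last ~20 lines from the compose service...
docker compose logs --no-color app | tail -n 20
# ...or from the container directly.
docker logs --tail 20 lb-pi003-app-1
```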
Also mention:
Thanks for testing everyone.
@blaine07 The env vars still work as before if they are specified, so if they work for you, keep using them 😃
Hi @adammau2, after going back to tag/label 2.9.22 I had no issues at all. I can log in without any issues.
Some more info:
Mine won’t even start after one night. HDD pool.
Thanks for the ENV, they work 😃
This is how to fix it: https://github.com/truenas/charts/issues/1212#issuecomment-1568518666
Or you can just wait for 5-10mins like I did…
Yeah, it's a fresh install; v2 just removed my container completely since it failed. latest and 2.10.3 should be the exact same thing, but it only works if I pin the version.
Also, to add: plain jc21/nginx-proxy-manager (the default on install) fails to start too. It only works when I specify a version. Weird.
@rymancl in fact yes I am seeing more information in your output as expected.
@Sungray you can run `cert-prune` inside the docker container to clean up those archived files. Just be sure to back up your letsencrypt folder first.
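A quick sketch of that, with a hypothetical container name:

```bash
# Back up the letsencrypt data first (host side), then prune the archived
# certificates from inside the running container.
tar czf letsencrypt-backup.tar.gz ./letsencrypt
docker exec -it nginx-proxy-manager-app-1 cert-prune
```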
Not sure if I got the same issue, but when I have a PGID that is different from the PUID, a cold start of version 2.10.2 didn't work. docker-compose.yml:
I’ve been using these PGID and PUID for my other Dockers just fine and only got this issue with NPM.
My workaround is to set PGID to the same value as PUID.
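Since the compose file from this comment isn't reproduced above, here is a docker run sketch of the workaround; the 1000:1000 values and the volume paths are assumptions:

```bash
# Workaround: give the container the same PUID and PGID.
docker run -d --name npm \
  -e PUID=1000 -e PGID=1000 \
  -p 80:80 -p 81:81 -p 443:443 \
  -v "$PWD/data:/data" \
  -v "$PWD/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:2.10.2
```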
My environment:
Same here on a Synology NAS
It is a “cold boot” problem.
That’s correct @barndawgie !
@antoinedelia I’m not using docker-compose files, I normally enter the docker command on the CLI for the initial creation. After that I do the updates in Portainer by changing the tag.
When I run in Portainer as a stack:
I first get an error; when I restart the container, everything starts up without any issues.
Just did a test for you, so I didn't create any volume mappings.
When I go to the maintenance port I get the weirdest thing:
The strangest detail is that on my production container, which upgraded successfully, I'm seeing version v2.10.2…
And… when I login to the container that is created with the compose above I’m getting this on the CLI:
Ah… after a forced reload of the page I’m seeing 2.10.2 on the login page of my compose test. Grrr… browser cache 😃
I appreciate your taking the time to reply.
I realize this is a bumpy road with the changes needed to move forward, but THANK YOU for all the hard work you're putting into this for ALL of us. We appreciate you, your hard work and dedication. Thank YOU!!! 😀
I can confirm that github-uidgid works fine on an old install that worked on 2.9.22 and was failing on 2.10.0 and 2.10.1; I just had to move the mysql folder out of the NPM data folder 😃
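A sketch of that move, assuming a layout where the bundled MariaDB data lives under ./data/mysql and the db service mounts it as its own volume (paths are assumptions):

```bash
# Stop the stack, move the database files out of the NPM data mount,
# and point the db service's volume at the new location.
docker compose down
mv ./data/mysql ./mysql
# In docker-compose.yml, change the db volume from ./data/mysql:/var/lib/mysql
# to ./mysql:/var/lib/mysql, then bring the stack back up.
docker compose up -d
```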
@jc21 Adding the NET_BIND_SERVICE capability and making the container privileged worked for me. Tried both individually and neither works alone, but together they work. I'm on Ubuntu Server 20.04 LTS.
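A docker run sketch of that combination; the commenter's actual deployment details aren't reproduced here, so the ports and volume paths are assumptions:

```bash
# Both the capability and privileged mode, as described above.
docker run -d --name npm \
  --cap-add NET_BIND_SERVICE \
  --privileged \
  -p 80:80 -p 81:81 -p 443:443 \
  -v "$PWD/data:/data" \
  -v "$PWD/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:latest
```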
The only caveat is that I do get this error in the log (I don't know if it matters, since the service is working):
Edit: formatting
I was able to reproduce the error (nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)) outside Synology DSM using Debian 10 in a VM which makes debugging easier (hopefully). Synology uses Kernel version 4 and so does Debian 10.
Docker install on Debian 10 (buster,oldstable)
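The exact install commands aren't reproduced in this scrape; one common route on Debian 10 is Docker's convenience script:

```bash
# Install Docker Engine (and the compose plugin) via the convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```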
Follow the quick setup instructions: https://nginxproxymanager.com/guide/#quick-setup

Modified compose file:
Run and analyze
Log
No HTTP services will be available. Portainer is not needed to reproduce the error.
Ditto. 2.10.0 has the error “‘npmuser’: no such user” and will not start. Switch back to 2.9.22, and everything works.
Host Kernel: Linux 5.19.9-Unraid x86_64
Same issue on Ubuntu. Confirmed rollback works fine.
Hi, same issue here. Rolling back to 2.9.22 did the job for now…
Can confirm this issue on Synology for me. Rollback to 2.9.22 worked.