cookiecutter-django: Database authentication fails with Docker

What happened?

Postgres authentication fails after initial setup for a Cookiecutter project using Docker.

  • Encountered locally and in production on multiple Digital Ocean droplets
  • Occurs in the current cookiecutter release and in older versions from the last few weeks

The error:

postgres_1      | 	Connection matched pg_hba.conf line 95: "host all all all md5"
postgres_1      | 2018-06-03 16:19:08.175 UTC [39] FATAL:  password authentication failed for user "xxx"
postgres_1      | 2018-06-03 16:19:08.175 UTC [39] DETAIL:  Password does not match for user "xxx".

I first hit this in production and thought it was something I had done. I tried to resolve it there in multiple ways: setting up new droplets, updating various packages, and trying many suggested solutions, all without success.

To debug it further this morning, I set up a fresh cookiecutter project and deployed it to Digital Ocean. I changed nothing other than the ALLOWED_HOSTS and DJANGO_SENTRY_DSN settings.

When building and bringing the stack up locally, this error appeared on my local machine for the first time.

For context, this error was mentioned in a Stack Overflow post a few months ago. The answer from @webyneter works locally; in production, however, killing the data volume you want to persist seems like a bad idea.
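For reference, a minimal sketch of that local-only workaround, assuming it amounts to removing the Postgres data volume so the database re-initializes from the .env credentials. The volume name below is a placeholder; check `docker volume ls` for the real one.

```shell
# Local-only workaround sketch: wipe the Postgres data volume so the database
# is re-created with the credentials currently in .env. DESTRUCTIVE: all data
# in the volume is lost, which is exactly why it is a bad idea in production.
# Wrapped in a function so nothing runs when this file is sourced.
reset_local_postgres() {
    docker-compose -f local.yml down
    # "<project>_postgres_data" is a placeholder; find the real name with:
    #   docker volume ls
    docker volume rm "<project>_postgres_data"
    docker-compose -f local.yml build
    docker-compose -f local.yml up
}
```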

What should’ve happened instead?

There shouldn’t be database authentication errors when the credentials are stored in the .env files for either the local or the production setup.
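For reference, the variables in question look something like this. POSTGRES_USER and POSTGRES_PASSWORD are the names cookiecutter-django uses; the debug:debug values below are placeholders that only apply when the project was generated with the debug option.

```shell
# Placeholder .env fragment -- real projects get randomly generated values,
# or debug:debug when the project was generated with the debug option.
POSTGRES_USER=debug
POSTGRES_PASSWORD=debug
```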

Steps to reproduce

  • Set up a project with Docker
  • Build and up
  • Make a change, then build and up again (make sure you are not in detached -d mode, or the error will not appear)
  • The result should be hundreds of lines of:
django_1        | Waiting for PostgreSQL to become available...
django_1        | Waiting for PostgreSQL to become available...
django_1        | Waiting for PostgreSQL to become available...
django_1        | Waiting for PostgreSQL to become available...
celeryworker_1  | Waiting for PostgreSQL to become available...
postgres_1      | 2018-06-03 16:19:02.895 UTC [28] FATAL:  password authentication failed for user "xxxxx"
postgres_1      | 2018-06-03 16:19:02.895 UTC [28] DETAIL:  Password does not match for user "xxxxx".
postgres_1      | 	Connection matched pg_hba.conf line 95: "host all all all md5"
celerybeat_1    | Waiting for PostgreSQL to become available...
postgres_1      | 2018-06-03 16:19:04.200 UTC [29] FATAL:  password authentication failed for user "xxxxx"
postgres_1      | 2018-06-03 16:19:04.200 UTC [29] DETAIL:  Password does not match for user "xxxxx".
postgres_1      | 	Connection matched pg_hba.conf line 95: "host all all all md5"
django_1        | Waiting for PostgreSQL to become available...
postgres_1      | 2018-06-03 16:19:04.207 UTC [30] FATAL:  password authentication failed for user "xxxxx"
postgres_1      | 2018-06-03 16:19:04.207 UTC [30] DETAIL:  Password does not match for user "xxxxx".
postgres_1      | 	Connection matched pg_hba.conf line 95: "host all all all md5"
postgres_1      | 2018-06-03 16:19:04.217 UTC [31] FATAL:  password authentication failed for user "xxxxx"
postgres_1      | 2018-06-03 16:19:04.217 UTC [31] DETAIL:  Password does not match for user "xxxxx".
postgres_1      | 	Connection matched pg_hba.conf line 95: "host all all all md5"
celeryworker_1  | Waiting for PostgreSQL to become available...
django_1        | Waiting for PostgreSQL to become available...
celerybeat_1    | Waiting for PostgreSQL to become available...
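Spelled out, “build and up” in the steps above corresponds to the usual Compose invocations. The file name local.yml is an assumption from a stock cookiecutter-django project; the commands are wrapped in a function so nothing runs when this snippet is sourced.

```shell
# Reproduction sketch for a generated project directory.
reproduce() {
    docker-compose -f local.yml build
    docker-compose -f local.yml up    # stay attached (no -d) to see the errors
    # Ctrl-C, make any change to the project, then rebuild and bring it up again:
    docker-compose -f local.yml build
    docker-compose -f local.yml up
}
```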

I’ve now encountered this on older cookiecutter versions and the current version as well.

Docker version: Version 18.03.1-ce-mac65 (24312)
FROM python:3.6-alpine

docker: Y celery: Y sentry: Y

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 20 (9 by maintainers)

Most upvoted comments

Ok I’ve nailed it down.

When I first create a droplet, I run eval $(docker-machine env <name>) to point Docker at it. On later deploys I would open new terminal windows (sometimes a day or more later). I assumed the eval carried over across terminal sessions, but I was wrong.

  • If I open a new terminal and run up without first running eval $(docker-machine env <name>), then up deploys to somewhere unknown. Does anyone understand this?
  • If I run the eval and then up, it deploys properly to the production machine / Digital Ocean droplet

How I can tell:

  • If I do not run the eval and enter the Postgres CLI, I can see objects that are not accessible via my web admin.
  • The command: docker-compose -f production.yml exec postgres psql -d <db_name> -U <db_user>
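Regarding the “somewhere unknown”: when DOCKER_HOST is unset, the Docker CLI talks to the local daemon socket, so the stack comes up on the local machine rather than the droplet. A small sketch to check which daemon the current shell is pointed at (the tcp URL in the comment is a made-up placeholder):

```shell
# Report which Docker daemon this shell will use. `eval $(docker-machine env <name>)`
# exports DOCKER_HOST (e.g. tcp://203.0.113.5:2376); a fresh terminal starts
# without it, so docker-compose silently falls back to the local daemon.
docker_target() {
    if [ -n "${DOCKER_HOST:-}" ]; then
        echo "remote daemon: ${DOCKER_HOST}"
    else
        echo "local daemon (DOCKER_HOST unset)"
    fi
}
docker_target
```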

Questions:

  • If you run up without setting the docker-machine environment, where does it deploy to?
  • Does this make sense?

Thank you again to everyone for your help. This took much longer than expected, but at least I’ve nailed it down.

@emilepetrone thank you for reporting. To clarify: every time you generate a project with Cookiecutter Django, POSTGRES_USER and POSTGRES_PASSWORD are set to randomly generated values (see our hooks/post_gen_project.py for implementation details), unless the debug option is y, in which case those envs are always debug:debug across setups. With that in mind, my hypothesis is that your production setup’s Postgres user:password are still the ones from first-time project creation, effectively persisted within the database’s volumes.
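As an illustration of that behavior, a sketch only: the real hooks/post_gen_project.py is Python, and the lengths used here are assumptions.

```shell
# Conceptual sketch of what the post-generation hook does: derive random
# alphanumeric credentials once, at project creation, and write them to .env.
random_secret() {
    LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c "${1:-64}"
}
POSTGRES_USER="$(random_secret 32)"
POSTGRES_PASSWORD="$(random_secret 64)"
```

Because the values are generated at creation time and then persisted inside the Postgres volume on first startup, later edits to .env do not change what the database itself expects.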