mongo: Cannot configure replica sets with entrypoint-initdb

I’m trying to create simple replica-set-enabled images using these Docker images. @yosifkit suggested in another thread that this can be done by calling rs.initiate() in a /docker-entrypoint-initdb.d/ script.

However if I do this I get the following:

2019-03-26T12:30:25.889+0000 I COMMAND  [conn2] initiate : no configuration specified. Using a default configuration for the set
2019-03-26T12:30:25.889+0000 I COMMAND  [conn2] created this configuration for initiation : { _id: "rs0", version: 1, members: [ { _id: 0, host: "127.0.0.1:27017" } ] }

The problem here is that, since the server binds only to localhost during the init phase, the replica set is initiated with the wrong hostname.

If I call rs.initiate() after the initialization phase, I get the following:

2019-03-26T12:32:16.792+0000 I COMMAND  [conn1] initiate : no configuration specified. Using a default configuration for the set
2019-03-26T12:32:16.793+0000 I COMMAND  [conn1] created this configuration for initiation : { _id: "rs0", version: 1, members: [ { _id: 0, host: "mongo:27017" } ] }

This time with the correct hostname.

Is there some way to resolve this paradox? Either by running a script after the real startup, by binding to the proper interfaces during initialisation, or by forcing the mongo server to accept a replica set config even if it cannot resolve itself?

Thanks!

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 7
  • Comments: 19 (2 by maintainers)

Most upvoted comments

The docker-entrypoint-initdb.d scripts run during an initialization period (and only if the database is empty). During that period the container listens only on localhost, so initiating a cluster then isn’t possible: the server won’t resolve its own container hostname.

So you’ll probably need some manual intervention after everything is initialized. Using docker-entrypoint-initdb.d will error with replSet initiate got NodeNotFound: No host described in new configuration 1 for replica set myrepl maps to this node, but running the same rs.initiate() afterwards will work.
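A minimal sketch of that manual intervention, assuming the container is named mongo and using a small hypothetical retry helper (neither the name nor the helper is part of the image):

```shell
#!/bin/sh
# retry <attempts> <delay-seconds> <command...>
# Re-run a command until it succeeds or the attempt budget is spent.
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
}

# Once the "real" mongod is listening on all interfaces, initiate the set:
# retry 30 2 docker exec mongo mongo --quiet --eval 'rs.initiate().ok'
```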

I wrote an ugly hack to initialize the replica set by abusing docker-entrypoint-initdb.d. Hope that helps someone coming to this issue.

I’ve come up with a workaround using a healthcheck instead of docker-entrypoint-initdb.d.

Define a JS script that detects primary status and does the init once if necessary:

// Runs on every healthcheck; exit non-zero until this node is primary.
if (!db.isMaster().ismaster) {
  print("primary not ready, initiating replica set ...");
  rs.initiate();
  quit(1);
}

// A fresh shell starts on every healthcheck, so a script variable cannot
// remember whether init already ran; check the database instead and
// create the user only once.
admin = db.getSiblingDB("admin");
if (admin.system.users.find({ user: "test" }).count() === 0) {
  admin.createUser({
    user: "test",
    pwd: "pass",
    roles: ["readWriteAnyDatabase"]
  });
}

In docker-compose, define a healthcheck using the script:

    mongodb:
      image: mongo:4.0
      environment:
        - AUTH=no # without password
      tmpfs: /data/db
      hostname: mongodb
      volumes:
        - "./volume/mongo-init2.js:/mongo-init.js"
      command: [mongod, --replSet, 'rs2', --noauth, --maxConns, "10000"]
      healthcheck:
        test: mongo /mongo-init.js
        interval: 5s

This does require dependent services to coordinate on the healthcheck, and compose file version 3.9 is needed to support condition: service_healthy:

version: "3.9"
...
services:
...
    depends_on:
      mongodb:
        # requires a compose file version prior to 3.0, or 3.9+, but not in between
        # see https://stackoverflow.com/a/41854997/108112
        condition: service_healthy
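An alternative shape for the same coordination is a one-shot init service: reduce the mongodb healthcheck to a plain ping (e.g. mongo --eval 'db.adminCommand("ping")') and let a sibling container run the initiate. This is a sketch under assumed names (mongo-init, mongodb, mongo:4.0), not a verified configuration:

```yaml
    mongo-init:
      image: mongo:4.0
      depends_on:
        mongodb:
          condition: service_healthy
      # One-shot: connect to the running mongod and initiate the set.
      # On an already-initiated set this just reports "already initialized".
      entrypoint:
        - mongo
        - --host
        - mongodb
        - --eval
        - "printjson(rs.initiate())"
      restart: "no"
```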

What a hack @zhangyoufu! I think there needs to be some support for this in the official Docker image. Several features of MongoDB are only available with a replica set, and this is a common production configuration, so local development is hard if we cannot set up a dev database with the same configuration.

Given that the current life cycle in which the docker-entrypoint-initdb.d scripts execute won’t accommodate configuring a replica set (and possibly other configuration changes that depend on later stages), and that this issue, as well as various other online discussions, circulates such ‘workarounds’, there seems to be demand for a feature providing a docker-entrypoint-post-dbstartup.d folder to house scripts that execute after everything is ready.

In order to properly do that (run processes after starting the “real” long-term mongod), we would need a supervisor-like process and that is not complexity that we want to add or maintain (and is beyond a “simple” bash script).

By supervisor process, I mean basically the following:

  1. stay resident for the life of the container
  2. respond to and forward signals
  3. reap zombies
  4. run current temp server up/down + initdb.d scripts
  5. logs from mongod and scripts both appear in stdout/stderr appropriately
  6. start “real” mongod and do something if it exits unexpectedly
  7. once “real” mongod is “ready”, trigger post-dbstartup scripts
  8. do something (exit container?) if post-dbstartup scripts fail in any way or if they “never” finish
  9. clean-up behavior for failed post-dbstartup scripts?

You don’t have to deal with the first 6 if the container is used as-is since that is “free” because of how the image is designed. And then the correct solution is to use your orchestration platform to run something to coordinate the initialization and joining of multiple mongo containers.

Run rs.initiate() on just one and only one mongod instance for the replica set.

- https://www.mongodb.com/docs/manual/tutorial/deploy-replica-set/#initiate-the-replica-set
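For a multi-member set, the linked docs boil down to calling rs.initiate() once, on one member, with an explicit config so every member is registered under its resolvable container hostname. A sketch in the mongo shell, with assumed service names mongo1/mongo2/mongo3:

```javascript
// Run against exactly one member; hostnames must resolve from all members
// and from clients (here: assumed compose service names).
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});
```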

Hmm - I did miss the “my hurried dockerfile will not actually work” point, sorry!

Still, I think the idea has merit; were it not for the squelching of the --bind-ip arguments this would work just fine, and it would have some value for single-instance deployments, unit tests, and many other basic cases. [I realise the whole point of force-binding localhost is to ensure nobody can connect until the startup scripts are complete; I’m not sure how to reconcile that.]

If there’s a direction you think would be acceptable and could be made to work I’d be willing to have a go at implementing it.