rqlite: With the -bootstrap-expect option, if all nodes restart with new addresses, one node becomes unreachable.
What version are you running? rqlite-v7.20.1-darwin-amd64
Are you using Docker or Kubernetes to run your system? Docker, but the examples below are from running rqlited in a terminal on my Mac.
Are you running a single node or a cluster? Cluster
What did you do? With a clean disk (first run), the following commands are used to start a 3-node cluster.
Node_1
./rqlited -node-id node_1 -http-addr 0.0.0.0:4001 -raft-addr 0.0.0.0:4002 -http-adv-addr 192.168.0.182:4001 -raft-adv-addr 192.168.0.182:4002 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_1/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4001,http://192.168.0.182:5001,http://192.168.0.182:6001 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_1/data
Node_2
./rqlited -node-id node_2 -http-addr 0.0.0.0:5001 -raft-addr 0.0.0.0:5002 -http-adv-addr 192.168.0.182:5001 -raft-adv-addr 192.168.0.182:5002 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_2/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4001,http://192.168.0.182:5001,http://192.168.0.182:6001 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_2/data
Node_3
./rqlited -node-id node_3 -http-addr 0.0.0.0:6001 -raft-addr 0.0.0.0:6002 -http-adv-addr 192.168.0.182:6001 -raft-adv-addr 192.168.0.182:6002 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_3/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4001,http://192.168.0.182:5001,http://192.168.0.182:6001 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_3/data
Now, all nodes of the cluster are brought down (within 2-4 seconds) and restarted with new addresses but the same node IDs.
Node_1
./rqlited -node-id node_1 -http-addr 0.0.0.0:4101 -raft-addr 0.0.0.0:4102 -http-adv-addr 192.168.0.182:4101 -raft-adv-addr 192.168.0.182:4102 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_1/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4101,http://192.168.0.182:5101,http://192.168.0.182:6101 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_1/data
Node_2
./rqlited -node-id node_2 -http-addr 0.0.0.0:5101 -raft-addr 0.0.0.0:5102 -http-adv-addr 192.168.0.182:5101 -raft-adv-addr 192.168.0.182:5102 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_2/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4101,http://192.168.0.182:5101,http://192.168.0.182:6101 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_2/data
Node_3
./rqlited -node-id node_3 -http-addr 0.0.0.0:6101 -raft-addr 0.0.0.0:6102 -http-adv-addr 192.168.0.182:6101 -raft-adv-addr 192.168.0.182:6102 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_3/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4101,http://192.168.0.182:5101,http://192.168.0.182:6101 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_3/data
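The only differences between the two launches are the port numbers; the node IDs, data directories, and remaining flags are identical. A small Python sketch makes the pattern explicit (the `rqlited_cmd` helper is hypothetical, written for this report, and not part of rqlite):

```python
# Hypothetical helper: builds the rqlited command line for one node.
# Only the port base changes between the first launch (4000/5000/6000)
# and the restart (4100/5100/6100); node IDs and data paths stay the same.
HOST = "192.168.0.182"

def rqlited_cmd(node_num: int, port_base: int, all_bases: list) -> str:
    http_port = port_base + 1   # e.g. 4001 before, 4101 after
    raft_port = port_base + 2   # e.g. 4002 before, 4102 after
    join = ",".join(f"http://{HOST}:{b + 1}" for b in all_bases)
    root = f"/Users/testuser/work/rqlite_data/nodes/node_{node_num}"
    return (
        f"./rqlited -node-id node_{node_num} "
        f"-http-addr 0.0.0.0:{http_port} -raft-addr 0.0.0.0:{raft_port} "
        f"-http-adv-addr {HOST}:{http_port} -raft-adv-addr {HOST}:{raft_port} "
        f"-on-disk -on-disk-startup -on-disk-path {root}/db/bot.sqlite "
        f"-bootstrap-expect 3 -join {join} "
        f"-raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s "
        f"{root}/data"
    )

before = rqlited_cmd(1, 4000, [4000, 5000, 6000])
after = rqlited_cmd(1, 4100, [4100, 5100, 6100])
```

Diffing `before` against `after` shows that only the advertised addresses and join targets change across the restart, while `-node-id` and the on-disk state paths are unchanged.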
What did you expect to happen? After the restarts, I expect all nodes to be part of the cluster, healthy and reachable.
What happened instead?
One node, node_3, becomes unreachable. The /status and /nodes API responses show the old Raft address of that (unreachable) node still being used.
Please include the Status, Nodes, and Expvar output from each node (or at least the Leader!)
/nodes API response before restarts
{
"node_1": {
"api_addr": "http://192.168.0.182:4001/",
"addr": "192.168.0.182:4002",
"reachable": true,
"leader": false,
"time": 0.000917959
},
"node_2": {
"api_addr": "http://192.168.0.182:5001/",
"addr": "192.168.0.182:5002",
"reachable": true,
"leader": true,
"time": 0.000671792
},
"node_3": {
"api_addr": "http://192.168.0.182:6001/",
"addr": "192.168.0.182:6002",
"reachable": true,
"leader": false,
"time": 0.000809125
}
}
/nodes API response after restarts
{
"node_1": {
"api_addr": "http://192.168.0.182:4101/",
"addr": "192.168.0.182:4102",
"reachable": true,
"leader": false,
"time": 0.004925208
},
"node_2": {
"api_addr": "http://192.168.0.182:5101/",
"addr": "192.168.0.182:5102",
"reachable": true,
"leader": false,
"time": 0.000864792
},
"node_3": {
"addr": "192.168.0.182:6002",
"reachable": false,
"leader": false,
"error": "factory is not able to fill the pool: dial tcp 192.168.0.182:6002: connect: connection refused"
}
}
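The stale entry is easy to spot programmatically by filtering the /nodes response on the `reachable` flag. A minimal Python sketch (the JSON below is abridged from the post-restart response above):

```python
import json

# Abridged /nodes response after the restart (taken from the output above).
after_restart = json.loads("""
{
  "node_1": {"addr": "192.168.0.182:4102", "reachable": true},
  "node_2": {"addr": "192.168.0.182:5102", "reachable": true},
  "node_3": {"addr": "192.168.0.182:6002", "reachable": false}
}
""")

def unreachable_nodes(nodes: dict) -> dict:
    """Map node ID -> advertised Raft address for every unreachable node."""
    return {nid: n["addr"] for nid, n in nodes.items() if not n["reachable"]}

stale = unreachable_nodes(after_restart)
# node_3 is reported at its pre-restart Raft address (6002, not 6102).
```

The result pinpoints the bug: node_3 is the only unreachable node, and the address the cluster holds for it is the old 6002 Raft port, not the new 6102 one it restarted with.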
Logs of node_3 after the restart
[rqlited] 2023/06/02 15:03:21 rqlited starting, version v7.20.1, SQLite 3.42.0, commit bec8c93c5ab703548caa6688970a69b2a339e763, branch master, compiler gc
[rqlited] 2023/06/02 15:03:21 go1.19.1, target architecture is amd64, operating system target is darwin
[rqlited] 2023/06/02 15:03:21 launch command: ./rqlited -node-id node_3 -http-addr 0.0.0.0:6101 -raft-addr 0.0.0.0:6102 -http-adv-addr 192.168.0.182:6101 -raft-adv-addr 192.168.0.182:6102 -on-disk -on-disk-startup -on-disk-path /Users/testuser/work/rqlite_data/nodes/node_3/db/bot.sqlite -bootstrap-expect 3 -join http://192.168.0.182:4101,http://192.168.0.182:5101,http://192.168.0.182:6101 -raft-cluster-remove-shutdown=true -raft-reap-node-timeout 5s /Users/testuser/work/rqlite_data/nodes/node_3/data
[rqlited] 2023/06/02 15:03:21 Raft TCP mux Listener registered with byte header 1
[rqlited] 2023/06/02 15:03:21 preexisting node state detected in /Users/testuser/work/rqlite_data/nodes/node_3/data
[mux] 2023/06/02 15:03:21 mux serving on [::]:6102, advertising 192.168.0.182:6102
[cluster] 2023/06/02 15:03:21 service listening on 192.168.0.182:6102
[rqlited] 2023/06/02 15:03:21 cluster TCP mux Listener registered with byte header 2
[http] 2023/06/02 15:03:21 execute queue processing started with capacity 1024, batch size 128, timeout 50ms
[http] 2023/06/02 15:03:21 service listening on [::]:6101
[rqlited] 2023/06/02 15:03:21 HTTP server started
[store] 2023/06/02 15:03:21 opening store with node ID node_3
[store] 2023/06/02 15:03:21 configured for an on-disk database at /Users/testuser/work/rqlite_data/nodes/node_3/db/bot.sqlite
[store] 2023/06/02 15:03:21 on-disk database in-memory creation disabled
[store] 2023/06/02 15:03:21 ensuring directory for on-disk database exists at /Users/testuser/work/rqlite_data/nodes/node_3/db
[store] 2023/06/02 15:03:21 ensuring directory for Raft exists at /Users/testuser/work/rqlite_data/nodes/node_3/data
[store] 2023/06/02 15:03:21 0 preexisting snapshots present
[store] 2023/06/02 15:03:21 first log index: 1, last log index: 5, last command log index: 0:
[store] 2023/06/02 15:03:21 created on-disk database at open
2023-06-02T15:03:21.844+0530 [INFO] raft: initial configuration: index=4 servers="[{Suffrage:Voter ID:node_3 Address:192.168.0.182:6002}]"
2023-06-02T15:03:21.844+0530 [INFO] raft: entering follower state: follower="Node at 192.168.0.182:6102 [Follower]" leader-address= leader-id=
[cluster-join] 2023/06/02 15:03:21 failed to join via node at http://192.168.0.182:4101: 503 Service Unavailable: (leader not found)
[cluster-join] 2023/06/02 15:03:21 failed to join via node at http://192.168.0.182:5101: 503 Service Unavailable: (leader not found)
[http] 2023/06/02 15:03:21 received join request from node with ID node_3 at 192.168.0.182:6102
[cluster-join] 2023/06/02 15:03:21 failed to join via node at http://192.168.0.182:6101: 503 Service Unavailable: (leader not found)
[cluster-join] 2023/06/02 15:03:21 failed to join cluster at [http://192.168.0.182:4101 http://192.168.0.182:5101 http://192.168.0.182:6101], after 1 attempt(s)
[cluster-bootstrap] 2023/06/02 15:03:21 succeeded notifying target: http://192.168.0.182:4101/notify
[cluster-bootstrap] 2023/06/02 15:03:21 succeeded notifying target: http://192.168.0.182:5101/notify
[http] 2023/06/02 15:03:21 received notify request from node with ID node_3 at 192.168.0.182:6102
[cluster-bootstrap] 2023/06/02 15:03:21 succeeded notifying target: http://192.168.0.182:6101/notify
[cluster-bootstrap] 2023/06/02 15:03:21 succeeded notifying all targets: [http://192.168.0.182:4101 http://192.168.0.182:5101 http://192.168.0.182:6101]
[http] 2023/06/02 15:03:22 received join request from node with ID node_2 at 192.168.0.182:5102
[http] 2023/06/02 15:03:22 received notify request from node with ID node_2 at 192.168.0.182:5102
[http] 2023/06/02 15:03:22 received join request from node with ID node_1 at 192.168.0.182:4102
[http] 2023/06/02 15:03:22 received notify request from node with ID node_1 at 192.168.0.182:4102
[store] 2023/06/02 15:03:22 reached expected bootstrap count of 3, starting cluster bootstrap
[store] 2023/06/02 15:03:22 cluster bootstrap failed: bootstrap only works on new clusters
2023-06-02T15:03:23.416+0530 [WARN] raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2023-06-02T15:03:23.417+0530 [INFO] raft: entering candidate state: node="Node at 192.168.0.182:6102 [Candidate]" term=4
2023-06-02T15:03:23.463+0530 [INFO] raft: election won: term=4 tally=1
2023-06-02T15:03:23.464+0530 [INFO] raft: entering leader state: leader="Node at 192.168.0.182:6102 [Leader]"
[http] 2023/06/02 15:03:24 received join request from node with ID node_2 at 192.168.0.182:5102
2023-06-02T15:03:24.792+0530 [INFO] raft: updating configuration: command=AddVoter server-id=node_2 server-addr=192.168.0.182:5102 servers="[{Suffrage:Voter ID:node_3 Address:192.168.0.182:6002} {Suffrage:Voter ID:node_2 Address:192.168.0.182:5102}]"
[http] 2023/06/02 15:03:24 received join request from node with ID node_1 at 192.168.0.182:4102
2023-06-02T15:03:24.831+0530 [INFO] raft: added peer, starting replication: peer=node_2
2023-06-02T15:03:24.831+0530 [INFO] raft: updating configuration: command=AddVoter server-id=node_1 server-addr=192.168.0.182:4102 servers="[{Suffrage:Voter ID:node_3 Address:192.168.0.182:6002} {Suffrage:Voter ID:node_2 Address:192.168.0.182:5102} {Suffrage:Voter ID:node_1 Address:192.168.0.182:4102}]"
[store] 2023/06/02 15:03:24 node with ID node_2, at 192.168.0.182:5102, joined successfully as voter
2023-06-02T15:03:24.961+0530 [WARN] raft: appendEntries rejected, sending older logs: peer="{Voter node_2 192.168.0.182:5102}" next=5
2023-06-02T15:03:24.990+0530 [INFO] raft: added peer, starting replication: peer=node_1
2023-06-02T15:03:25.063+0530 [INFO] raft: pipelining replication: peer="{Voter node_2 192.168.0.182:5102}"
2023-06-02T15:03:25.088+0530 [WARN] raft: appendEntries rejected, sending older logs: peer="{Voter node_1 192.168.0.182:4102}" next=4
[store] 2023/06/02 15:03:25 node with ID node_1, at 192.168.0.182:4102, joined successfully as voter
2023-06-02T15:03:25.209+0530 [INFO] raft: pipelining replication: peer="{Voter node_1 192.168.0.182:4102}"
[cluster-bootstrap] 2023/06/02 15:03:25 boot operation marked done
[rqlited] 2023/06/02 15:03:25 node HTTP API available at http://192.168.0.182:6101
[rqlited] 2023/06/02 15:03:25 connect using the command-line tool via 'rqlite -H 192.168.0.182 -p 6101'
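The key line in the log is the initial configuration at index=4, which lists only node_3 at its old Raft address (192.168.0.182:6002). Node_3 then wins an election against that stale single-member configuration, and as node_2 and node_1 re-join, each AddVoter writes their new addresses into the configuration, but nothing ever rewrites node_3's own entry. A minimal Python model of that configuration history (an illustration of the observed behaviour, not rqlite or hashicorp/raft code):

```python
# Model of the Raft configuration changes seen in the node_3 log above.
# Illustration only: this mirrors the log lines, it is not rqlite code.

# Persisted configuration at startup (index=4): only node_3, old address.
config = {"node_3": "192.168.0.182:6002"}

def add_voter(config: dict, server_id: str, server_addr: str) -> None:
    """AddVoter sets the address of the joining server only; existing
    entries, including the leader's own stale one, are left untouched."""
    config[server_id] = server_addr

# node_2 and node_1 re-join at their new addresses (log lines at 15:03:24).
add_voter(config, "node_2", "192.168.0.182:5102")
add_voter(config, "node_1", "192.168.0.182:4102")

# Final configuration matches the last AddVoter log line: node_2 and
# node_1 carry the new 51xx/41xx addresses, node_3 keeps the old 6002.
```

This matches the final `servers=` list in the log, and explains why the /nodes response keeps dialing 192.168.0.182:6002 and getting connection refused.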
About this issue
- State: closed
- Created a year ago
- Comments: 44 (28 by maintainers)
This is a related discussion (and nice explanation) on the Raft code repo itself: https://github.com/hashicorp/raft/issues/532
@bhushanvadgave – perhaps I just need to make all this clearer in the rqlite.io docs, and then close this bug?