etcd: etcd fails to start after power off and restart
I have an etcd cluster for Kubernetes with 3 members. All machines were powered off yesterday, and all of the etcd members failed to start after power was restored.
My etcd version is 3.4.2.
member01's error is "failed to find database snapshot file (snap: snapshot file doesn't exist)", while member02 and member03 fail with the same error: "freepages: failed to get all reachable pages".
The full logs are as follows:
member01:
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680835 I | etcdmain: etcd Version: 3.4.2
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680840 I | etcdmain: Git SHA: a7cf1ca
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680845 I | etcdmain: Go Version: go1.12.5
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680849 I | etcdmain: Go OS/Arch: linux/amd64
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680855 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680926 W | etcdmain: found invalid file/dir .bash_logout under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680937 W | etcdmain: found invalid file/dir .bashrc under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680942 W | etcdmain: found invalid file/dir .profile under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680952 N | etcdmain: the server is already initialized as member before, starting as etcd member...
May 26 22:00:14 mgt01 etcd[6323]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.680997 I | embed: peerTLS: cert = /etc/ssl/etcd/ssl/member-mgt01.pem, key = /etc/ssl/etcd/ssl/member-mgt01-key.pem, trusted-ca = /etc/ssl/etcd/ssl/ca.pem, client-cert-auth = true, crl-file =
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681694 I | embed: name = etcd-mgt01
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681708 I | embed: data dir = /var/lib/etcd
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681713 I | embed: member dir = /var/lib/etcd/member
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681719 I | embed: heartbeat = 100ms
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681724 I | embed: election = 1000ms
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681728 I | embed: snapshot count = 10000
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681737 I | embed: advertise client URLs = https://10.61.109.41:2379
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681746 I | embed: initial advertise peer URLs = https://10.61.109.41:2380
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.681753 I | embed: initial cluster =
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.689611 I | etcdserver: recovered store from snapshot at index 21183302
May 26 22:00:14 mgt01 etcd[6323]: 2020-05-26 14:00:14.693186 C | etcdserver: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
May 26 22:00:14 mgt01 etcd[6323]: panic: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
May 26 22:00:14 mgt01 etcd[6323]: panic: runtime error: invalid memory address or nil pointer dereference
May 26 22:00:14 mgt01 etcd[6323]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xc2c6be]
May 26 22:00:14 mgt01 etcd[6323]: goroutine 1 [running]:
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/etcdserver.NewServer.func1(0xc0002eef50, 0xc0002ecf48)
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/etcdserver/server.go:335 +0x3e
May 26 22:00:14 mgt01 etcd[6323]: panic(0xed6540, 0xc00062a080)
May 26 22:00:14 mgt01 etcd[6323]: /usr/local/go/src/runtime/panic.go:522 +0x1b5
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/vendor/github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc0001fe7c0, 0x10af294, 0x2a, 0xc0002ed018, 0x1, 0x1)
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/vendor/github.com/coreos/pkg/capnslog/pkg_logger.go:75 +0x135
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/etcdserver.NewServer(0xc00004208a, 0xa, 0x0, 0x0, 0x0, 0x0, 0xc000198d00, 0x1, 0x1, 0xc000198e80, ...)
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/etcdserver/server.go:456 +0x42f7
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/embed.StartEtcd(0xc0001bc580, 0xc0001bcb00, 0x0, 0x0)
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/embed/etcd.go:211 +0x9d0
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/etcdmain.startEtcd(0xc0001bc580, 0x10849de, 0x6, 0x1, 0xc00020f1d0)
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/etcdmain/etcd.go:302 +0x40
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2()
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/etcdmain/etcd.go:144 +0x2f71
May 26 22:00:14 mgt01 etcd[6323]: go.etcd.io/etcd/etcdmain.Main()
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/etcdmain/main.go:46 +0x38
May 26 22:00:14 mgt01 etcd[6323]: main.main()
May 26 22:00:14 mgt01 etcd[6323]: /opt/go/src/go.etcd.io/etcd/main.go:28 +0x20
member02:
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749838 I | etcdmain: etcd Version: 3.4.2
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749845 I | etcdmain: Git SHA: a7cf1ca
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749850 I | etcdmain: Go Version: go1.12.5
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749855 I | etcdmain: Go OS/Arch: linux/amd64
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749861 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749973 W | etcdmain: found invalid file/dir .bash_logout under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749984 W | etcdmain: found invalid file/dir .bashrc under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.749992 W | etcdmain: found invalid file/dir .profile under data dir /var/lib/etcd (Ignore this if you are upgrading etcd)
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.750001 N | etcdmain: the server is already initialized as member before, starting as etcd member...
May 26 22:00:13 mgt02 etcd[3482]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.750037 I | embed: peerTLS: cert = /etc/ssl/etcd/ssl/member-mgt02.pem, key = /etc/ssl/etcd/ssl/member-mgt02-key.pem, trusted-ca = /etc/ssl/etcd/ssl/ca.pem, client-cert-auth = true, crl-file =
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751369 I | embed: name = etcd-mgt02
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751385 I | embed: data dir = /var/lib/etcd
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751391 I | embed: member dir = /var/lib/etcd/member
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751396 I | embed: heartbeat = 100ms
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751401 I | embed: election = 1000ms
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751406 I | embed: snapshot count = 10000
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751415 I | embed: advertise client URLs = https://10.61.109.42:2379
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751421 I | embed: initial advertise peer URLs = https://10.61.109.42:2380
May 26 22:00:13 mgt02 etcd[3482]: 2020-05-26 14:00:13.751428 I | embed: initial cluster =
May 26 22:00:13 mgt02 etcd[3482]: panic: freepages: failed to get all reachable pages (page 6148948888203174444: out of bounds: 14622)
May 26 22:00:13 mgt02 etcd[3482]: goroutine 145 [running]:
May 26 22:00:13 mgt02 etcd[3482]: go.etcd.io/etcd/vendor/go.etcd.io/bbolt.(*DB).freepages.func2(0xc00023c120)
May 26 22:00:13 mgt02 etcd[3482]: /opt/go/src/go.etcd.io/etcd/vendor/go.etcd.io/bbolt/db.go:1003 +0xe5
May 26 22:00:13 mgt02 etcd[3482]: created by go.etcd.io/etcd/vendor/go.etcd.io/bbolt.(*DB).freepages
May 26 22:00:13 mgt02 etcd[3482]: /opt/go/src/go.etcd.io/etcd/vendor/go.etcd.io/bbolt/db.go:1001 +0x1b5
Finally, we recovered most of the data from our backup file. However, we would like to know why this happened. Is there any way to avoid this issue? Thanks!
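Regarding how to avoid this: the usual advice is to take regular etcdctl snapshots so there is always something recent to restore from. A minimal sketch, assuming the endpoint, the certificate paths from the logs above, and a retention count of 7 (none of which are prescribed by this issue):

```shell
#!/bin/sh
# backup_etcd DIR: save a timestamped point-in-time snapshot of the etcd db.
# Endpoint and certificate paths are illustrative values.
backup_etcd() {
  dir=$1
  mkdir -p "$dir"
  ETCDCTL_API=3 etcdctl snapshot save \
    "$dir/etcd-$(date +%Y%m%d-%H%M%S).db" \
    --endpoints=https://10.61.109.41:2379 \
    --cacert=/etc/ssl/etcd/ssl/ca.pem \
    --cert=/etc/ssl/etcd/ssl/member-mgt01.pem \
    --key=/etc/ssl/etcd/ssl/member-mgt01-key.pem
}

# prune_snapshots DIR KEEP: delete all but the newest KEEP snapshot files.
prune_snapshots() {
  dir=$1
  keep=$2
  ls -1t "$dir"/etcd-*.db 2>/dev/null | tail -n +$((keep + 1)) |
  while IFS= read -r old; do
    rm -f -- "$old"
  done
}

# Typical cron usage (commented out; requires a reachable cluster):
# backup_etcd /var/backups/etcd && prune_snapshots /var/backups/etcd 7
```

A snapshot taken this way can later be restored into a fresh data dir with `etcdctl snapshot restore`, which is presumably how the OP's backup-based recovery worked.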
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 36 (10 by maintainers)
I built a k8s cluster with kubeadm containing one control-plane node and one worker node. The cluster runs a single etcd 3.5.1 server on the control-plane node. Unfortunately, I have also hit the etcd error "failed to recover v3 backend from snapshot" several times.
I studied the etcd source files and found the condition that triggers this kind of error: https://github.com/etcd-io/etcd/blob/986a2b51f4e87fe72e3fa2e85394dd659268dfcb/server/storage/backend.go#L99-L109 I.e. the condition
snapshot.Metadata.Index <= db.consistentIndex
is violated. It means the consistent index in the snap/db file is older than the one in one of the snap/*.snap files. But in general this should be impossible(?), because the following code ensures(?) that the consistent index is written to the snap/db file before a .snap file is created: https://github.com/etcd-io/etcd/blob/986a2b51f4e87fe72e3fa2e85394dd659268dfcb/server/etcdserver/server.go#L2019 Thus, the steps to reproduce a violation of the condition snapshot.Metadata.Index <= db.consistentIndex are unknown and difficult to find.

Nevertheless, I propose a workaround to get etcd back to work. (Note: this workaround does not guarantee zero data loss. Please make a backup and try at your own risk!)
or
Then, your etcd might start without crashing.
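The triggering condition described in the comment above can be paraphrased numerically. This sketch just mirrors the comparison from backend.go; the index values are made up for illustration (the real ones live in the .snap metadata and the meta bucket of snap/db):

```shell
#!/bin/sh
# Mirror of the recovery check: startup succeeds from snap/db only if the
# consistent index stored there is at least the .snap file's index.
snapshot_index=21183302       # illustrative: from a snap/*.snap file's metadata
db_consistent_index=21180000  # illustrative: from the meta bucket in snap/db

if [ "$db_consistent_index" -ge "$snapshot_index" ]; then
  echo "ok: snap/db already contains the snapshot's state"
else
  echo "panic: failed to recover v3 backend from snapshot"
fi
```

With these made-up values the db lags behind the snapshot, so the check takes the failing branch, which is the situation the comment describes.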
It’s working, and my location is
/var/lib/rancher/k3s/server/db/etcd/member/
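For reference, the recovery path reported in this thread (the exact steps of the workaround above are not preserved here) amounts to stopping etcd, backing up the whole member directory, and moving the v2 snapshot files aside so the server falls back to the state in snap/db. A sketch only, with a placeholder data dir; like the workaround above, this can lose data:

```shell
#!/bin/sh
# move_snap_files DATA_DIR: move member/snap/*.snap out of the way so etcd
# recovers from snap/db alone on the next start. DATA_DIR is a placeholder;
# run only against a stopped etcd, and only after copying the member dir.
move_snap_files() {
  data_dir=$1
  aside="$data_dir/snap-moved-$(date +%s)"
  mkdir -p "$aside"
  for f in "$data_dir"/member/snap/*.snap; do
    [ -e "$f" ] || continue   # glob matched nothing
    mv -- "$f" "$aside/"
  done
}

# Example (commented out): move_snap_files /var/lib/etcd
```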
No real solution yet! I also encountered the same problem. My etcd version is 3.4.13-0.
Somehow I was able to recover. I am commenting here in case someone is interested.