mariadb-operator: [Bug] Cannot deploy Galera cluster

Describe the bug

failed to create mariadb/mariadb-galera mariadb.mmontes.io/v1alpha1, Kind=MariaDB for df1dc4c9-50f9-469e-8e18-3f4c82d4a521: admission webhook "vmariadb.kb.io" denied the request: spec.replicas: Invalid value: 3: Multiple replicas can only be specified when 'spec.replication' or 'spec.galera' are configured

Steps to reproduce the bug

Deploy a Galera cluster with the given example manifests.

Additional context

The example manifests are used, so spec.galera.enabled=true is set, yet the error above still appears.
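For reference, here is a minimal sketch of the rule the webhook message describes; the types and function are illustrative stand-ins, not the operator's actual Go code. With spec.galera configured, three replicas should pass this check, which is why the rejection above looks like a webhook bug rather than a misconfiguration:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the relevant CRD fields; these are not the
// operator's real types.
type Galera struct{ Enabled bool }
type Replication struct{ Enabled bool }

type MariaDBSpec struct {
	Replicas    int32
	Galera      *Galera
	Replication *Replication
}

// validateReplicas expresses the rule quoted in the error: more than one
// replica is only valid when 'spec.replication' or 'spec.galera' is set.
func validateReplicas(spec MariaDBSpec) error {
	if spec.Replicas > 1 && spec.Galera == nil && spec.Replication == nil {
		return errors.New("spec.replicas: multiple replicas can only be specified when 'spec.replication' or 'spec.galera' are configured")
	}
	return nil
}

func main() {
	// With Galera configured, 3 replicas should be accepted.
	spec := MariaDBSpec{Replicas: 3, Galera: &Galera{Enabled: true}}
	fmt.Println(validateReplicas(spec)) // prints <nil>
}
```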

Environment details:

  • Kubernetes version: 1.26
  • mariadb-operator version: 0.16
  • Install method: helm
  • Install flavour: minimal, recommended or custom (all of them)

About this issue

  • State: closed
  • Created a year ago
  • Comments: 38 (21 by maintainers)

Most upvoted comments

This has been addressed as part of the v0.0.19 release:

https://github.com/mariadb-operator/mariadb-operator/releases/tag/v0.0.19

Kudos to @lgohyipex for spotting and fixing the issue 🙇

Closing!

Hello, I was facing the same issue and finally found a clue about what’s going on.

The init container never bootstraps the cluster because the folder /var/lib/mysql is not empty, as it expects it to be: https://github.com/mariadb-operator/init/blob/main/main.go#L106

The folder /var/lib/mysql contains a lost+found sub-folder.

Edit: The environment is EKS (Kubernetes 1.26) and the volume is provisioned by the Amazon EBS CSI Driver (gp3, ext4).
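A minimal sketch of an emptiness check that tolerates the ext4 lost+found directory, assuming the fix is to treat a data directory containing only lost+found as empty (the function name and logic here are hypothetical, not the init container's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// isEffectivelyEmpty reports whether dir contains no entries other than
// lost+found, which ext4 creates on freshly formatted volumes such as
// EBS gp3. This is a sketch of the suggested behavior, not the real check.
func isEffectivelyEmpty(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.Name() != "lost+found" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	empty, err := isEffectivelyEmpty("/var/lib/mysql")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("should bootstrap:", empty)
}
```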

I have tried it with myCnf, but no luck.

This is the event log:

LAST SEEN   TYPE     REASON             OBJECT                               MESSAGE
2m11s       Normal   NoPods             poddisruptionbudget/mariadb-galera   No matching pods found
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim storage-mariadb-galera-0 Pod mariadb-galera-0 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim galera-mariadb-galera-0 Pod mariadb-galera-0 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Pod mariadb-galera-0 in StatefulSet mariadb-galera successful
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim storage-mariadb-galera-1 Pod mariadb-galera-1 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim galera-mariadb-galera-1 Pod mariadb-galera-1 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Pod mariadb-galera-1 in StatefulSet mariadb-galera successful
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim storage-mariadb-galera-2 Pod mariadb-galera-2 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Claim galera-mariadb-galera-2 Pod mariadb-galera-2 in StatefulSet mariadb-galera success
2m11s       Normal   SuccessfulCreate   statefulset/mariadb-galera           create Pod mariadb-galera-2 in StatefulSet mariadb-galera successful

Yes. This one works flawlessly.

I was leaning towards network issues too, based on the 2023-07-16T17:52:06 -> 2023-07-16 17:52:18 gap. Do you have the logs of the other node, 10.42.3.228, at the same time?