How to Reset a Cluster (from node 0)

If any node other than node 0 has progressed further in its transaction history, resetting from node 0 will result in data loss. Always take a backup of the data directory before taking any further steps.
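For example, assuming the Bitnami Galera image with its data directory at /bitnami/mariadb/data, a pod named mariadb-0, and $INSTANCE_ID set to the cluster’s namespace as in the steps below, a backup could be streamed out of the pod like this:
kubectl -n $INSTANCE_ID exec mariadb-0 -c mariadb-galera -- \
    tar czf - /bitnami/mariadb/data > mariadb-0-data.tar.gz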

If the cluster has lost a majority of its nodes, it won’t be able to start again on its own. Since it can’t know which node has the latest data, you need to force the bootstrap.
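To see which node is the most advanced, compare the seqno value in each node’s grastate.dat (the path below assumes the Bitnami image layout, and the pod must still be running for exec to work):
kubectl -n $INSTANCE_ID exec mariadb-0 -c mariadb-galera -- \
    cat /bitnami/mariadb/data/grastate.dat
The node with the highest seqno holds the latest data; a seqno of -1 means that node shut down uncleanly.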

See Bootstrapping a Node if you have to reset the cluster from a node other than node 0.

Any of the following changes will be reverted whenever a change to the MariaDB service (Crossplane composition) is rolled out.

Reset cluster to 1 node

Save the instance ID of the affected cluster
export INSTANCE_ID=<instance-id>
Scale cluster to 1 replica
kubectl -n $INSTANCE_ID scale statefulset mariadb \
    --replicas 1
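Watch the pods until only mariadb-0 remains
kubectl -n $INSTANCE_ID get pods -l app.kubernetes.io/name=mariadb-galera -w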
Enable force bootstrap
kubectl -n $INSTANCE_ID set env statefulset/mariadb \
    -c mariadb-galera \
    MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP="yes"
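Changing the environment updates the pod template, so the remaining pod is restarted with force bootstrap enabled. Wait for the restart to finish
kubectl -n $INSTANCE_ID rollout status statefulset/mariadb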
If the nodes are already crashing or not ready, delete all pods
kubectl -n $INSTANCE_ID delete pods -l app.kubernetes.io/name=mariadb-galera
Disable force bootstrap again
kubectl -n $INSTANCE_ID set env statefulset/mariadb -c mariadb-galera \
    MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP="no"

Restore cluster to 3 nodes

Once the single node is bootstrapped and running, scale the cluster back to 3 replicas
kubectl -n $INSTANCE_ID scale statefulset mariadb \
    --replicas 3
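To verify that all three nodes have joined, check wsrep_cluster_size on any node. This sketch assumes the root password is exposed to the container as MARIADB_ROOT_PASSWORD; adjust it to however your deployment stores credentials.
kubectl -n $INSTANCE_ID exec mariadb-0 -c mariadb-galera -- bash -c \
    'mysql -uroot -p"$MARIADB_ROOT_PASSWORD" -e "SHOW STATUS LIKE \"wsrep_cluster_size\""'
The reported value should be 3.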

Always make sure to disable force bootstrap before scaling a cluster up.
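You can double-check the current value before scaling up
kubectl -n $INSTANCE_ID set env statefulset/mariadb -c mariadb-galera --list
The output should show MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP=no.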