Add a storage node

Steps to add a storage node to an OpenShift 4 cluster on Exoscale.

Starting situation

  • You already have an OpenShift 4 cluster on Exoscale

  • You have admin-level access to the cluster

  • You want to add a new storage node to the cluster

Prerequisites

The following CLI utilities need to be available locally. All of them are used by the commands in this guide:

  • commodore
  • kubectl and oc
  • vault (Vault CLI)
  • docker
  • jq
  • yq (version 4 or later)
  • curl
  • git
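
A quick way to check that everything is available before you start is a small shell loop over the tools listed above (a minimal sketch; adjust the list to your setup):

    for tool in commodore kubectl oc vault docker jq yq curl git; do
      command -v "$tool" >/dev/null || echo "missing: $tool"
    done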

Prepare local environment

  1. Create local directory to work in

    We strongly recommend creating an empty directory, unless you already have a work directory for the cluster you’re about to work on. This guide will run Commodore in the directory created in this step.

    export WORK_DIR=/path/to/work/dir
    mkdir -p "${WORK_DIR}"
    pushd "${WORK_DIR}"
  2. Configure API access

    Access to cloud API
    export EXOSCALE_API_KEY=<exoscale-key> (1)
    export EXOSCALE_API_SECRET=<exoscale-secret>
    export EXOSCALE_ZONE=<exoscale-zone> (2)
    export EXOSCALE_S3_ENDPOINT="sos-${EXOSCALE_ZONE}.exo.io"
    1 We recommend setting up an IAMv3 role called unrestricted with "Default Service Strategy" set to allow if it doesn’t exist yet.
    2 All lower case. For example ch-dk-2.
    Access to VSHN GitLab
    # From https://git.vshn.net/-/profile/personal_access_tokens, "api" scope is sufficient
    export GITLAB_TOKEN=<gitlab-api-token>
    export GITLAB_USER=<gitlab-user-name>
    Access to VSHN Lieutenant
    # For example: https://api.syn.vshn.net
    # IMPORTANT: do NOT add a trailing `/`, otherwise the commands below will fail.
    export COMMODORE_API_URL=<lieutenant-api-endpoint>

    # Set Project Syn cluster and tenant ID
    export CLUSTER_ID=<lieutenant-cluster-id> # Looks like: c-<something>
    export TENANT_ID=$(curl -sH "Authorization: Bearer $(commodore fetch-token)" ${COMMODORE_API_URL}/clusters/${CLUSTER_ID} | jq -r .tenant)
    Configuration for hieradata commits
    export GIT_AUTHOR_NAME=$(git config --global user.name)
    export GIT_AUTHOR_EMAIL=$(git config --global user.email)
    export TF_VAR_control_vshn_net_token=<control-vshn-net-token> # use your personal SERVERS API token from https://control.vshn.net/tokens
  3. Get required tokens from Vault

    Connect with Vault
    export VAULT_ADDR=https://vault-prod.syn.vshn.net
    vault login -method=oidc
    Grab the LB hieradata repo token from Vault
    export HIERADATA_REPO_SECRET=$(vault kv get \
      -format=json "clusters/kv/lbaas/hieradata_repo_token" | jq '.data.data')
    export HIERADATA_REPO_USER=$(echo "${HIERADATA_REPO_SECRET}" | jq -r '.user')
    export HIERADATA_REPO_TOKEN=$(echo "${HIERADATA_REPO_SECRET}" | jq -r '.token')
  4. Compile the catalog for the cluster. Having the catalog available locally enables us to run Terraform for the cluster to make any required changes.

    commodore catalog compile "${CLUSTER_ID}"
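
    If the compilation succeeds, the cluster catalog is now available in the work directory. The Terraform configuration used later in this guide lives under catalog/manifests/openshift4-terraform/; listing it is a quick, optional sanity check:

    ls catalog/manifests/openshift4-terraform/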

Set alert silence

  1. Set a silence in Alertmanager for all rook-ceph alerts

    if [[ "$OSTYPE" == "darwin"* ]]; then alias date=gdate; fi
    job_name=$(printf "POST-silence-rook-ceph-alerts-$(date +%s)" | tr '[:upper:]' '[:lower:]')
    silence_duration='+60 minutes' (1)
    kubectl --as=cluster-admin -n openshift-monitoring create -f- <<EOJ
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ${job_name}
      labels:
        app: silence-rook-ceph-alerts
    spec:
     backoffLimit: 0
     template:
      spec:
        restartPolicy: Never
        containers:
          - name: silence
            image: quay.io/appuio/oc:v4.13
            command:
            - bash
            - -c
            - |
              curl_opts=( --cacert /etc/ssl/certs/serving-certs/service-ca.crt --header "Content-Type: application/json" --header "Authorization: Bearer \$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --resolve alertmanager-main.openshift-monitoring.svc.cluster.local:9095:\$(getent hosts alertmanager-operated.openshift-monitoring.svc.cluster.local | awk '{print \$1}' | head -n 1) --silent )
              read -d "" body << EOF
              {
                "matchers": [
                  {
                    "name": "syn_component",
                    "value": "rook-ceph",
                    "isRegex": false
                  }
                ],
                "startsAt": "$(date -u +'%Y-%m-%dT%H:%M:%S')",
                "endsAt": "$(date -u +'%Y-%m-%dT%H:%M:%S' --date "${silence_duration}")",
                "createdBy": "$(kubectl config current-context | cut -d/ -f3)",
                "comment": "Silence rook-ceph alerts"
              }
              EOF
    
              curl "\${curl_opts[@]}" \
                "https://alertmanager-main.openshift-monitoring.svc.cluster.local:9095/api/v2/silences" \
                -XPOST -d "\${body}"
    
            volumeMounts:
            - mountPath: /etc/ssl/certs/serving-certs/
              name: ca-bundle
              readOnly: true
            - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              name: kube-api-access
              readOnly: true
        serviceAccountName: prometheus-k8s
        volumes:
        - name: ca-bundle
          configMap:
            defaultMode: 288
            name: serving-certs-ca-bundle
        - name: kube-api-access
          projected:
            defaultMode: 420
            sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: 'token'
    EOJ
    1 Adjust this variable to create a longer or shorter silence
  2. Extract Alertmanager silence ID from job logs

    silence_id=$(kubectl --as=cluster-admin -n openshift-monitoring logs jobs/${job_name} | \
      jq -r '.silenceID')
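
    Before moving on, check that the job completed and that the extracted ID is a non-empty UUID (a small, optional sanity check):

    kubectl --as=cluster-admin -n openshift-monitoring get "job/${job_name}"
    echo "Silence ID: ${silence_id}"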

Update Cluster Config

  1. Update cluster config.

    pushd "inventory/classes/${TENANT_ID}/"
    
    yq eval -i ".parameters.openshift4_terraform.terraform_variables.storage_count =
      (.parameters.openshift4_terraform.terraform_variables.storage_count // 3) + 1" \
      ${CLUSTER_ID}.yml
    
    yq eval -i ".parameters.rook_ceph.ceph_cluster.node_count =
      (.parameters.rook_ceph.ceph_cluster.node_count // 3) + 1" \
      ${CLUSTER_ID}.yml
  2. Review and commit

    # Have a look at the file ${CLUSTER_ID}.yml.
    
    git commit -a -m "Add storage node to cluster ${CLUSTER_ID}"
    git push
    
    popd
  3. Compile and push cluster catalog

    commodore catalog compile ${CLUSTER_ID} --push -i
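
    After the push, you can double-check that both values were bumped as intended in the cluster config (an optional sketch reusing the keys from the first step):

    yq eval '.parameters.openshift4_terraform.terraform_variables.storage_count, .parameters.rook_ceph.ceph_cluster.node_count' \
      "inventory/classes/${TENANT_ID}/${CLUSTER_ID}.yml"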

Prepare Terraform environment

  1. Configure Terraform secrets

    cat <<EOF > ./terraform.env
    EXOSCALE_API_KEY
    EXOSCALE_API_SECRET
    TF_VAR_control_vshn_net_token
    GIT_AUTHOR_NAME
    GIT_AUTHOR_EMAIL
    HIERADATA_REPO_TOKEN
    EOF
  2. Set up Terraform

    Prepare Terraform execution environment
    # Set terraform image and tag to be used
    tf_image=$(\
      yq eval ".parameters.openshift4_terraform.images.terraform.image" \
      dependencies/openshift4-terraform/class/defaults.yml)
    tf_tag=$(\
      yq eval ".parameters.openshift4_terraform.images.terraform.tag" \
      dependencies/openshift4-terraform/class/defaults.yml)
    
    # Generate the terraform alias
    base_dir=$(pwd)
    alias terraform='docker run -it --rm \
      -e REAL_UID=$(id -u) \
      --env-file ${base_dir}/terraform.env \
      -w /tf \
      -v $(pwd):/tf \
      --ulimit memlock=-1 \
      "${tf_image}:${tf_tag}" /tf/terraform.sh'
    
    export GITLAB_REPOSITORY_URL=$(curl -sH "Authorization: Bearer $(commodore fetch-token)" ${COMMODORE_API_URL}/clusters/${CLUSTER_ID} | jq -r '.gitRepo.url' | sed 's|ssh://||; s|/|:|')
    export GITLAB_REPOSITORY_NAME=${GITLAB_REPOSITORY_URL##*/}
    export GITLAB_CATALOG_PROJECT_ID=$(curl -sH "Authorization: Bearer ${GITLAB_TOKEN}" "https://git.vshn.net/api/v4/projects?simple=true&search=${GITLAB_REPOSITORY_NAME/.git}" | jq -r ".[] | select(.ssh_url_to_repo == \"${GITLAB_REPOSITORY_URL}\") | .id")
    export GITLAB_STATE_URL="https://git.vshn.net/api/v4/projects/${GITLAB_CATALOG_PROJECT_ID}/terraform/state/cluster"
    
    pushd catalog/manifests/openshift4-terraform/
    Initialize Terraform
    terraform init \
      "-backend-config=address=${GITLAB_STATE_URL}" \
      "-backend-config=lock_address=${GITLAB_STATE_URL}/lock" \
      "-backend-config=unlock_address=${GITLAB_STATE_URL}/lock" \
      "-backend-config=username=${GITLAB_USER}" \
      "-backend-config=password=${GITLAB_TOKEN}" \
      "-backend-config=lock_method=POST" \
      "-backend-config=unlock_method=DELETE" \
      "-backend-config=retry_wait_min=5"

Add node

  1. Run Terraform to spin up a new node

    terraform apply
  2. Approve node certificates for the new storage node

    # Once CSRs in state Pending show up, approve them.
    # The approval needs to be run twice, since two CSRs must be approved for each node.
    
    kubectl --as=cluster-admin get csr -w
    
    oc --as=cluster-admin get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | \
      xargs oc --as=cluster-admin adm certificate approve
    
    kubectl --as=cluster-admin get nodes
  3. Label and taint the new storage node

    kubectl get node -ojson | \
      jq -r '.items[] | select(.metadata.name | test("storage-")).metadata.name' | \
      xargs -I {} kubectl --as=cluster-admin label node {} node-role.kubernetes.io/storage=
    
    kubectl --as=cluster-admin taint node -lnode-role.kubernetes.io/storage \
      storagenode=True:NoSchedule
  4. Wait until the new OSD is launched. This requires ArgoCD to have run and the Rook-Ceph operator to notice the change. This might take a few minutes.

    kubectl --as=cluster-admin -n syn-rook-ceph-cluster get pods -w
  5. Wait for the data to be redistributed ("backfilled") to the new OSD.

    When backfilling is completed, ceph status should show all PGs as active+clean.
    Depending on the number of OSDs in the storage cluster and the amount of data that needs to be moved, this may take a while.

    If the storage cluster is mostly idle, you can speed up backfilling by temporarily setting the following configuration.

    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph config set osd osd_mclock_override_recovery_settings true (1)
    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph config set osd osd_max_backfills 10 (2)
    1 Allow overwriting osd_max_backfills.
    2 The number of PGs which are allowed to backfill in parallel. Adjust up or down depending on client load on the storage cluster.

    After backfilling is completed, you can remove the configuration with

    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph config rm osd osd_max_backfills
    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph config rm osd osd_mclock_override_recovery_settings
    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph status
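
    To confirm that the new OSD joined the cluster, you can also inspect the OSD tree; the new storage node should show up as an additional host with one OSD beneath it (optional):

    kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
      ceph osd tree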

Finish up

  1. Remove silence in Alertmanager

    if [[ "$OSTYPE" == "darwin"* ]]; then alias date=gdate; fi
    job_name=$(printf "DELETE-silence-rook-ceph-alerts-$(date +%s)" | tr '[:upper:]' '[:lower:]')
    kubectl --as=cluster-admin -n openshift-monitoring create -f- <<EOJ
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ${job_name}
      labels:
        app: silence-rook-ceph-alerts
    spec:
     backoffLimit: 0
     template:
      spec:
        restartPolicy: Never
        containers:
          - name: silence
            image: quay.io/appuio/oc:v4.13
            command:
            - bash
            - -c
            - |
              curl_opts=( --cacert /etc/ssl/certs/serving-certs/service-ca.crt --header "Content-Type: application/json" --header "Authorization: Bearer \$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --resolve alertmanager-main.openshift-monitoring.svc.cluster.local:9095:\$(getent hosts alertmanager-operated.openshift-monitoring.svc.cluster.local | awk '{print \$1}' | head -n 1) --silent )
    
              curl "\${curl_opts[@]}" \
                "https://alertmanager-main.openshift-monitoring.svc.cluster.local:9095/api/v2/silence/${silence_id}" \
                -XDELETE
    
            volumeMounts:
            - mountPath: /etc/ssl/certs/serving-certs/
              name: ca-bundle
              readOnly: true
            - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              name: kube-api-access
              readOnly: true
        serviceAccountName: prometheus-k8s
        volumes:
        - name: ca-bundle
          configMap:
            defaultMode: 288
            name: serving-certs-ca-bundle
        - name: kube-api-access
          projected:
            defaultMode: 420
            sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: 'token'
    EOJ
  2. Clean up Alertmanager silence jobs

    kubectl --as=cluster-admin -n openshift-monitoring delete jobs -l app=silence-rook-ceph-alerts
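
    If the cleanup worked, the selector no longer matches any jobs (optional check):

    kubectl --as=cluster-admin -n openshift-monitoring get jobs -l app=silence-rook-ceph-alerts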