Add a Servala Control Cluster

The Servala Control Cluster (aka control plane) runs in a vcluster on one of the CSP’s service clusters, usually on the first provisioned one. Setting up a new control plane involves configuration on the service cluster as well as on the vcluster itself.

Setup the vcluster on the service cluster

This configuration has to be added to the service cluster so that a vcluster can be provisioned.

The base service cluster configuration also needs to be applied.

Register cluster

Register the new cluster on the control panel.

ID: See ADR
Sales Order: Check here
SLA: Best Effort
Access Policy: regular
Release Channel: stable
Distro: k3d
Provider: $CSP
Region: $Region
Rancher Cluster ID: none

Save the steward token URL for later; it’s needed for the registration_url in the Syn configuration below.

Ensure all necessary credentials are in Vault

Depending on the CSP, additional credentials have to be added to Vault.

Setup DNS

The hostname that is specified in the next section needs to be added to the DNS configuration.

It’s a CNAME entry pointing to the service cluster’s ingress host.

Check out the ADR to figure out the name of the control plane.
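As a sketch, the entry would look like this in BIND zone file syntax (all names here are placeholders, assuming a hypothetical control plane csp1-region1-prod1 and a hypothetical ingress host of the service cluster):

```
; CNAME from the control plane API hostname to the service cluster's ingress host
api.csp1-region1-prod1.control.servala.com. 3600 IN CNAME ingress.csp1-region1-prod1.example.com.
```

Once the record has propagated, it can be verified with `dig +short CNAME api.csp1-region1-prod1.control.servala.com`.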

Syn configuration

Service cluster syn configuration
classes:
  - .vcluster-host

applications:
  - vcluster as servala-control-plane

parameters:
  components:
    vcluster:
      version: v2.0.0

  servala_control_plane:
    k3s:
      additional_args:
      - --kube-apiserver-arg=oidc-client-id=appuio-managed_$clusterID (1)
    ingress:
      host: api.[csp]-[region]-[stage][counter].control.servala.com (2)
    ocp_route:
      host: api.[csp]-[region]-[stage][counter].control.servala.com (2)
    syn:
      registration_url: https://api.syn.vshn.net/install/steward.json?token=secret (3)
1 The OIDC client ID is autogenerated: appuio-managed_ followed by the cluster ID.
2 The API endpoint for the vcluster. Refer to the ADR for more details.
3 The generated steward URL from control.vshn.net.

Vcluster configuration

All the configuration in this chapter has to be done on the vcluster itself.

Make sure ArgoCD bootstraps properly

After the vcluster bootstraps, it’s very likely that ArgoCD hangs while reconfiguring itself, because it’s not able to apply some CRDs. To fix this, connect to the vcluster by port-forwarding the service servala-control-plane in the namespace servala-control-plane. A kubeconfig can be found in the same namespace in the secret vc-servala-control-plane.
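A possible way to do this (a sketch: the local port and file name are arbitrary, and the secret key config follows the usual vcluster convention):

```shell
# Forward the vcluster API server to localhost (local port 8443 is arbitrary)
kubectl -n servala-control-plane port-forward svc/servala-control-plane 8443:443 &

# Extract the vcluster kubeconfig from the secret (key "config" per vcluster convention)
kubectl -n servala-control-plane get secret vc-servala-control-plane \
  -o go-template='{{ index .data "config" | base64decode }}' > vcluster.kubeconfig

export KUBECONFIG=vcluster.kubeconfig
```

Depending on how the kubeconfig was generated, its server URL may need to be adjusted to point at https://localhost:8443 while the port-forward is running.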

Then clone the rendered cluster manifests from git.vshn.net and apply ArgoCD’s manifests again with server-side apply.

Fix stuck ArgoCD bootstrapping
kubectl apply -Rf manifests/argocd --server-side

Connect to the vcluster’s ArgoCD instance to check whether the sync now runs through. Afterward, all the other apps should also start to sync.

Create service cluster kube configs

Create all the kubeconfigs for all the service clusters that will be managed by this control plane.

The following script assumes that you use OpenShift4 access.

Generate service cluster kubeconfigs
TOKEN=$(ka -n syn-appcat get secrets appcat-control-plane -ogo-template='{{.data.token|base64decode}}')
export KUBECONFIG=service.kubeconfig
source .connection_facts
oc login --server=$API --token=$TOKEN
unset KUBECONFIG

After that, add the kubeconfigs to an appropriate location in Vault.
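For example, with the Vault CLI (a sketch: the clusters/kv mount and $TENANT_ID/$CLUSTER_ID are placeholders, and the path and key must match the vaultkv reference used in the Syn configuration below, where the last path segment is the secret key):

```shell
# Store the generated kubeconfig so that
# ?{vaultkv:${cluster:tenant}/${cluster:name}/kubeconfigs/cluster1} can resolve it
vault kv put clusters/kv/$TENANT_ID/$CLUSTER_ID/kubeconfigs \
  cluster1=@service.kubeconfig
```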

Syn configuration

This syn configuration should be applied to the newly generated cluster config in the tenant repository.

Full syn configuration for the control plane
classes:
  - t-servala.common
  - .vcluster (1)
  - global.apps.prometheus
  - .monitoring.alertmanager (1)

applications:
  - appcat
  - cert-manager
  - rbac

parameters:

  prometheus:
    instances:
      infra:
        prometheus:
          config:
            externalLabels:
              cluster_name: controlClusterID
  appcat:
    providers:
      cloudscale:
        enabled: true
    clusterManagementSystem:
      serviceClusterKubeconfigs:
        - name: cluster1
          config: ?{vaultkv:${cluster:tenant}/${cluster:name}/kubeconfigs/cluster1} (2)
      generic: (3)
        objectstorage:
          defaultComposition: cloudscale
          compositions:
            exoscale:
              enabled: false
            cloudscale:
              enabled: true
1 Contains general control plane configuration.
2 Add all previously generated kubeconfigs.
3 This will depend on the CSP.