Add a cluster to APPUiO Cloud
This guide describes the steps required to turn an OpenShift 4 cluster into an APPUiO Zone.
VSHN employees should follow Activate Zone instead of continuing this guide.
Configure Keycloak
- Create a new Keycloak client with the following settings, leaving the others at their default values (a CLI sketch follows below):

  Client ID = appuio_<c-cluster-id> (1)
  Access Type = confidential
  Valid Redirect URIs = https://oauth-openshift.apps.cluster-id.tld/oauth2callback/APPUiO
  Base URL = https://console.cluster-id.tld/
  [ Authentication Flow Overrides ]
  Browser Flow = APPUiO Browser RBAC (2)

1 Each enabled APPUiO Zone gets its own client, named with the prefix appuio_ followed by the cluster ID.
2 See Setup RBAC
VSHN uses id.vshn.net as the IDP.
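The same client can also be created from the command line with Keycloak's kcadm.sh admin CLI. This is only a rough sketch, assuming admin access to the IdP; the server URL is taken from the issuer used later in this guide, and the settings mirror the list above:

  # Sketch only: adjust server URL, realm and credentials to your Keycloak setup
  kcadm.sh config credentials --server https://id.appuio.cloud/auth --realm master --user admin
  # "Access Type = confidential" corresponds to publicClient=false
  kcadm.sh create clients -r appuio-cloud \
    -s clientId=appuio_<c-cluster-id> \
    -s publicClient=false \
    -s 'redirectUris=["https://oauth-openshift.apps.cluster-id.tld/oauth2callback/APPUiO"]' \
    -s baseUrl=https://console.cluster-id.tld/
  # The "Browser Flow = APPUiO Browser RBAC" override references a flow by its ID,
  # so it is easiest to set in the admin console after creating the client.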
Configure openshift4-authentication
- After adding the cluster as a new client to the APPUiO IdP, add the client secret to Vault (see the sketch after this list). The value is displayed in a grey box on the "Credentials" tab of the Keycloak client settings.
- Add component configuration

  parameters:
    openshift4_authentication:
      secrets:
        appuio-cloud-keycloak:
          clientSecret: '?{vaultkv:${cluster:tenant}/${cluster:name}/oidc/appuio-keycloak/clientSecret}' (1)
      identityProviders:
        appuio_keycloak:
          name: APPUiO-Cloud
          type: OpenID
          mappingMethod: add
          openID:
            issuer: https://id.appuio.cloud/auth/realms/appuio-cloud
            clientID: ${cluster:name}
            clientSecret:
              name: appuio-cloud-keycloak
            claims: (2)
              preferredUsername:
                - preferred_username
              name:
                - name
              email:
                - email

1 The Vault path for the client secret
2 See also User object names in the OpenShift cluster
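A sketch for storing the client secret at the Vault path referenced above, assuming the standard Project Syn layout where cluster secrets live under the clusters/kv KV mount; the Vault address and login method depend on your environment:

  export VAULT_ADDR=https://vault.example.com   # assumption: use your own Vault instance
  vault login -method=oidc

  # The path matches the vaultkv reference in the component configuration above
  vault kv put "clusters/kv/${TENANT_ID}/${CLUSTER_ID}/oidc/appuio-keycloak" \
    clientSecret=<value from the client's "Credentials" tab>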
Configure group-sync-operator
The group-sync-operator is required to sync Keycloak groups to each APPUiO Zone.
- Add component configuration

  parameters:
    group_sync_operator:
      secrets:
        sync-appuio-keycloak-groups:
          stringData:
            username: '?{vaultkv:${cluster:tenant}/${cluster:name}/oidc/appuio-keycloak-sync/username}'
            password: '?{vaultkv:${cluster:tenant}/${cluster:name}/oidc/appuio-keycloak-sync/password}'
      sync:
        sync-keycloak-groups:
          schedule: '* * * * *'
          providers:
            keycloak:
              keycloak:
                url: https://id.appuio.cloud
                credentialsSecret:
                  name: sync-appuio-keycloak-groups
                loginRealm: master
                realm: appuio-cloud
                scope: sub
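The configuration above expects the sync user's credentials in Vault. If they aren't stored for this cluster yet, the following sketch stores them (same assumptions about the Vault mount as in the previous section) and then checks that groups arrive on the zone:

  vault kv put "clusters/kv/${TENANT_ID}/${CLUSTER_ID}/oidc/appuio-keycloak-sync" \
    username=<sync-user> password=<sync-user-password>

  # Once the component is synced and the first sync has run, the Keycloak groups
  # appear as OpenShift Group objects on the zone:
  oc get groups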
Configure keycloak-attribute-sync-controller
The keycloak-attribute-sync-controller is required to sync Keycloak user attributes to each APPUiO Zone.
- Add component configuration

  parameters:
    keycloak_attribute_sync_controller:
      sync_credentials:
        sync-default-org-credentials:
          stringData:
            username: '?{vaultkv:${cluster:tenant}/${cluster:name}/oidc/appuio-keycloak-sync/username}' (1)
            password: '?{vaultkv:${cluster:tenant}/${cluster:name}/oidc/appuio-keycloak-sync/password}'
      sync_configurations:
        sync-default-org:
          url: https://id.appuio.cloud
          loginRealm: master
          credentialsSecret:
            name: sync-default-org-credentials
          realm: appuio-cloud
          attribute: appuio.io/default-organization
          targetAnnotation: appuio.io/default-organization
          schedule: '@every 1m'

1 The user for syncing attributes is the same as the one used for the group-sync-operator.

- Compile and push the cluster catalog (see the sketch after this list)
- Wait for Argo CD to sync the config
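Compiling and pushing the catalog is done with Commodore; a minimal sketch, assuming a working Commodore setup for the tenant:

  # Compile the cluster catalog and push it to the catalog repository
  commodore catalog compile "${CLUSTER_ID}" --push --interactive

  # Optionally watch Argo CD (running in the "syn" namespace on Project Syn clusters)
  # pick up and sync the change:
  kubectl -n syn get applications.argoproj.io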
Configure metering metrics collection
We collect a small set of metrics from each APPUiO Zone for metering. This is done by deploying a Thanos Receiver on each zone and configuring the zone’s cluster monitoring to send selected metrics to the Thanos Receiver via Prometheus’s remote write feature.
- Create an S3 bucket which can be accessed with the same credentials as the cluster's registry bucket. Use the name ${CLUSTER_ID}-appuio-metering for the bucket (a CLI sketch for creating the bucket follows after this list).
- Add configuration for components openshift4-monitoring and thanos

  applications:
    - thanos as appuio-metering-thanos-receiver
  parameters:
    openshift4_monitoring:
      configs:
        prometheusK8s:
          remoteWrite:
            - name: appuio-metering
              url: http://thanos-receive.appuio-metering-thanos-receiver.svc:19291/api/v1/receive
              headers:
                THANOS-TENANT: '${cluster:name}' (1)
              writeRelabelConfigs:
                - sourceLabels: ['__name__']
                  regex: '(container_memory_usage_bytes|kube_pod_container_resource_requests|kube_persistentvolumeclaim_resource_requests_storage_bytes)' (2)
                  action: keep
    appuio_metering_thanos_receiver:
      namespace: appuio-metering-thanos-receiver
      cluster_kubernetes_version: '${dynamic_facts:kubernetesVersion:major}.${dynamic_facts:kubernetesVersion:minor}' (3)
      commonConfig:
        volumeClaimTemplate:
          spec:
            storageClassName: ssd (4)
        # Override default security context for all components
        securityContext: {}
      objectStorageConfig:
        type: S3
        config:
          bucket: '${cluster:name}-appuio-metering' (5)
          endpoint: objects.lpg.cloudscale.ch (6)
          region: lpg (6)
          access_key: '?{vaultkv:${cluster:tenant}/${cluster:name}/cloudscale/s3_access_key}' (7)
          secret_key: '?{vaultkv:${cluster:tenant}/${cluster:name}/cloudscale/s3_secret_key}' (7)
      query:
        enabled: false  # we don't need the querier on the zones
      receive:
        enabled: true
        replicas: 2

1 Send the extra THANOS-TENANT header so that Thanos Receiver sets a useful tenant_id label on the received metrics.
2 The set of metrics to collect (as an RE2 regex)
3 The cluster's Kubernetes version. The example makes use of the kubernetesVersion dynamic fact reported by Steward on the cluster.
4 The storage class to use for the Thanos Receiver PVCs. We highly recommend using a storage class suitable for database workloads.
5 The name of the bucket created in the previous step
6 The S3 endpoint and region for the bucket. Adjust for zones which are not on cloudscale.ch in the LPG region.
7 The S3 credentials for the bucket. Adjust the secret references so they point to the Vault secret that's used for the cluster's registry bucket.
- Compile and push the cluster catalog
- Wait for Argo CD to sync the changes
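For the first step above (creating the metering bucket), one possible sketch uses the MinIO client with the registry bucket's existing credentials; the alias name is arbitrary, and endpoint and region must match the zone's object storage:

  # Reuse the registry bucket's credentials so the objectStorageConfig above works as-is
  mc alias set cloudscale https://objects.lpg.cloudscale.ch "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
  mc mb "cloudscale/${CLUSTER_ID}-appuio-metering"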
Configure AppCat ObjectStorage provider
Please refer to the dedicated page to set up AppCat ObjectStorage.
Add Cluster to Status Page
- Log in to Statuspal
- Select the APPUiO Cloud status page
  You need Admin rights on the "APPUiO Cloud" status page to be able to add new services.
- Create a service for the APPUiO Zone
- Create the following other services and select the previously created APPUiO Zone service as their parent:
  - OpenShift Console
    - Description: URL to the OpenShift Console
    - Select Statuspal monitoring
      - Method: HEAD
      - Ping url: URL to the OpenShift Console
      - Check Automatically create incident
      - Check Pause monitoring during maintenances
  - OpenShift / Kubernetes API
    - Description: URL to the OpenShift / Kubernetes API
    - Select Statuspal monitoring
      - Method: GET
      - Ping url: URL to the OpenShift / Kubernetes API + /healthz
      - Check Automatically create incident
      - Check Pause monitoring during maintenances
  - Image Registry
    - Description: URL to the Image Registry
    - Select Statuspal monitoring
      - Method: HEAD
      - Ping url: URL to the Image Registry
      - Check Automatically create incident
      - Check Pause monitoring during maintenances
  - Logging
    - Description: URL to the Logging
    - Select Statuspal monitoring
      - Method: HEAD
      - Ping url: URL to the Logging + /app/kibana
      - Check Automatically create incident
      - Check Pause monitoring during maintenances
  - Networking
  - Ingress