Cluster configuration options on Exoscale

The OpenShift cluster setup can be customized by configuring .parameters.openshift4_terraform.terraform_variables. This page gives an overview of the configuration options. Further information on the Terraform module and its configuration can be found on GitHub.
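
As a minimal sketch, an override placed at that location could look like the following; the worker_count value shown here is purely illustrative:

parameters:
  openshift4_terraform:
    terraform_variables:
      worker_count: 4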

Nodes

Through Terraform variables, you can control the instance size, count, and disk size for each VM type, and you can configure additional worker groups.

See the Exoscale pricing documentation for available instance sizes.
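
For instance, a larger instance size for a node type can be selected by overriding the corresponding size variable. The Huge size below is only an illustration, assuming it appears in the Exoscale pricing documentation; pick a size from that list:

terraform_variables:
  infra_size: "Huge"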

Master Nodes

Key          | Default | Description
master_count | 3       | The number of Kubernetes master nodes running the control plane.
master_state | Running | State of the Kubernetes master node VMs.

We don’t provide support for configurations with master_count != 3.

Infrastructure Nodes

Key         | Default     | Description
infra_count | 3           | The number of infrastructure nodes running the OpenShift Container Platform environment.
infra_state | Running     | State of the infrastructure node VMs.
infra_size  | Extra-large | Size of the infrastructure node VMs.

We currently don’t provide support for any of the following configurations:

  • infra_size smaller than Extra-large.

  • infra_count < 3.

Storage Nodes

Key                       | Default         | Description
storage_count             | 3               | The number of storage nodes running Ceph.
storage_state             | Running         | State of the storage node VMs.
storage_size              | CPU-extra-large | Size of the storage node VMs.
storage_cluster_disk_size | 180             | Ceph cluster storage per node in GB.

We currently don’t provide support for any of the following configurations:

  • storage_size smaller than CPU-extra-large.

  • storage_count < 3.

  • storage_cluster_disk_size < 180.
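
As an illustration, a supported storage configuration that scales out the Ceph cluster could look like this; the values are examples only and stay within the supported minimums listed above:

terraform_variables:
  storage_count: 5
  storage_cluster_disk_size: 200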

Worker Nodes

Key                   | Default     | Description
worker_count          | 3           | The number of worker nodes running the customer's workload.
worker_state          | Running     | State of the worker node VMs.
worker_size           | Extra-large | Size of the worker node VMs.
worker_data_disk_size | 0           | Additional storage per worker node that can be used as local storage.

We currently don’t provide support for any of the following configurations:

  • worker_size smaller than Extra-large.

  • worker_count < 3.
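
For example, a worker pool with an extra data disk for local storage could be configured as follows; the count and disk size are illustrative values:

terraform_variables:
  worker_count: 5
  worker_data_disk_size: 100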

Additional Worker Groups

You can add additional worker node groups. Each worker group is a collection of VMs with the same instance size and volume size.

Worker groups are configured through the additional_worker_groups variable. This variable is a map from worker group names (used as node name prefixes) to objects specifying the node instance size, count, data disk size, and state.

The following example adds a worker group called cpu1 with 3 instances of size CPU-huge and a data disk of 248 GB.

terraform_variables:
  additional_worker_groups:
    "cpu1":
      size: "CPU-huge"
      count: 3
      data_disk_size: 248

Please note that you can't use the names master, infra, storage, or worker for additional worker groups. These names are prohibited to ensure there are no collisions between the generated node names of different worker groups.

Key            | Default | Description
count          |         | (Required) The number of worker nodes in this group.
state          | Running | State of the worker node VMs.
size           |         | (Required) Size of the worker node VMs.
data_disk_size | 0       | Additional storage per worker node that can be used as local storage.

We currently don't provide support for additional worker groups with a size smaller than Extra-large.
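
A group that sets the optional keys explicitly could look like the following sketch; the group name batch1 and all values are illustrative:

terraform_variables:
  additional_worker_groups:
    "batch1":
      size: "CPU-huge"
      count: 3
      state: "Running"
      data_disk_size: 100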

Example

parameters:
  openshift4_terraform:
    terraform_variables:
      root_disk_size: 128
      worker_count: 4
      worker_size: "Medium"
      worker_volume_size_gb: 512
      additional_worker_groups:
        "cpu1":
          size: "CPU-huge"
          count: 3