Cloudscale

"Split" and "Combined" Architectures

In this document there are references to two separate architectures, "Split" and "Combined." The latter has taken precedence over the former in the latest clusters. There are still clusters built using the "Split" architecture, which is based on Ansible, the tool previously used to configure OpenShift clusters.

The following diagram shows the "Combined" architecture of the clusters set up in Cloudscale.

[Diagram: "Combined" architecture of a Cloudscale cluster]

The setup of clusters in Cloudscale follows the general guidelines of other OpenShift installations, with the following caveats.

Network

The network is divided into two main sections:

  1. WWW (publicly available)

  2. LAN (not publicly available)

The network is built upon Cloudscale’s official virtual IP infrastructure.

The internal VIP of the Master LB is a layer 2 network VIP; no special routing is required.

Load Balancers

There are two load balancers:

  1. Master LB

  2. App LB

Master LB

This load balancer handles the external "admin" traffic: Kubernetes API, OpenShift API, etc.

Internally, this LB also handles all API traffic from worker nodes. This means that individual nodes don’t connect to the masters directly, but rather resolve the internal IP of the Master LB.
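
As an illustration only, the following Python sketch shows how a worker node's view of the API can be checked: it resolves an internal API hostname and verifies TCP reachability. Both the hostname api-int.cluster.example.com and port 6443 are placeholders, not the actual values used on these clusters (older OpenShift releases used 8443, for example).

    import socket

    # Hypothetical internal API endpoint; on worker nodes this name should resolve
    # to the internal VIP of the Master LB, not to an individual master node.
    API_HOST = "api-int.cluster.example.com"  # placeholder, cluster-specific
    API_PORT = 6443                           # placeholder; older setups may use 8443

    addresses = sorted({info[4][0] for info in
                        socket.getaddrinfo(API_HOST, API_PORT, proto=socket.IPPROTO_TCP)})
    print(f"{API_HOST} resolves to: {addresses}")

    for address in addresses:
        # A failed connection here points at the Master LB (or its VIP), not at the masters.
        with socket.create_connection((address, API_PORT), timeout=3):
            print(f"API reachable via {address}:{API_PORT}")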

This load balancer is required because Cloudscale doesn’t offer "load balancer as a service."

It’s made of the following components:

  1. nginx for TLS.

  2. Keepalived to ensure the VIP is always attached to one of the Master LB instances.

    This allows for maintenance without downtime (see the health-check sketch after this list).

  3. iptables for traffic control.
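
Keepalived failover is typically driven by a tracked health check: when the check fails on the VIP holder, its priority drops and the VIP moves to the other Master LB instance. The Python sketch below is only an example of such a check, for instance invoked via keepalived's track_script; the local address and port 127.0.0.1:6443 are assumptions and will differ per cluster.

    #!/usr/bin/env python3
    # Example health check a keepalived track_script could invoke (sketch only).
    # Assumption: nginx on this Master LB listens locally on 127.0.0.1:6443.
    import socket
    import sys

    CHECK_ADDR = ("127.0.0.1", 6443)  # placeholder address and port

    try:
        with socket.create_connection(CHECK_ADDR, timeout=2):
            sys.exit(0)  # healthy: this instance may hold the VIP
    except OSError:
        sys.exit(1)      # unhealthy: priority drops, VIP fails over to the peer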

App LB

The App Load Balancer manages the usual traffic to the applications running in the cluster. Some characteristics:

  • They are regular OpenShift nodes by function.

  • They are configured specifically for this function.

  • These nodes have special labels (see the sketch after this list).

  • Only certain workloads are configured there:

    • Application router.

    • Sidekicks, such as keepalived.
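
The exact label keys on the App LB nodes are cluster-specific. As a hedged illustration, the Python sketch below uses the official Kubernetes Python client to list nodes carrying such a label; node-role.kubernetes.io/applb=true is a placeholder selector, not the real key used on these clusters.

    # Sketch: list nodes carrying a (hypothetical) App LB label.
    # Requires the "kubernetes" Python client and a valid kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # "node-role.kubernetes.io/applb=true" is a placeholder label selector.
    nodes = v1.list_node(label_selector="node-role.kubernetes.io/applb=true")
    for node in nodes.items:
        print(node.metadata.name, node.metadata.labels)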

By customer request, the App LB nodes can become "compute" nodes running customer workloads, but this is actively discouraged.

Default Gateway

The Default Gateway is attached as a VIP to one of the Master LB machines. Any traffic from inside the cluster to the outside world goes through the Default Gateway.
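
To confirm on a node that outbound traffic really leaves through that VIP, the current default route can be inspected. The Python sketch below reads /proc/net/route on Linux and prints the interface and gateway of the default route; it is purely illustrative and assumes a standard Linux routing table.

    # Sketch: print the interface and gateway of the node's default route (Linux only).
    import socket
    import struct

    def default_gateway():
        with open("/proc/net/route") as routes:
            next(routes)  # skip the header line
            for line in routes:
                iface, destination, gateway, *_ = line.split()
                if destination == "00000000":  # 0.0.0.0/0, i.e. the default route
                    # /proc/net/route stores addresses as little-endian hex
                    return iface, socket.inet_ntoa(struct.pack("<L", int(gateway, 16)))
        return None

    # Expected result: the gateway VIP held by one of the Master LB machines.
    print(default_gateway())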

In some clusters, the Default Gateway is located on the secondary Infra VM named "Infra 1," while Ansible is on "Infra 2." Please refer to the documentation of these nodes later in this document.

GlusterFS

GlusterFS may be present in Cloudscale clusters, usually implemented as three VMs, but this is optional and will be described in a separate document.

Operations Traffic

In the "Split" architecture, SSH traffic would go directly to Infra2 (Ansible). Instead, it now goes directly to "Infra" in the "Combined" architecture.

"Infra 1" and "Infra 2"

Clusters older than five years usually have these two infrastructure nodes, called "Infra 1" and "Infra 2":

  1. Ansible

  2. Icinga Satellite (regular VSHN Icinga Satellite)

The reason for these two "Infra" boxes is that Ansible used to be installed in a separate VM. Newer clusters no longer have these nodes.

Examples

The "Split architecture" is still used, for example, in some certain APPUIO public clusters:

  • For historical reasons, some clusters are a mixture of both architectures. Whenever possible, they should be migrated to the "Combined" architecture.

  • Others use the "Combined" architecture, but are completely hidden behind layers of firewalls (for security reasons). These clusters in particular could serve as an example of a core architecture that can be expanded in creative ways.

The OpenShift version is irrelevant to the distinction between the "Split" and "Combined" architectures.