The following diagram shows the architecture of the cluster setup in Exoscale.


The setup of clusters in Exoscale follows the general guidelines of other OpenShift installations, with the following caveats.


These are the main networking characteristics of Exoscale:

  • In Exoscale, each virtual machine always has a public IP address.

  • The incoming traffic is the same as in the general case.

  • Thanks to security groups (see below), the outside world cannot directly access any of the internal nodes.

  • The SSH traffic always goes through the infra node.
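Because SSH traffic always goes through the infra node, client-side access is typically configured with a jump host. A minimal OpenSSH client configuration sketch; the hostnames, user, and internal address range are made up for illustration:

```
# ~/.ssh/config — hostnames, user and addresses are hypothetical
Host exoscale-infra
    HostName infra.example.com      # public IP of the infra node
    User core

# Internal nodes are only reachable through the infra node
Host 10.0.0.*
    User core
    ProxyJump exoscale-infra
```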

Security Groups

Security groups are similar to AWS Security Groups: they act as externalized firewalls. The following groups are defined:

  • openshift_cluster

  • openshift_infra

  • openshift_master

  • openshift_nodes

  • openshift_gluster

  • openshift_masterlb

These groups regulate traffic by establishing rules about who can communicate with whom.

Group assignments to VMs:

  • All VMs belong to the openshift_cluster group.

  • Masters also belong to openshift_nodes.

  • Application load balancers are considered nodes.

  • The other groups are assigned to VMs according to their corresponding roles.
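The assignment rules above can be sketched as a small function. This is only an illustrative model of the membership logic, not an Exoscale API call, and the role names ("master", "app_lb", "infra", …) are assumptions:

```python
def security_groups(role):
    """Illustrative model of the security-group assignment rules.

    Role names are assumptions, not Exoscale API values.
    """
    # Every VM belongs to the cluster-wide group.
    groups = {"openshift_cluster"}
    if role == "master":
        # Masters also belong to openshift_nodes.
        groups |= {"openshift_master", "openshift_nodes"}
    elif role == "app_lb":
        # Application load balancers are considered nodes.
        groups |= {"openshift_nodes"}
    else:
        # Other roles map to their corresponding group.
        groups |= {"openshift_" + role}
    return groups

print(sorted(security_groups("master")))
# → ['openshift_cluster', 'openshift_master', 'openshift_nodes']
```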

Load Balancers

Exoscale doesn’t offer load balancers as a service. To load-balance traffic to the OpenShift API/console and to the applications hosted on OpenShift, load balancers are operated in pairs within VMs. Each pair shares a virtual IP, and the domain names are expected to point to these virtual IPs.
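A common way to implement a virtual IP shared between such an LB pair is VRRP via keepalived. The source doesn’t name the tool, so the following is a sketch under that assumption, with a made-up interface name and VIP:

```
# /etc/keepalived/keepalived.conf — sketch for the MASTER of an LB pair
# (keepalived itself, the interface name and the VIP are assumptions)
vrrp_instance api_lb {
    state MASTER            # the peer runs state BACKUP
    interface eth1          # private network carrying the VRRP traffic
    virtual_router_id 51
    priority 100            # the peer uses a lower priority, e.g. 50
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # the VIP the domain name points to
    }
}
```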

Private networks

They exist only to carry VRRP traffic between the public-facing nodes:

  • The private network is at layer 3.

  • The public network is at layer 4.

VRRP traffic can’t be routed: VRRP advertisements are sent to a link-local multicast address (224.0.0.18) with a TTL of 255, which is why a dedicated private network is needed between the LB peers.