OpenShift is a family of containerization software developed by Red Hat. Its flagship product is the OpenShift Container Platform—an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family’s other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to CentOS), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

— Wikipedia article on OpenShift, 2019-06-20

OpenShift can be considered a distribution of Kubernetes. While Kubernetes gives an administrator only the building blocks for a cluster, OpenShift adds ready-made solutions for common concerns such as security, networking, handling of ingress traffic, and much more.

Getting started with OpenShift

The following list contains links to guides/tutorials explaining how to work with OpenShift as a user.

An overview of all clusters managed by VSHN can be found in two locations. The view in the Control Panel lists all clusters along with the URL of the corresponding Web Console. VSHN customers with access to at least one OpenShift cluster are also able to see this page.

For day-to-day operations, and especially for conducting maintenance, the overview in the VSHN wiki is used.

General architecture and architectural principles

To understand the general architecture of OpenShift, please refer to the official documentation by Red Hat.

Hosts and their roles

On the infrastructure level, a VSHN-managed OpenShift cluster has six roles for hosts: master load balancer, application load balancer, master, node, storage node, and infra node. A host can potentially serve in multiple roles.

Master Load Balancer (masterlb)

Our clusters have two nodes which serve as the master load balancers. These nodes aren’t part of the OpenShift cluster itself, but serve a support role by load-balancing traffic going to the OpenShift masters.

The masterlbs share a virtual IP address to accept external traffic to the OpenShift console and API and load balance this traffic to one of the OpenShift masters. For external traffic, SSL is terminated here. Let’s Encrypt can be used to automatically generate and renew appropriate certificates.
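The SSL termination and load balancing to the masters could look roughly like the following HAProxy excerpt. This is a minimal sketch: the hostnames, ports, and certificate path are illustrative assumptions, not the actual VSHN configuration.

```haproxy
# Hypothetical haproxy.cfg excerpt on a masterlb: terminate SSL for
# external console/API traffic and balance it across the three masters.
frontend openshift-master-external
    bind *:443 ssl crt /etc/haproxy/certs/console.example.com.pem  # Let's Encrypt cert (example path)
    mode http
    default_backend openshift-masters

backend openshift-masters
    mode http
    balance roundrobin
    # Re-encrypt towards the masters; health checks remove failed masters.
    server master1 master1.example.com:443 ssl verify none check
    server master2 master2.example.com:443 ssl verify none check
    server master3 master3.example.com:443 ssl verify none check
```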

Additionally, the masterlbs share a virtual IP address to accept internal traffic to the OpenShift API and load balance this traffic to the masters. SSL termination isn’t required here, as the OpenShift nodes know and trust the masters’ certificates.

A third virtual IP address, which is configured as the default gateway on all other nodes, is also shared between the masterlbs.

The Let’s Encrypt support is implemented using Nginx. The SSL termination is implemented with HAProxy. The VIP sharing is implemented using Keepalived and iptables.
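The VIP sharing between the two masterlbs can be sketched with a Keepalived VRRP instance like the one below. The interface name, router ID, priority, and addresses are placeholders (documentation IP ranges), and a real setup might split the three VIPs across separate VRRP instances.

```
# Hypothetical keepalived.conf excerpt on a masterlb
vrrp_instance masterlb_vips {
    state BACKUP          # both masterlbs start as BACKUP; priority decides the master
    interface eth0        # example interface
    virtual_router_id 51  # example ID; must match on both masterlbs
    priority 100          # the peer would use a different priority
    virtual_ipaddress {
        192.0.2.10        # external VIP for console/API (example)
        198.51.100.10     # internal VIP for the API (example)
        198.51.100.1      # default-gateway VIP for the other nodes (example)
    }
}
```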

Master (master)

The masters are a set of three hosts running the OpenShift control plane. This includes the API server, the controller manager server and etcd. While etcd could be installed on a separate set of hosts, all VSHN-managed OpenShift clusters co-locate the control plane and etcd on the same set of hosts.

HAProxy isn’t present on the master nodes, as HAProxy for the control plane is running on the masterlbs in VSHN-managed OpenShift clusters.

Application Load Balancer (lb)

The load balancer nodes are two OpenShift compute nodes dedicated to the OpenShift router pods, which route incoming traffic to the applications and services running on the cluster. Incoming traffic is accepted on two virtual IPs shared between the lbs with Keepalived. Some IaaS providers must be informed when an IP address switches between virtual machines; for those cases we’ve developed a tool named Ursula, which runs on these nodes and takes care of this.

Node (node)

Nodes provide the actual runtime and compute resources on which user workloads are scheduled and executed. The documentation for OpenShift 3.10 and 3.11 refers to these hosts by the node group node-config-compute.
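In an OpenShift Ansible inventory for 3.10/3.11, compute nodes are assigned this node group roughly as follows; the hostnames are examples, not real cluster members.

```ini
# Hypothetical openshift-ansible inventory excerpt
[nodes]
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
```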

Clusters accessible to more than one customer ("public" or "shared" clusters) may also have dedicated nodes, which run only a single customer’s workloads. These are otherwise regular compute nodes carrying an additional customer label that identifies the customer they belong to. If a customer has dedicated nodes, that customer’s workloads are scheduled exclusively on those nodes.
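Such scheduling can be expressed with the OpenShift 3.x project node selector, matching the customer label on the dedicated nodes. The namespace name and label value below are made-up examples; only the label key customer comes from the text above.

```yaml
# Hypothetical namespace manifest pinning a customer's workloads
# to their dedicated nodes via the project node selector.
apiVersion: v1
kind: Namespace
metadata:
  name: examplecorp-prod
  annotations:
    openshift.io/node-selector: customer=examplecorp
```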

Infra (infra)

Each VSHN-managed cluster has at least one infra host, a service host used to manage the cluster. The infra host provides one or more of the following functions:

  • An SSH jump host to get shell access to a cluster’s nodes.

  • An Ansible host carrying the inventory and all the Ansible code required to manage the cluster using OpenShift Ansible.

  • An Icinga satellite to monitor the cluster.

Older clusters can have two infra nodes. In this case one infra node serves as the SSH jump host and Icinga satellite, while the other node serves as the Ansible master. Newer clusters have only one infra node providing all three functions.
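Reaching cluster nodes through the infra jump host can be captured in a local SSH client configuration like the sketch below; all hostnames and the user are illustrative assumptions.

```
# Hypothetical ~/.ssh/config excerpt: reach cluster nodes via the infra host
Host infra.cluster.example.com
    User deploy

# All other cluster hosts are reached through the jump host
Host *.cluster.example.com !infra.cluster.example.com
    User deploy
    ProxyJump infra.cluster.example.com
```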

The Red Hat OpenShift documentation also describes infra nodes: nodes dedicated to running hosted-infrastructure workloads such as cluster logging or cluster metrics. Our setup doesn’t have such nodes; these workloads are scheduled on general compute nodes.

Storage (storage)

We prefer to consume storage from the IaaS. However, some IaaS providers don’t offer a storage API, or OpenShift lacks a compatible driver for it. Where consuming storage from the IaaS isn’t possible, we provision a Gluster cluster with three nodes.
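When such a Gluster cluster is provisioned, workloads would typically consume it through a StorageClass using the in-tree GlusterFS provisioner. The class name and the Heketi endpoint below are placeholder assumptions (dynamic provisioning via a Heketi API is assumed).

```yaml
# Hypothetical StorageClass for dynamically provisioned GlusterFS volumes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"  # Heketi API endpoint (example)
  restauthenabled: "false"
```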


Networking

The networking setup for a VSHN-managed OpenShift cluster differs between IaaS providers. By default, all hosts are attached to a public and a private network, as shown in the following network diagram.