Limitations on Exoscale
A user-provisioned infrastructure (UPI) installation on Exoscale, as documented here, comes with several limitations, which are outlined below.
One of the network topology requirements for a UPI installation is a load balancer which is accessible internally to the cluster. This load balancer is used to access both the Kubernetes API and the ignition endpoint.
There are several managed offerings that could be considered to implement this on Exoscale.
Using a managed EIP would allow configuring health checks and therefore automating the failover from one master to another.
However, instances sharing a common EIP can't communicate with each other via the EIP address.
This rules out managed EIPs for the API load balancer: the master nodes wouldn't be able to reach the API through the EIP, which would prevent pods running on them from accessing it.
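Absent a suitable managed offering, this internal load balancer can be self-managed, for example with HAProxy running on a dedicated instance. A minimal sketch of such a configuration follows; all instance names and addresses are placeholders, not values from an actual setup:

```haproxy
# Internal load balancer for the Kubernetes API (6443) and the
# machine config server that serves the ignition configs (22623).
# Server names and addresses below are placeholders.
frontend kubernetes_api
    mode tcp
    bind *:6443
    default_backend kubernetes_api

backend kubernetes_api
    mode tcp
    balance roundrobin
    server bootstrap 10.0.0.5:6443 check
    server master-0 10.0.0.10:6443 check
    server master-1 10.0.0.11:6443 check
    server master-2 10.0.0.12:6443 check

frontend machine_config_server
    mode tcp
    bind *:22623
    default_backend machine_config_server

backend machine_config_server
    mode tcp
    balance roundrobin
    server bootstrap 10.0.0.5:22623 check
    server master-0 10.0.0.10:22623 check
    server master-1 10.0.0.11:22623 check
    server master-2 10.0.0.12:22623 check
```

Traffic is passed through as plain TCP so that TLS terminates on the masters; the bootstrap entry can be removed once bootstrapping is complete.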
To bring traffic into the cluster, a load balancer for the ingress router pods is required. It needs to be accessible both from inside and from outside the cluster. This again rules out a managed EIP, as described in Managed Elastic IPs.
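Such an ingress load balancer, if self-managed, would forward ports 80 and 443 as plain TCP to the worker nodes running the router pods. A hedged HAProxy sketch, with placeholder worker names and addresses:

```haproxy
# Load balancer for the ingress router pods.
# Ports 80/443 are forwarded as TCP; addresses are placeholders.
frontend ingress_http
    mode tcp
    bind *:80
    default_backend ingress_http

backend ingress_http
    mode tcp
    balance source
    server worker-0 10.0.0.20:80 check
    server worker-1 10.0.0.21:80 check

frontend ingress_https
    mode tcp
    bind *:443
    default_backend ingress_https

backend ingress_https
    mode tcp
    balance source
    server worker-0 10.0.0.20:443 check
    server worker-1 10.0.0.21:443 check
```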
An NLB could be a solution here, provided an instance pool could be used for the worker nodes.
An instance pool can only be used if all of its instances share the same configuration.
Currently, we use a custom ignition config per instance to set the hostname (by writing it to
/etc/hostname), because the Exoscale DHCP server doesn't provide one.
This is required because the hostname is used as the name of the node the kubelet bootstraps.
We therefore can't use instance pools, since all the nodes would end up with the same name.
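To illustrate why the configs differ per instance, the following sketch renders a minimal Ignition config (spec version 3.1.0) that writes a given hostname to /etc/hostname. The hostname value and the helper name are illustrative, not part of the actual tooling:

```python
import json

def hostname_ignition(hostname: str) -> str:
    """Render a minimal Ignition config that writes the node's hostname
    to /etc/hostname, since the Exoscale DHCP server doesn't provide one.
    The hostname differs per instance, so every node needs its own config."""
    config = {
        "ignition": {"version": "3.1.0"},
        "storage": {
            "files": [
                {
                    "path": "/etc/hostname",
                    "mode": 420,  # octal 0644
                    "overwrite": True,
                    # Ignition file contents as an inline data URL
                    "contents": {"source": "data:," + hostname},
                }
            ]
        },
    }
    return json.dumps(config)

# One config per node -- the hostnames (and therefore the configs) differ,
# which is exactly what rules out instance pools.
print(hostname_ignition("master-0"))
```

Since an instance pool applies one identical user-data payload to every member, there is no way to vary the hostname field across pool instances.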
Neither managed EIPs nor NLBs support HTTPS health checks. This prevents us from using them for the K8s API, ignition config, and ingress router load balancers alike. A feature request has been filed, and Exoscale is working on it (no ETA).
Since private networks can only be configured in addition to a public IP, nodes with only a private IP can't be created.