Migrate to Cilium CNI

Prerequisites

  • cluster-admin privileges

  • kubectl

  • jq

  • curl

  • Working commodore command

Prepare for migration

Make sure that your $KUBECONFIG points to the cluster you want to migrate before starting.
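To double-check that you're working against the right cluster, you can for example inspect the current kubectl context:

    kubectl config current-context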
  1. Create alertmanager silence

    silence_id=$(
        kubectl --as=cluster-admin -n openshift-monitoring exec \
        sts/alertmanager-main -- amtool --alertmanager.url=http://localhost:9093 \
        silence add alertname!=Watchdog --duration="3h" -c "cilium migration"
    )
    echo $silence_id
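    To confirm that the silence was created, list the active silences and check that the new ID appears:

    kubectl --as=cluster-admin -n openshift-monitoring exec sts/alertmanager-main -- \
        amtool --alertmanager.url=http://localhost:9093 silence query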
  2. Select cluster

    export CLUSTER_ID=c-cluster-id-1234 (1)
    export COMMODORE_API_URL=https://api.syn.vshn.net (2)
    export TENANT_ID=$(curl -sH "Authorization: Bearer $(commodore fetch-token)" \
      "${COMMODORE_API_URL}/clusters/${CLUSTER_ID}" | jq -r '.tenant')
    1 Replace with the Project Syn cluster ID of the cluster to migrate
    2 Replace with the Lieutenant API on which the cluster is registered
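    As a quick sanity check, confirm that the tenant lookup succeeded before continuing:

    echo "Migrating cluster ${CLUSTER_ID} (tenant ${TENANT_ID})"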
  3. Disable ArgoCD auto sync for components openshift4-nodes and openshift-upgrade-controller. We also disable auto sync on the root app so that Argo CD doesn't immediately revert these changes.

    kubectl --as=cluster-admin -n syn patch apps root --type=json \
      -p '[{"op":"replace", "path":"/spec/syncPolicy", "value": {}}]'
    kubectl --as=cluster-admin -n syn patch apps openshift4-nodes --type=json \
      -p '[{"op":"replace", "path":"/spec/syncPolicy", "value": {}}]'
    kubectl --as=cluster-admin -n syn patch apps openshift-upgrade-controller --type=json \
      -p '[{"op":"replace", "path":"/spec/syncPolicy", "value": {}}]'
  4. Disable the cluster-network-operator. This ensures that the operator can't interfere while we migrate to Cilium. We also scale down the upgrade controller so that we can patch the ClusterVersion object.

    kubectl --as=cluster-admin -n appuio-openshift-upgrade-controller \
      scale deployment openshift-upgrade-controller-controller-manager --replicas=0
    kubectl --as=cluster-admin patch clusterversion version \
      --type=merge \
      -p '
      {"spec":{"overrides":[
        {
          "kind": "Deployment",
          "group": "apps",
          "name": "network-operator",
          "namespace": "openshift-network-operator",
          "unmanaged": true
        }
      ]}}'
    kubectl --as=cluster-admin -n openshift-network-operator \
      scale deploy network-operator --replicas=0
  5. Verify that the network operator has been scaled down.

    kubectl -n openshift-network-operator get pods (1)
    1 This should return No resources found in openshift-network-operator namespace.

    If the operator is still running, check the following conditions:

    • The APPUiO OpenShift upgrade controller must be scaled down.

    • The ClusterVersion object must have an override to make the network operator deployment unmanaged.
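    Both conditions can be checked directly; the first command should print 0, the second should show the override for the network-operator deployment:

    kubectl -n appuio-openshift-upgrade-controller get deployment \
      openshift-upgrade-controller-controller-manager -o jsonpath='{.spec.replicas}{"\n"}'
    kubectl get clusterversion version -o jsonpath='{.spec.overrides}{"\n"}'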

  6. Remove network operator applied state

    kubectl --as=cluster-admin -n openshift-network-operator \
      delete configmap applied-cluster
  7. Pause all machine config pools

    for mcp in $(kubectl get mcp -o name); do
      kubectl --as=cluster-admin patch $mcp --type=merge -p '{"spec": {"paused": true}}'
    done
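    Verify that all pools report paused:

    kubectl get mcp -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused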

Migrate to Cilium

  1. Get local cluster working directory

    commodore catalog compile "$CLUSTER_ID" (1)
    1 We recommend switching to an empty directory to run this command. Alternatively, switch to your existing directory for the cluster.
  2. Enable component cilium

    pushd inventory/classes/"${TENANT_ID}"
    yq -i '.applications += "cilium"' "${CLUSTER_ID}.yml"
  3. Update upstreamRules for monitoring

    yq -i ".parameters.openshift4_monitoring.upstreamRules.networkPlugin = \"cilium\"" \
      "${CLUSTER_ID}.yml"
  4. Update component networkpolicy config

    yq eval -i '.parameters.networkpolicy.networkPlugin = "cilium"' \
      "${CLUSTER_ID}.yml"
    yq eval -i '.parameters.networkpolicy.ignoredNamespaces = ["openshift-oauth-apiserver"]' \
      "${CLUSTER_ID}.yml"
  5. Configure component cilium. We explicitly configure the K8s API endpoint to ensure that the Cilium operator doesn’t access the API through the cluster network during the migration.

    When running Cilium with kubeProxyReplacement=partial, the API endpoint configuration can be removed after the migration is completed.
    Explicitly configure the K8s API endpoint
    yq -i '.parameters.cilium.cilium_helm_values.k8sServiceHost="api-int.${openshift:baseDomain}"' \
      "${CLUSTER_ID}.yml" (1)
    yq -i '.parameters.cilium.cilium_helm_values.k8sServicePort="6443"' \
      "${CLUSTER_ID}.yml"
    1 On vSphere clusters, you may need to use api.${openshift:baseDomain}.
    Configure the cluster Pod CIDR and per-node mask size
    POD_CIDR=$(kubectl get network.config cluster \
      -o jsonpath='{.spec.clusterNetwork[0].cidr}')
    HOST_PREFIX=$(kubectl get network.config cluster \
      -o jsonpath='{.spec.clusterNetwork[0].hostPrefix}')
    
    yq -i '.parameters.cilium.cilium_helm_values.ipam.operator.clusterPoolIPv4MaskSize = "'"${HOST_PREFIX}"'"' \
      "${CLUSTER_ID}.yml"
    yq -i '.parameters.cilium.cilium_helm_values.ipam.operator.clusterPoolIPv4PodCIDR = "'"${POD_CIDR}"'"' \
      "${CLUSTER_ID}.yml"
  6. Commit changes

    git commit -am "Migrate ${CLUSTER_ID} to Cilium"
    git push origin master
    popd
  7. Compile catalog

    commodore catalog compile "${CLUSTER_ID}"
  8. Patch cluster network config

    kubectl --as=cluster-admin patch network.config cluster \
      --type=merge -p '{"spec":{"networkType":"Cilium"},"status":null}'
    kubectl --as=cluster-admin patch network.operator cluster \
      --type=merge -p '{"spec":{"defaultNetwork":{"type":"Cilium"}},"status":null}'
  9. Apply Cilium manifests. We need to run the apply twice: the first apply fails to create the CiliumConfig resource because its CRD isn't registered yet; the second apply succeeds once it is.

    kubectl --as=cluster-admin apply -Rf catalog/manifests/cilium/
    kubectl --as=cluster-admin apply -Rf catalog/manifests/cilium/
  10. Wait until Cilium CNI is up and running

    kubectl -n cilium get pods -w
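    Once the agent pods are Ready, you can additionally ask an agent for a brief health summary. This sketch assumes the component deploys the agent DaemonSet as cilium in the cilium namespace, as the pod listing above suggests:

    kubectl --as=cluster-admin -n cilium exec ds/cilium -- cilium status --brief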

Finalize migration

  1. Re-enable cluster network operator

    This will remove the previously active CNI plugin and will deploy the kube-proxy daemonset. As soon as you complete this step, existing pods may go into CrashLoopBackOff since they were started with CNI IPs managed by the old network plugin.

    kubectl --as=cluster-admin -n openshift-network-operator \
      scale deployment network-operator --replicas=1
    kubectl --as=cluster-admin patch clusterversion version \
     --type=merge -p '{"spec":{"overrides":null}}'
  2. Unpause MCPs

    for mcp in $(kubectl get mcp -o name); do
      kubectl --as=cluster-admin patch $mcp --type=merge -p '{"spec":{"paused":false}}'
    done

    You may need to grab the cluster-admin credentials to complete this step since the OpenShift OAuth components may be unavailable until they’re restarted with Cilium-managed IPs.

    It may be necessary to force drain nodes manually to allow the machine-config-operator to reboot the nodes. Use kubectl --as=cluster-admin drain with --ignore-daemonsets --delete-emptydir-data --force --disable-eviction to circumvent PDB violations if necessary, as sketched below.

    Start with a master node, and ensure that the machine-config-operator is running on that master node after it’s been drained and rebooted.
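    A minimal sketch of such a forced drain, where <node> is a placeholder for the node that's stuck:

    kubectl --as=cluster-admin drain <node> \
      --ignore-daemonsets --delete-emptydir-data --force --disable-eviction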

  3. Compile and push catalog

    commodore catalog compile "${CLUSTER_ID}" --push
  4. Re-enable ArgoCD auto sync

    kubectl --as=cluster-admin -n syn patch apps root --type=json \
      -p '[{
        "op":"replace",
        "path":"/spec/syncPolicy",
        "value": {"automated": {"prune": true, "selfHeal": true}}
      }]'
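    Verify that auto sync is active again on the root app:

    kubectl --as=cluster-admin -n syn get apps root -o jsonpath='{.spec.syncPolicy.automated}{"\n"}'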

Cleanup alert silence

  1. Expire alertmanager silence

    kubectl --as=cluster-admin -n openshift-monitoring exec sts/alertmanager-main -- \
        amtool --alertmanager.url=http://localhost:9093 silence expire $silence_id