If you deploy management clusters and Tanzu Kubernetes clusters to vSphere, Tanzu Kubernetes Grid versions earlier than v1.2.0 required you to deploy an HA Proxy API server load balancer OVA template, named photon-3-haproxy-v1.x.x-vmware.1.ova. Since v1.2.0, new Tanzu Kubernetes clusters that you deploy use Kube-VIP instead of HA Proxy as their internal load balancer, so HA Proxy is no longer required.

When you upgrade clusters that you deployed with Tanzu Kubernetes Grid v1.0.x or v1.1.x to v1.2.x, you must perform manual steps to update each cluster so that it uses Kube-VIP as its load balancer rather than HA Proxy. Perform these steps after you upgrade the clusters to v1.2.x, on all management clusters and on any Tanzu Kubernetes clusters that implement high availability by having more than one control plane node. Updating your clusters to use Kube-VIP rather than HA Proxy allows you to remove the HA Proxy VM from each cluster that you deployed with the older version, which reduces resource consumption.

IMPORTANT: HA Proxy is not supported in Tanzu Kubernetes Grid v1.3. Before you can upgrade from Tanzu Kubernetes Grid v1.2.x to v1.3, you must perform the steps in this procedure to migrate all clusters from HA Proxy to Kube-VIP.

Prerequisites

  • You have deployed management clusters and HA-enabled Tanzu Kubernetes clusters to vSphere with Tanzu Kubernetes Grid v1.0.x or v1.1.x.
  • You have followed the procedures in Upgrade Management Clusters and Upgrade Tanzu Kubernetes Clusters to upgrade the management clusters and Tanzu Kubernetes clusters to Tanzu Kubernetes Grid v1.2.x.

Procedure

In all of the steps in this procedure, replace CLUSTER_NAME with the name of the cluster that you are updating. Make sure that you retain any resource name suffixes, such as *-tkg-system or *-control-plane, in the commands.

If the cluster is running in a namespace other than the default namespace, you must specify the -n option to identify the namespace. In all commands, replace NAMESPACE with the namespace in which the cluster is running. If the cluster is running in the default namespace, you can omit the -n NAMESPACE option.

  1. Set the context of kubectl to the context of the management cluster or Tanzu Kubernetes cluster that you upgraded.

    kubectl config use-context CLUSTER_NAME-admin@CLUSTER_NAME
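
    For example, for a hypothetical management cluster named my-mgmt-cluster, the command would be:

    kubectl config use-context my-mgmt-cluster-admin@my-mgmt-cluster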
    
  2. Obtain the address of the existing HA Proxy VM for the cluster.

    kubectl get haproxyloadbalancer CLUSTER_NAME-tkg-system -n NAMESPACE -o template='{{.status.address}}'
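
    The command prints the IP address of the HA Proxy VM. For example, for a hypothetical cluster named my-mgmt-cluster running in the default namespace, the command and its output might look as follows, where the address is illustrative only:

    kubectl get haproxyloadbalancer my-mgmt-cluster-tkg-system -o template='{{.status.address}}'
    10.180.104.53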
    
  3. In a text editor, create a file named patch.yaml and paste the YAML below into it.

    Replace <HAPROXY_IP> with the address of the HA Proxy load balancer VM that you obtained in the preceding step.

    spec:
      kubeadmConfigSpec:
        files:
        - content: |
            apiVersion: v1
            kind: Pod
            metadata:
              creationTimestamp: null
              name: kube-vip
              namespace: kube-system
            spec:
              containers:
              - args:
                - start
                env:
                - name: vip_arp
                  value: "true"
                - name: vip_leaderelection
                  value: "true"
                - name: vip_address
                  value: <HAPROXY_IP>
                - name: vip_interface
                  value: eth0
                - name: vip_leaseduration
                  value: "15"
                - name: vip_renewdeadline
                  value: "10"
                - name: vip_retryperiod
                  value: "2"
                image: registry.tkg.vmware.run/kube-vip:v0.1.8_vmware.1
                imagePullPolicy: IfNotPresent
                name: kube-vip
                resources: {}
                securityContext:
                  capabilities:
                    add:
                    - NET_ADMIN
                    - SYS_TIME
                volumeMounts:
                - mountPath: /etc/kubernetes/admin.conf
                  name: kubeconfig
              hostNetwork: true
              volumes:
              - hostPath:
                  path: /etc/kubernetes/admin.conf
                  type: FileOrCreate
                name: kubeconfig
            status: {}
          owner: root:root
          path: /etc/kubernetes/manifests/kube-vip.yaml
    

    NOTE: If you are performing this procedure in an Internet-restricted environment, you must update image: registry.tkg.vmware.run/kube-vip:v0.1.8_vmware.1 to image: your.private.registry.address/kube-vip:v0.1.8_vmware.1.

  4. Apply the patch.yaml file to the KubeadmControlPlane resource in the cluster.

    Make sure that you retain the *-control-plane suffix after the cluster name.

    kubectl patch kcp CLUSTER_NAME-control-plane -n NAMESPACE --type merge --patch "$(cat patch.yaml)"
    

    When you apply this patch, the kcp-controller-manager pod in the cluster detects a change and starts creating new machines with the updated specifications.
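
    You can optionally watch the rollout by checking the KubeadmControlPlane object that you patched; the replica counts shown in the output change as the old control plane machines are replaced:

    kubectl get kcp CLUSTER_NAME-control-plane -n NAMESPACE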

  5. Identify the most recently created machine and see its state.

    kubectl get ma | grep $(kubectl get ma --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1:].metadata.name}")
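
    Alternatively, you can list all of the machines in the cluster sorted by creation time and check the phase of the newest control plane machine:

    kubectl get ma -n NAMESPACE --sort-by=.metadata.creationTimestamp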
    
  6. When the newly created control plane machine is in the Running state, delete the HA Proxy load balancer from the cluster.

    Make sure that you retain the *-tkg-system suffix after the cluster name.

    kubectl delete haproxyloadbalancer CLUSTER_NAME-tkg-system -n NAMESPACE
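
    After the deletion completes, running the get command again returns a NotFound error, which confirms that the load balancer object has been removed:

    kubectl get haproxyloadbalancer CLUSTER_NAME-tkg-system -n NAMESPACE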
    
  7. Edit the vsphereCluster object for the cluster.

    The following command opens the object in the default editor for your system.

    kubectl edit vspherecluster CLUSTER_NAME -n NAMESPACE
    
  8. Remove the loadBalancerRef reference and save the file.

    Remove the following lines from the file:

      loadBalancerRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: HAProxyLoadBalancer
        name: '${ CLUSTER_NAME }-${ NAMESPACE }'
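
    One way to confirm that the reference has been removed is to check that the following command returns no output:

    kubectl get vspherecluster CLUSTER_NAME -n NAMESPACE -o yaml | grep loadBalancerRef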
    
  9. When all of the new VMs are in the Running state, make a static IP reservation on your DHCP server for the Kube-VIP address that you specified in patch.yaml, which is the address formerly used by the HA Proxy VM.

    Kube-VIP requires a static address for the API server endpoint. Use an auto-generated MAC address when you make the DHCP reservation for Kube-VIP, so that the DHCP server does not assign this IP address to other machines. For instructions on how to configure DHCP reservations, see your DHCP server documentation.
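
    Because the Kube-VIP address in patch.yaml is the same address that the HA Proxy VM used, the API server endpoint in the cluster's kubeconfig should not change. As a simple check that the endpoint is still reachable, you can run a command against the cluster, for example:

    kubectl get nodes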

Your upgraded clusters now use Kube-VIP as their API server load balancer.
