If the ESXi host is running Tanzu Kubernetes worker node VMs that are configured with the SR-IOV network adapter, or if the worker node VMs cannot be migrated to another ESXi host, manually power off the worker node VMs and then upgrade ESXi.

To download ESXi, see the Telco Cloud Platform RAN Essentials 4.0 or 4.0.1 product downloads page.

Procedure

  1. For each ESXi host in the vSphere cluster, identify the Tanzu Kubernetes worker node VMs that use the SR-IOV network adapter. An optional cross-check from the Kubernetes side is sketched after the note below.
    Note:

    Make a note of the names of the worker node VMs on the ESXi host.
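
    In the vSphere Client, these VMs have a network adapter of the SR-IOV passthrough type. As an optional cross-check from the Kubernetes side, assuming the SR-IOV network device plugin advertises node resources whose names contain "sriov" (resource names depend on your device plugin configuration) and that the jq utility is available, list the worker nodes that expose such resources in each workload cluster and match the node names to the VM names on the ESXi host:
    kubectl get nodes -o json | jq -r '.items[] | select(any((.status.allocatable // {}) | keys[]; test("sriov"))) | .metadata.name'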

  2. Log in to the TCA-CP VM as the admin user and switch to the root user.
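    For example, over SSH (the address below is a placeholder for your TCA-CP VM; the su command prompts for the root password):
    ssh admin@<tca-cp-vm-address>
    su -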
  3. For each worker node VM with the SR-IOV network adapter, identify the Tanzu Kubernetes workload cluster that the worker node VM is part of and then drain the pods of the worker node VM:
    1. Open the TCA command line utility by running these commands in sequence:
      ccliPodName=`kubectl get pods -A | grep ccli | awk '{ print $2 }'`
      kubectl exec --stdin --tty $ccliPodName -n tca-cp-cn -- ccli-shell
    2. List all the Tanzu Kubernetes management clusters in the bootstrapper.
      ccli>>list mc
      Note:

      Make a note of the index number of the management cluster in which you want to look for the worker node VM. This cluster is referred to as MC1 in the following steps.

    3. Go to the Tanzu Kubernetes management cluster MC1.
      ccli>>go <index number of the Tanzu Kubernetes management cluster>
    4. List the Tanzu Kubernetes workload clusters managed by the management cluster MC1.
      ccli>>list wc
      Note:

      Make a note of the index number of the workload cluster in which you want to look for the worker node VM. This cluster is referred to as WC1 in the following steps.

    5. List the names and namespaces of all workload clusters managed by the management cluster MC1.
      kubectl get clusters -A
      Note:

      Make a note of the name and namespace of the workload cluster WC1, in which you want to look for the worker node VM. You need these values for the kubectl patch command in Step 3h.

    6. Go to the workload cluster WC1 managed by the management cluster MC1.
      ccli>>go <index number of Tanzu Kubernetes workload cluster>
    7. List all the worker node VMs in the workload cluster WC1 and identify whether the list shows the worker node VM that you are looking for.
      ccli>>list nodes
      • If the worker node VM that you are looking for is listed in the workload cluster WC1, skip to Step 3h.

      • If the worker node VM is not listed, repeat Steps 3f and 3g for the remaining workload clusters managed by the management cluster MC1.

      • If the worker node VM is not listed in any workload cluster managed by the management cluster MC1, repeat Steps 3c-3g for the remaining management clusters in the bootstrapper, or use the optional shortcut sketched after this list.
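
      As an optional shortcut, you can look up the cluster directly from each management cluster context. This sketch assumes that the Cluster API Machine objects are named after the worker node VMs, which is typical for Tanzu Kubernetes clusters; the NAMESPACE and CLUSTER columns of the matching row are the values that you need in Step 3h.
      kubectl get machines -A | grep <worker_node_VM_name>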

    8. After identifying the workload cluster that the worker node VM is part of, pause the health check of the worker node VM by running the following commands in sequence:
      ccli>>list mc
      ccli>>go <index number of the management cluster that the worker node VM belongs to>
      kubectl patch cluster <cluster_name> --type merge -p '{"spec": {"paused": true}}' -n <cluster_namespace>

      cluster_name specifies the name of the workload cluster that the worker node VM belongs to.

      cluster_namespace specifies the namespace of the workload cluster that the worker node VM belongs to.
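
      Optionally, verify that the patch took effect before you drain the node. Run the following command in the same management cluster context; it prints true after the patch is applied.
      kubectl get cluster <cluster_name> -n <cluster_namespace> -o jsonpath='{.spec.paused}'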

    9. Drain the pods of the worker node VM by running the following commands in sequence:
      ccli>>list wc
      ccli>>go <index number of the Tanzu Kubernetes workload cluster that the worker node VM belongs to>
      kubectl drain <worker_node_name> --ignore-daemonsets --delete-local-data --force

      worker_node_name specifies the name of the worker node VM.
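
      Optionally, confirm that the drain completed and the node is cordoned; the node STATUS includes SchedulingDisabled.
      kubectl get node <worker_node_name>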

    10. Repeat Steps 3b-3i for the remaining worker node VMs that use the SR-IOV network adapter on the ESXi host.
  4. Power off all the worker node VMs that use the SR-IOV network adapter on the ESXi host.
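    You can power off the VMs from the vSphere Client. Alternatively, if the govc CLI is installed and configured against your vCenter Server (an assumption; govc is not part of this procedure), a minimal sketch is:
    # Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point to your vCenter Server.
    govc vm.power -off <worker_node_VM_name>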
  5. In Lifecycle Manager, apply the baseline to the ESXi host and remediate. For more information, see Remediating ESXi Hosts Against vSphere Lifecycle Manager Baselines.
    Note:

    If you are using vSphere LCM-enabled clusters, upgrade ESXi hosts by following the instructions in Upgrade a vSphere Lifecycle Manager-enabled Cluster.

  6. After the ESXi host remediation is completed, power on all the worker node VMs that use the SR-IOV network adapter on the ESXi host.
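    If you used the govc sketch in Step 4, the corresponding power-on command is:
    govc vm.power -on <worker_node_VM_name>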
  7. Uncordon each worker node VM by running the following commands in sequence:
    ccli>>list mc
    ccli>>go <index number of the management cluster>
    ccli>>list wc
    ccli>>go <index number of the workload cluster>
    kubectl get nodes
    kubectl uncordon <worker_node_name>
  8. Resume the health check of each worker node VM by running the following command in the management cluster context, as in Step 3h:
    kubectl patch cluster <cluster_name> --type merge -p '{"spec": {"paused": false}}' -n <cluster_namespace>

    cluster_name specifies the name of the workload cluster that the worker node VM belongs to.

    cluster_namespace specifies the namespace of the workload cluster that the worker node VM belongs to.

  9. Upgrade all the drivers and firmware on the ESXi host based on the Telco Cloud Platform RAN 4.0 BOM. For more information, see KB87936. An example esxcli command for installing driver offline bundles is sketched after the note below.
    Note:

    For instructions to upgrade ACC 100, see ACC 100 Support for ESXi Upgrade.
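
    As a minimal sketch, assuming a driver update is delivered as an offline bundle (depot ZIP) that you have copied to a datastore on the host (the path below is a placeholder), install it with esxcli and reboot the host if the update requires it. Firmware updates typically require vendor-specific tools; follow KB87936 and the vendor documentation for those.
    esxcli software vib update -d /vmfs/volumes/<datastore>/<driver-offline-bundle>.zip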

  10. Repeat Steps 1-9 for the remaining ESXi hosts in the vSphere cluster.

Results

The ESXi host is successfully upgraded and rebooted.