As a cloud administrator, if your environment has no Internet connectivity, you create a local content library, manually upload Tanzu Kubernetes releases (TKr) to it, and associate it with the Supervisor.

Deploying NVIDIA-aware AI workloads on TKG clusters requires the use of the Ubuntu edition of Tanzu Kubernetes releases.

Caution: The TKr content library is used across all vSphere namespaces in the Supervisor when you provision new TKG clusters.

Prerequisites

As a cloud administrator, verify that VMware Private AI Foundation with NVIDIA is deployed and configured. See Deploying VMware Private AI Foundation with NVIDIA.

Procedure

  1. Download the Ubuntu-based TKr images for the required Kubernetes versions from https://wp-content.vmware.com/v2/latest/ (see the first sketch after this procedure).
  2. Log in to the vCenter Server instance for the VI workload domain at https://<vcenter_server_fqdn>/ui.
  3. Select Menu > Content Libraries and click Create.
  4. Create a local content library and import the TKr images into it (see the second sketch after this procedure).

    See Create a Local Content Library (for Air-Gapped Cluster Provisioning).

  5. Add the content library to the Supervisor.
    1. Select Menu > Workload Management.
    2. Navigate to the Supervisor for AI workloads.
    3. On the Configure tab, select General.
    4. Next to the Tanzu Kubernetes Grid Service property, click Edit.
    5. On the General page that appears, expand Tanzu Kubernetes Grid Service, and next to Content Library, click Edit.
    6. Select the content library with the TKr images and click OK.
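
For step 1, the following is a minimal Python sketch for staging TKr item files on a machine that has Internet access, before you transfer them to the air-gapped environment. Only the repository URL comes from this procedure; the relative file paths, the staging directory, and the use of the requests library are assumptions for illustration, so substitute the Ubuntu TKr items you actually need.

```python
# Minimal sketch: stage selected Ubuntu TKr item files locally before moving
# them into the air-gapped environment. The file paths below are hypothetical
# placeholders; replace them with the items for your Kubernetes versions.
import os
import requests

BASE_URL = "https://wp-content.vmware.com/v2/latest/"
STAGING_DIR = "tkr-staging"

# Hypothetical relative paths of the TKr item files you need.
ITEM_FILES = [
    "example-ubuntu-tkr-item/example-ubuntu-tkr.ova",  # placeholder
]

for rel_path in ITEM_FILES:
    dest = os.path.join(STAGING_DIR, rel_path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with requests.get(BASE_URL + rel_path, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    print(f"Downloaded {rel_path} -> {dest}")
```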
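For step 4, as an alternative to the vSphere Client steps, you can create the local library and import the staged items from the command line. This sketch assumes the govc CLI is installed and authenticated through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables; the library and datastore names are placeholders.

```python
# Sketch: create a local content library and import staged TKr OVAs by
# invoking the govc CLI. Assumes govc points at the vCenter Server for the
# VI workload domain. Names below are placeholders.
import glob
import subprocess

LIBRARY = "tkr-ubuntu-local"   # placeholder library name
DATASTORE = "vsanDatastore"    # placeholder datastore backing the library

# Create the local (not subscribed) content library on the datastore.
subprocess.run(["govc", "library.create", "-ds", DATASTORE, LIBRARY], check=True)

# Import each staged TKr OVA as a library item.
for ova in sorted(glob.glob("tkr-staging/**/*.ova", recursive=True)):
    subprocess.run(["govc", "library.import", LIBRARY, ova], check=True)
```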
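After completing step 5, you can confirm that the imported Ubuntu TKr versions are visible to the Supervisor. The sketch below assumes kubectl is installed and already logged in to the Supervisor with an authenticated context.

```python
# Sketch: list TanzuKubernetesRelease objects on the Supervisor to confirm
# the imported Ubuntu TKr images are available for cluster provisioning.
import subprocess

result = subprocess.run(
    ["kubectl", "get", "tanzukubernetesreleases"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```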