As a cloud administrator, if your environment has no Internet connectivity, you must provide a local content library, manually upload Tanzu Kubernetes releases (TKr) to it, and associate the library with the Supervisor.
Deploying NVIDIA-aware AI workloads on TKG clusters requires the use of the Ubuntu edition of Tanzu Kubernetes releases.
Caution: The TKr content library is used across all vSphere namespaces in the Supervisor when you provision new TKG clusters.
Prerequisites
As a cloud administrator, verify that VMware Private AI Foundation with NVIDIA is deployed and configured. See Deploying VMware Private AI Foundation with NVIDIA.
Procedure
- Download the Ubuntu-based TKr images with the required Kubernetes versions from https://wp-content.vmware.com/v2/latest/. For a scripted download, see the first sketch after this procedure.
- Log in to the vCenter Server instance for the VI workload domain at https://<vcenter_server_fqdn>/ui.
- In the vSphere Client, select Menu > Content Libraries and click Create.
- Create a local content library and import the TKr images into it. For a scripted way to create the library, see the second sketch after this procedure.
See Create a Local Content Library (for Air-Gapped Cluster Provisioning).
- Add the content library to the Supervisor.
- In the vSphere Client, select Menu > Workload Management.
- Navigate to the Supervisor for AI workloads.
- On the Configure tab, select General.
- Next to the Tanzu Kubernetes Grid Service property, click Edit.
- On the General page that appears, expand Tanzu Kubernetes Grid Service, and next to Content Library, click Edit.
- Select the content library with the TKr images and click OK.
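If you prefer to script the TKr download in the first step, the following Python sketch fetches the published library index and saves the files of the Ubuntu items locally. It is a minimal sketch only: the index layout (lib.json referencing items.json, and the itemsHref, selfHref, files, and hrefs field names) and the tkr-downloads target folder are assumptions, so compare them against the JSON that the endpoint actually returns before relying on the script.

```python
# Minimal sketch: download Ubuntu TKr items from the published content library index.
# Assumption: the endpoint follows the content library publishing layout
# (lib.json -> items.json -> per-item file lists); verify the field names
# against the actual JSON before relying on this.
import os
from urllib.parse import urljoin

import requests

BASE_URL = "https://wp-content.vmware.com/v2/latest/"
DEST_DIR = "tkr-downloads"  # local target folder (assumption)

session = requests.Session()

# lib.json describes the published library; "itemsHref" is an assumed field name.
lib = session.get(urljoin(BASE_URL, "lib.json"), timeout=60).json()
items_url = urljoin(BASE_URL, lib.get("itemsHref", "items.json"))
items = session.get(items_url, timeout=60).json().get("items", [])

os.makedirs(DEST_DIR, exist_ok=True)
for item in items:
    name = item.get("name", "")
    if "ubuntu" not in name.lower():
        continue  # keep only the Ubuntu edition TKr images
    # Some publishers list files directly on the item, others behind a selfHref
    # (both field names are assumptions about the publishing layout).
    detail = item
    if "selfHref" in item:
        detail = session.get(urljoin(items_url, item["selfHref"]), timeout=60).json()
    for file_entry in detail.get("files", []):
        for href in file_entry.get("hrefs", []):  # "hrefs" is an assumed field name
            file_url = urljoin(items_url, href)
            target = os.path.join(DEST_DIR, name, os.path.basename(href))
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with session.get(file_url, stream=True, timeout=600) as resp:
                resp.raise_for_status()
                with open(target, "wb") as out:
                    for chunk in resp.iter_content(chunk_size=1 << 20):
                        out.write(chunk)
            print(f"Downloaded {target}")
```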
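The local content library can also be created programmatically instead of through the Create wizard. The sketch below uses the vSphere Automation SDK for Python; the vCenter Server FQDN, the credentials, the datastore ID, and the library name are placeholders for values from your environment, and importing the downloaded TKr images is still performed in the vSphere Client as described in Create a Local Content Library (for Air-Gapped Cluster Provisioning).

```python
# Minimal sketch: create a local content library with the vSphere Automation SDK
# for Python (from the vsphere-automation-sdk-python project). Server, credentials,
# datastore ID, and library name are placeholders for your environment.
import uuid

import requests
import urllib3
from com.vmware.content_client import LibraryModel
from com.vmware.content.library_client import StorageBacking
from vmware.vapi.vsphere.client import create_vsphere_client

urllib3.disable_warnings()      # only if you skip certificate verification
session = requests.Session()
session.verify = False          # use proper certificates in production

client = create_vsphere_client(
    server="vcenter.example.com",           # placeholder FQDN
    username="administrator@vsphere.local",
    password="********",
    session=session,
)

# Back the library with a datastore in the VI workload domain (placeholder ID).
backing = StorageBacking(type=StorageBacking.Type.DATASTORE, datastore_id="datastore-11")

spec = LibraryModel(
    name="tkr-ubuntu-local",                # placeholder library name
    description="Ubuntu TKr images for air-gapped TKG cluster provisioning",
    type=LibraryModel.LibraryType.LOCAL,
    storage_backings=[backing],
)

library_id = client.content.LocalLibrary.create(
    create_spec=spec,
    client_token=str(uuid.uuid4()),         # idempotency token
)
print(f"Created local content library {library_id}")
```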
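After you associate the content library with the Supervisor, you can confirm that the Ubuntu TKr images are available by listing the TanzuKubernetesRelease objects. The sketch below uses the Python Kubernetes client and assumes that your kubeconfig already contains a Supervisor context created with kubectl vsphere login; the API group and version shown are the ones commonly used for TKr and might differ in your environment.

```python
# Minimal sketch: list the Tanzu Kubernetes releases visible on the Supervisor.
# Assumes kubectl vsphere login has already populated your kubeconfig with a
# Supervisor context; the TKr API group/version below may differ in your release.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context (Supervisor)

api = client.CustomObjectsApi()
tkrs = api.list_cluster_custom_object(
    group="run.tanzu.vmware.com",
    version="v1alpha3",                  # assumption: adjust to your TKr API version
    plural="tanzukubernetesreleases",
)

for tkr in tkrs.get("items", []):
    name = tkr["metadata"]["name"]
    k8s_version = tkr.get("spec", {}).get("version", "unknown")
    # Ubuntu editions are typically identifiable from the release name.
    if "ubuntu" in name.lower():
        print(f"{name}  (Kubernetes {k8s_version})")
```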