If you plan to support DevOps engineers in deploying a deep learning VM or a GPU-accelerated TKG cluster on a Supervisor by using the kubectl command-line tool, create and configure a vSphere namespace.

VMware Aria Automation creates a namespace every time a deep learning VM is provisioned, automatically adding the content library to it.

Note: This documentation is based on VMware Cloud Foundation 5.2.1. For information on the VMware Private AI Foundation with NVIDIA functionality in VMware Cloud Foundation 5.2, see VMware Private AI Foundation with NVIDIA Guide for VMware Cloud Foundation 5.2.

Prerequisites

Procedure

  1. For a VMware Cloud Foundation 5.2.1 instance, log in to the vCenter Server instance for the VI workload domain at https://<vcenter_server_fqdn>/ui as administrator@vsphere.local.
  2. In the vSphere Client side panel, click Workload Management.
  3. On the Workload Management page, create the vSphere namespace, add resource limits, a storage policy, and permissions for DevOps engineers, and associate the vGPU-based VM classes with it.
  4. Add the content library for deep learning with deep learning VM images to the vSphere namespace for AI workloads.
    1. In the namespace for AI workloads, on the VM Service card, click Manage content libraries.
    2. Select the content library with the deep learning VM images and click OK.
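After the namespace is configured, a DevOps engineer can log in to the Supervisor with the kubectl vsphere plugin and deploy a deep learning VM by applying a VirtualMachine manifest. The sketch below is a minimal example only: the namespace name, VM class, image name, and storage class are placeholders that depend on your environment, and the VM Operator API version can differ between Supervisor releases.

```yaml
# Hypothetical manifest for a deep learning VM; all names are placeholders.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: my-deep-learning-vm
  namespace: my-ai-namespace        # vSphere namespace created in this procedure
spec:
  className: my-vgpu-vm-class       # a vGPU-based VM class associated with the namespace
  imageName: my-deep-learning-image # a deep learning VM image from the content library
  storageClass: my-storage-policy   # storage policy assigned to the namespace
  powerState: poweredOn
```

The engineer would first log in to the Supervisor, for example with kubectl vsphere login --server <supervisor_address> --vsphere-username <user>, switch context to the namespace, and then run kubectl apply -f on the manifest.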