As a DevOps engineer, you can review available VM resources and provision a stand-alone VM in a namespace on a Supervisor. Use the kubectl command to perform the following tasks.


To be able to deploy a stand-alone VM in vSphere with Tanzu, a DevOps engineer must have access to specific VM resources. Before you start, make sure that a vSphere administrator has made these resources, such as VM classes, VM images in a content library, and storage classes, available on your namespace.

View VM Resources Available on a Namespace in vSphere with Tanzu

As a DevOps engineer, verify that you can access VM resources on your namespace, and view VM classes and VM templates available in your environment. You can also list storage classes and other items you might need to self-service a VM.

This task covers commands you use to access resources available for a deployment of a stand-alone VM. For information about resources necessary to deploy Tanzu Kubernetes Grid clusters and VMs that make up the clusters, see Virtual Machine Classes for TKG Clusters in the Using Tanzu Kubernetes Grid 2 with vSphere with Tanzu documentation.


  1. Access your namespace in the Kubernetes environment.
  2. To view VM classes available in your namespace, run the following command.
    kubectl get virtualmachineclassbindings
    You can see the following output.
    Note: Because the best effort VM class type allows resources to be overcommitted, you can run out of resources if you have set limits on the namespace where you are provisioning the VMs. For this reason, use the guaranteed VM class type in the production environment.
    NAME                       VIRTUALMACHINECLASS        AGE
    best-effort-large          best-effort-large          44m
    best-effort-medium         best-effort-medium         44m
    best-effort-small          best-effort-small          44m
    best-effort-xsmall         best-effort-xsmall         44m
    custom                     custom                     44m
  3. To view details of a specific VM class, run the following commands.
    • kubectl describe virtualmachineclasses name_vm_class

      If a VM class includes a vGPU device, you can see its profile under spec: hardware: devices: vgpuDevices.

          cpus: 4
            - profileName: grid_v100-q4
    • kubectl get virtualmachineclasses -o wide

      If the VM class includes a vGPU or a passthrough device, the output shows it in the VGPUDevicesProfileNames or PassthroughDeviceIDs column.

  4. View the VM images.
    kubectl get virtualmachineimages
    The output you see is similar to the following.
    NAME                                              VERSION  OSTYPE                FORMAT  IMAGESUPPORTED  AGE
    centos-stream-8-vmservice-v1alpha1-xxxxxxxxxxxxx           centos8_64Guest       ovf     true            4d3h
  5. To describe a specific image, use the following command.
    kubectl describe virtualmachineimage/centos-stream-8-vmservice-v1alpha1-xxxxxxxxxxxxx

    VMs with vGPU devices require images that have boot mode set to EFI, such as CentOS. Make sure to have access to these images.

  6. Verify that you can access storage classes.
    kubectl get resourcequotas
    NAME                        AGE   REQUEST                 LIMIT
    my-ns-ubuntu-storagequota   24h   0/9223372036854775807
  7. If you are using vSphere Distributed Switch for your workload networking, obtain the name of the network.
    Note: You use this information to specify the networkName parameter in the VM YAML file when the networkType is vsphere-distributed. You do not need to obtain and specify the network name if you use VMware NSX.
    kubectl get network
    NAME      AGE
    primary   7d2h
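The discovery commands in this task can also be scripted. The following is a minimal sketch that filters the VM class bindings of a given type from the listing in step 2. The `list_class_bindings` function is a stand-in that inlines the sample output shown above; in practice you would pipe `kubectl get virtualmachineclassbindings` into awk instead.

```shell
# Hypothetical helper: stands in for
#   kubectl get virtualmachineclassbindings
# by emitting the sample output from step 2.
list_class_bindings() {
  cat <<'EOF'
NAME                       VIRTUALMACHINECLASS        AGE
best-effort-large          best-effort-large          44m
best-effort-medium         best-effort-medium         44m
best-effort-small          best-effort-small          44m
best-effort-xsmall         best-effort-xsmall         44m
custom                     custom                     44m
EOF
}

# Print only the class names that start with a given prefix,
# skipping the header row.
list_class_bindings | awk -v prefix="best-effort" \
  'NR > 1 && index($1, prefix) == 1 { print $1 }'
```

Changing the `prefix` variable to `guaranteed` would, on a namespace that binds guaranteed classes, list only those classes, which is useful given the overcommitment note in step 2.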

Deploy a Virtual Machine in vSphere with Tanzu

As a DevOps engineer, provision a VM and its guest OS in a declarative manner by writing VM deployment specifications in a Kubernetes YAML file.


If you use NVIDIA vGPU or other PCI devices for your VMs, the following considerations apply:
  • Make sure to use appropriate VM class with PCI configuration. See Add PCI Devices to a VM Class in vSphere with Tanzu.
  • VMs with vGPU devices require images that have boot mode set to EFI, such as CentOS.
  • VMs with vGPU devices that are managed by VM Service are automatically powered off when an ESXi host enters maintenance mode. This might temporarily affect workloads running in the VMs. The VMs are automatically powered on after the host exits maintenance mode.
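For reference, a VM class that carries a vGPU device is defined roughly as follows. This is a sketch only: the class name and the CPU and memory values are illustrative, while the `vgpuDevices` structure and the `grid_v100-q4` profile match the `kubectl describe virtualmachineclasses` output shown in the previous task.

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: custom            # illustrative class name
spec:
  hardware:
    cpus: 4
    memory: 16Gi          # illustrative value
    devices:
      vgpuDevices:
      - profileName: grid_v100-q4
```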


  1. Prepare the VM YAML file.
    In the file, specify the following parameters:
    • apiVersion. Specifies the version of the VM Service API, such as vmoperator.vmware.com/v1alpha1.
    • kind. Specifies the type of Kubernetes resource to create. The only available value is VirtualMachine.
    • spec.imageName. Specifies the content library image the VM should use.
    • spec.storageClass. Identifies the storage class to be used for storage of the persistent volumes.
    • spec.className. Specifies the name of the VM class that describes the virtual hardware settings to be used.
    • spec.networkInterfaces. Specifies network-related settings for the VM.
      networkType. Values for this key can be nsx-t or vsphere-distributed.
      networkName. Specify the name only if networkType is vsphere-distributed. You can obtain this information using the kubectl get network command. If networkType is nsx-t, you do not need to indicate networkName.
    • spec.vmMetadata. Includes additional metadata to pass to the VM. You can use this key to customize the guest OS image and set such items as the hostname of the VM and user-data, including passwords, ssh keys, and so on. The example YAML below uses a ConfigMap to store the metadata.
    • metadata.labels.topology.kubernetes.io/zone. Controls the VM placement on a three-zone Supervisor. For example, zone-a02.
    Use the following as an example of a YAML file ubuntu-impish-vm.yaml.
    apiVersion: vmoperator.vmware.com/v1alpha1
    kind: VirtualMachine
    metadata:
      name: ubuntu-impish-vm
      namespace: sr-1
      annotations:
        vmoperator.vmware.com/image-supported-check: disable
    spec:
      networkInterfaces:
      - networkName: ""
        networkType: nsx-t
      className: best-effort-medium
      imageName: ubuntu-impish-21.10-cloudimg
      powerState: poweredOn
      storageClass: vsan-default-storage-policy
      vmMetadata:
        configMapName: user-data-2
        transport: CloudInit
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-data-2
      namespace: sr-1
    data:
      user-data: |
        #cloud-config
        ssh_pwauth: true
        users:
        - name: vmware
          sudo: ALL=(ALL) NOPASSWD:ALL
          lock_passwd: false
          passwd: '$1$salt$SOC33fVbA/ZxeIwD5yw1u1'
          shell: /bin/bash
        write_files:
        - content: |
            VMSVC Says Hello World
          path: /helloworld
    The ConfigMap contains the cloud-config blob that specifies the username and password for the guest OS.
    For more information about cloud-config specifications, see the cloud-init documentation.
  2. Deploy the VM.
    kubectl apply -f ubuntu-impish-vm.yaml
  3. Verify that the VM has been created.
    kubectl get vm -n sr-1
    NAME              AGE
    ubuntu-impish-vm  28s
  4. Check the status of the VM and associated events.
    kubectl describe virtualmachine ubuntu-impish-vm

    The output is similar to the following. From the output, you can also obtain the IP address of the VM, which appears in the Vm Ip field.

    Name:         ubuntu-impish-vm
    Namespace:    sr-1
    Annotations:  vmoperator.vmware.com/image-supported-check: disable
    API Version:  vmoperator.vmware.com/v1alpha1
    Kind:         VirtualMachine
    Metadata:
      Creation Timestamp:  2021-03-23T19:07:36Z
      Generation:  1
      Managed Fields:
        ...
    Spec:
      Class Name:  best-effort-medium
      Image Name:  ubuntu-impish-21.10-cloudimg
      Network Interfaces:
        Network Name:  ""
        Network Type:  nsx-t
      Power State:     poweredOn
      Storage Class:   vsan-default-storage-policy
      Vm Metadata:
        Config Map Name:  user-data-2
        Transport:        CloudInit
    Status:
      Bios UUID:              4218ec42-aeb3-9491-fe22-19b6f954ce38
      Change Block Tracking:  false
      Conditions:
        Last Transition Time:  2021-03-23T19:08:59Z
        Status:                True
        Type:                  VirtualMachinePrereqReady
      Instance UUID:           50180b3a-86ee-870a-c3da-90ddbaffc950
      Phase:                   Created
      Power State:             poweredOn
      Unique ID:               vm-73
      Vm Ip:
    Events:                    <none>
  5. Verify that the VM IP is reachable.
    ping <vm_ip>
    PING <vm_ip> (<vm_ip>): 56 data bytes
    64 bytes from <vm_ip>: icmp_seq=0 ttl=59 time=43.528 ms
    64 bytes from <vm_ip>: icmp_seq=1 ttl=59 time=53.885 ms
    64 bytes from <vm_ip>: icmp_seq=2 ttl=59 time=31.581 ms
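The example manifest in this task targets NSX networking. If your Supervisor uses a vSphere Distributed Switch instead, the networkInterfaces section of the manifest must name the network explicitly. A minimal sketch of that fragment, assuming the primary network returned by the kubectl get network command in the previous task:

```yaml
spec:
  networkInterfaces:
  - networkName: primary              # from kubectl get network
    networkType: vsphere-distributed
```

The rest of the manifest, className, imageName, storageClass, and vmMetadata, stays the same as in the NSX example.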


A VM created through the VM Service can be managed only by DevOps engineers from the Kubernetes namespace. Its life cycle cannot be managed from the vSphere Client, but vSphere administrators can monitor the VM and its resources. For more information, see Monitor Virtual Machines Available in vSphere with Tanzu.
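If you automate deployments, the IP lookup from step 4 can be scripted as well. The following sketch extracts the Vm Ip field from kubectl describe virtualmachine output. The `describe_vm` function and the 192.0.2.10 address are placeholders; in practice you would pipe the real kubectl command into awk instead.

```shell
# Hypothetical stand-in for:
#   kubectl describe virtualmachine ubuntu-impish-vm
# It emits an abbreviated sample of the output from step 4,
# with a placeholder documentation address as the Vm Ip.
describe_vm() {
  cat <<'EOF'
Name:         ubuntu-impish-vm
Namespace:    sr-1
  Power State:             poweredOn
  Vm Ip:                   192.0.2.10
EOF
}

# Pull the last field of the "Vm Ip:" line.
vm_ip=$(describe_vm | awk '/Vm Ip:/ { print $NF }')
echo "$vm_ip"
```

With the address in a variable, the reachability check from step 5 becomes `ping "$vm_ip"`.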

What to do next

For additional details, see the Introducing Virtual Machine Provisioning blog.

Install the NVIDIA Guest Driver in a VM in vSphere with Tanzu

If the VM includes a PCI device configured for vGPU, after you create and boot the VM in your vSphere with Tanzu environment, install the NVIDIA vGPU graphics driver to fully enable GPU operations.


  • Make sure that the VM you created references the VM class with vGPU definition. See Add PCI Devices to a VM Class in vSphere with Tanzu.
  • Verify that you downloaded the vGPU software package from the NVIDIA download site, uncompressed the package, and have the guest driver component ready. For information, see the appropriate NVIDIA Virtual GPU Software documentation.
    Note: The version of the driver component must correspond to the version of the vGPU Manager that a vSphere administrator installed on the ESXi host.


  1. Copy the NVIDIA vGPU software Linux driver package, for example, to the guest VM.
  2. Before attempting to run the driver installer, terminate all applications.
  3. Start the NVIDIA vGPU driver installer.
    sudo ./
  4. Accept the NVIDIA software license agreement and select Yes to update the X configuration settings automatically.
  5. Verify that the driver has been installed.
    For example,
    ~$ nvidia-smi
    Wed May 19 22:15:04 2021
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 460.63       Driver Version: 460.63       CUDA Version: 11.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  GRID V100-4Q        On   | 00000000:02:00.0 Off |                  N/A |
    | N/A   N/A    P0    N/A /  N/A |    304MiB /  4096MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
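The prerequisites note that the guest driver version must correspond to the vGPU Manager version on the ESXi host. That check can be scripted as a sketch like the following. The EXPECTED_VERSION value and the hard-coded driver_version are illustrative; on a real VM you would obtain the version with `nvidia-smi --query-gpu=driver_version --format=csv,noheader` and ask your vSphere administrator for the host-side version.

```shell
# Hypothetical vGPU Manager version installed on the ESXi host.
EXPECTED_VERSION="460.63"

# Illustrative value; on a real VM, capture it with:
#   driver_version=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)
driver_version="460.63"

if [ "$driver_version" = "$EXPECTED_VERSION" ]; then
  echo "Guest driver $driver_version matches the host vGPU Manager."
else
  echo "Version mismatch: guest $driver_version, host $EXPECTED_VERSION" >&2
  exit 1
fi
```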