Starting with VMware Cloud Director 10.3.1, you can create Tanzu Kubernetes Grid clusters by using the Kubernetes Container Clusters UI plug-in.

Procedure

  1. Log in to VMware Cloud Director, and from the top navigation bar, select More > Kubernetes Container Clusters > New.
  2. Select the VMware Tanzu Kubernetes Grid runtime option, and click Next.
  3. Enter a name, select a Kubernetes template from the list, and click Next.
  4. In the VDC & Network window, select the organization VDC to which you want to deploy a Tanzu Kubernetes Grid cluster, select a VDC network for the cluster, and click Next.
  5. In the Control Plane window, select the number of nodes and the disk size, optionally select a sizing policy, a placement policy, and a storage profile, and click Next.
    Note: The number of nodes input allows for clusters to have multiple control plane nodes.
  6. In the Worker Pools window, enter a name, the number of nodes, and the disk size, optionally select a sizing policy, a placement policy, and a storage profile, and click Next. For more information on worker node pools, see Working with Worker Node Pools.
    Note:
    • To configure vGPU settings, select the Activate GPU toggle and select a vGPU policy. For more information on vGPU configuration, see Configuring vGPU on Tanzu Kubernetes Grid Clusters to allow AI and ML Workloads.
    • When you create clusters with vGPU functionality, it is recommended to increase the disk size to between 40 and 50 GB, because vGPU libraries occupy a large amount of storage space.
    • You can select a sizing policy in this workflow or separately in the VMware Cloud Director Container Service Extension server configuration. If you select a sizing policy together with a vGPU policy that contains VM sizing information, the sizing information in the vGPU policy takes precedence over the selected sizing policy. It is recommended to include sizing information in your vGPU policy and to specify only the vGPU policy, leaving the Sizing Policy field empty.
  7. (Optional) To create additional worker node pools, click Add New Worker Pool, and configure worker node pool settings.
  8. Click Next.
  9. In the Kubernetes Storage window, activate the Create Default Storage Class toggle, select a storage profile and enter a storage class name.
  10. (Optional) Configure Reclaim Policy and Filesystem settings.
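    The Create Default Storage Class toggle creates a Kubernetes StorageClass object on the new cluster for you. For reference, the following is a minimal sketch of a roughly equivalent StorageClass, written with the Kubernetes Python client. The provisioner name and the storageProfile and filesystem parameter keys are assumptions modeled on the VMware Cloud Director named-disk CSI driver; verify them against the CSI driver in your environment.

      # Sketch: a default storage class comparable to what the toggle provisions.
      # The provisioner and parameter keys are assumptions; confirm them against
      # the CSI driver that your cluster actually runs.
      from kubernetes import client, config

      config.load_kube_config()  # authenticate with the cluster's kubeconfig

      storage_class = client.V1StorageClass(
          api_version="storage.k8s.io/v1",
          kind="StorageClass",
          metadata=client.V1ObjectMeta(
              name="default-storage-class",  # the storage class name from the UI
              annotations={"storageclass.kubernetes.io/is-default-class": "true"},
          ),
          provisioner="named-disk.csi.cloud-director.vmware.com",  # assumed driver
          reclaim_policy="Delete",  # the Reclaim Policy setting: Delete or Retain
          parameters={
              "storageProfile": "*",  # assumed key: the selected storage profile
              "filesystem": "ext4",   # assumed key: the Filesystem setting
          },
      )
      client.StorageV1Api().create_storage_class(storage_class)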
  11. In the Kubernetes Network window, specify a range of IP addresses for Kubernetes services and a range for Kubernetes pods, and click Next.

    Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and routing IP traffic. A short sketch that sanity-checks these ranges follows the tables below.

    Pods CIDR: Specifies the range of IP addresses to use for Kubernetes pods. The default value is 100.96.0.0/11. The pod subnet must be at least as large as a /24, that is, a prefix length of 24 or smaller. You can enter one IP range.
    Services CIDR: Specifies the range of IP addresses to use for Kubernetes services. The default value is 100.64.0.0/13. You can enter one IP range.
    Control Plane IP: Tenant users can specify their own IP address as the control plane endpoint. They can use an external IP address from the gateway, or an internal IP address from a subnet that is different from the routed IP range. If they do not specify an IP address, the VMware Cloud Director Container Service Extension server selects an unused IP address from the associated tenant gateway.
    Virtual IP Subnet: Tenant users can specify a subnet CIDR from which one unused IP address is assigned as the control plane endpoint. The subnet must represent a set of addresses that are present on the gateway. The same CIDR is also propagated as the subnet CIDR for ingress services on the cluster.
    You can use the following IP addresses as the Control Plane IP:
    External IP addresses: Any of the IP addresses on the external gateway that connects to the OVDC network.
    Internal IP addresses: Any private IP address that is internal to the tenant, with the following exceptions:
    • IP addresses in the load balancer (LB) network service definition, usually 192.168.255.1/24.
    • IP addresses in the organization VDC IP subnet.
    • IP addresses that are already in use.
    Note: If you specify an IP address that does not meet these requirements, the following behavior occurs:
    • If the IP address is already in use, and VMware Cloud Director detects the usage, an error appears in the logs during LB creation.
    • If the IP address is already in use, and VMware Cloud Director does not detect the usage, the behavior is undefined.
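    Before you submit these values, you can sanity-check them yourself. The following is a minimal sketch that uses only the Python standard library ipaddress module. The OVDC subnet shown is a placeholder, and the sketch assumes, as is typical for Kubernetes, that the pod, service, and routed network ranges must not overlap.

      # Sketch: validate the Kubernetes Network settings before entering them.
      import ipaddress

      pods_cidr = ipaddress.ip_network("100.96.0.0/11")      # default Pods CIDR
      services_cidr = ipaddress.ip_network("100.64.0.0/13")  # default Services CIDR
      vdc_network = ipaddress.ip_network("192.168.1.0/24")   # placeholder OVDC subnet

      # The pod subnet must be /24 or larger, that is, a prefix length of 24 or less.
      assert pods_cidr.prefixlen <= 24, "Pods CIDR must be at least a /24"

      # None of the ranges may overlap one another.
      ranges = [pods_cidr, services_cidr, vdc_network]
      for i, first in enumerate(ranges):
          for second in ranges[i + 1:]:
              assert not first.overlaps(second), f"{first} overlaps {second}"
      print("CIDR ranges look consistent")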
  12. In the Debug Settings window, activate or deactivate the Auto Repair on Errors and Node Health Check toggles.
    Auto Repair on Errors: Applies to failures that occur during the cluster creation process. If you activate this toggle, the VMware Cloud Director Container Service Extension server attempts to recreate clusters that enter an error state during creation. If you deactivate this toggle, the server leaves the cluster in the error state for manual troubleshooting.
    Note: This toggle is deactivated by default in VMware Cloud Director Container Service Extension 4.1 and newer versions. Service providers must advise tenant users of this change, because it is a behavioral change from VMware Cloud Director Container Service Extension 4.0.
    Node Health Check: Unlike Auto Repair on Errors, whose remediation applies only during cluster creation, Node Health Check remediation begins after the cluster reaches an available state. If any node becomes unhealthy during the lifetime of the cluster, Node Health Check detects and remediates it. For more information, see Node Health Check Configuration.
    Note: This toggle is deactivated by default in VMware Cloud Director Container Service Extension 4.2.
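    For context, the following minimal sketch uses the Kubernetes Python client to read each node's Ready condition, which is the kind of per-node signal that health checking relies on. It is only an illustration; it is not the mechanism that the VMware Cloud Director Container Service Extension server itself uses.

      # Sketch: report every node's Ready condition in the current cluster.
      from kubernetes import client, config

      config.load_kube_config()  # authenticate with the cluster's kubeconfig
      for node in client.CoreV1Api().list_node().items:
          ready = next((c for c in node.status.conditions if c.type == "Ready"), None)
          status = ready.status if ready else "Unknown"
          print(f"{node.metadata.name}: Ready={status}")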
  13. Enter an SSH public key.
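    The field expects a public key in OpenSSH format, such as the output of ssh-keygen. If you prefer to generate a key pair programmatically, the following is a minimal sketch that uses the third-party cryptography package; the Ed25519 key type is an illustrative choice, so substitute RSA if your environment requires it.

      # Sketch: generate an OpenSSH-format public key to paste into this field.
      from cryptography.hazmat.primitives import serialization
      from cryptography.hazmat.primitives.asymmetric import ed25519

      private_key = ed25519.Ed25519PrivateKey.generate()
      public_key = private_key.public_key().public_bytes(
          encoding=serialization.Encoding.OpenSSH,
          format=serialization.PublicFormat.OpenSSH,
      )
      print(public_key.decode())  # a single "ssh-ed25519 ..." line

    Keep the corresponding private key safe; you use it later to connect to the cluster nodes over SSH.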
  14. Click Next.
  15. Review the cluster settings and click Finish.
    Note: In the Review window, a warning advises you that the cluster contains an API token of the cluster owner, and that you must not share the kubeconfig file or the cluster itself directly with others. Instead, create clusters as a tenant user of an organization.
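    To see why sharing the kubeconfig is risky, you can inspect the file before passing it on. The following minimal sketch uses the third-party PyYAML package and the standard kubeconfig schema to list user entries that embed credentials; the file name is a placeholder.

      # Sketch: list kubeconfig user entries that embed secrets such as a token.
      import yaml

      with open("kubeconfig.yaml") as f:  # placeholder path to the downloaded file
          kubeconfig = yaml.safe_load(f)

      for entry in kubeconfig.get("users", []):
          user = entry.get("user", {})
          secrets = [key for key in ("token", "client-key-data", "password") if key in user]
          if secrets:
              print(f"user {entry['name']!r} embeds credentials: {', '.join(secrets)}")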

Review Cluster Status

When you create a Tanzu Kubernetes Grid cluster in VMware Cloud Director Container Service Extension, the following statuses can appear:

Table 1. Cluster Status
Pending: The cluster request has not yet been processed by the VMware Cloud Director Container Service Extension server.
Creating: The cluster is currently being processed by the VMware Cloud Director Container Service Extension server.
Available: The cluster is ready for users to operate on and host workloads.
Deleting: The cluster is being deleted.
Error: The cluster is in an error state.
Note: If you want to manually debug a cluster, deactivate Auto Repair on Errors mode.
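If you script against these states, a simple polling loop can wait for a cluster to settle into a terminal state. The following is a minimal sketch; get_cluster_status is a hypothetical helper that you would implement against the VMware Cloud Director API, and the state names come from Table 1.

    # Sketch: wait until a cluster leaves the transitional states from Table 1.
    # get_cluster_status is a hypothetical callable returning one of the state
    # names above; wire it up to the VMware Cloud Director API in practice.
    import time

    TRANSITIONAL = {"Pending", "Creating", "Deleting"}

    def wait_for_cluster(get_cluster_status, poll_seconds=30):
        while True:
            status = get_cluster_status()
            if status not in TRANSITIONAL:
                return status  # "Available" or "Error"
            time.sleep(poll_seconds)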