Create a Workload cluster template and use it for deploying your workload clusters.

Prerequisites

To perform this operation, you require a role with Infrastructure Design privileges.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure > Cluster Templates and click Add.
  3. In the Template Details tab, provide the following details:
    • Name - Enter the name of the Workload cluster template.
    • Cluster Type - Select Workload Cluster.
    • Description (Optional) - Enter a description for the template.
    • Tags (Optional) - Add appropriate tags to the template.
  4. Click Next.
  5. In the Cluster Configuration tab, provide the following details:
    • Kubernetes Version - Select the Kubernetes version from the drop-down menu. For the list of supported versions, see Supported Features on Different VIM Types.
    • CNI - Click Add and select a Container Network Interface (CNI). The supported CNIs are Multus, Calico, and Antrea. To add additional CNIs, click Add under CNI.
      Note:
      • Exactly one of Calico or Antrea must be present. Multus is mandatory when the network functions require CNI plug-ins such as SRIOV or Host-Device (see the sketch after the plug-in list below).
      • You can add CNI plug-ins such as SRIOV as part of Node Customization when instantiating, upgrading, or updating a CNF.
      • The following CNIs or CNI plug-ins are available by default:
        Note: VMware Telco Cloud Automation does not support dhcp in an IPv6 environment.
        bandwidth, bridge, dhcp, firewall, flannel, host-device, host-local, ipvlan, loopback, macvlan, portmap, ptp, sbr, static, tuning, vlan
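      For illustration, with Multus present, network functions can attach secondary interfaces through NetworkAttachmentDefinition resources. The following is a minimal sketch using the host-device and host-local plug-ins from the list above; the resource name, device, and subnet are placeholder values, not fixed by VMware Telco Cloud Automation:
        apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          name: hostdev-net   # placeholder name
        spec:
          # host-device moves the named NIC into the pod; host-local assigns
          # an address from the subnet. Both appear in the plug-in list above.
          config: |
            {
              "cniVersion": "0.3.1",
              "type": "host-device",
              "device": "eth1",
              "ipam": { "type": "host-local", "subnet": "192.168.100.0/24" }
            }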
    • CSI - Click Add and select a Container Storage Interface (CSI) such as vSphere CSI or NFS Client. For more information, see https://vsphere-csi-driver.sigs.k8s.io/ and https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
      Note: You can create a persistent volume using vSphere CSI only if all nodes in the cluster have access to a shared datastore.
      • Timeout (Optional) (For vSphere CSI) - Enter the CSI driver call timeout in seconds. The default timeout is 300 seconds.
      • Storage Class - Enter the storage class name. This storage class is used to provision Persistent Volumes dynamically. A storage class with this name is created in the Kubernetes cluster. The storage class name defaults to vsphere-sc for the vSphere CSI type and nfs-client for the NFS Client type.
      • Default Storage Class - To set this storage class as the default, enable the Default Storage Class option. It defaults to True for the vSphere CSI type and to False for the NFS Client type. Only one of these types can be the default storage class (a sketch of the resulting storage class follows these options).
        Note: Only one vSphere CSI type and one NFS Client type storage class can be present. You cannot add more than one storage class of the same type.
      • To add additional CSIs, click Add under CSI.
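      For illustration, the storage class that these settings produce for the vSphere CSI type looks roughly like the following sketch; the storagepolicyname parameter and the claim name are assumptions, not values fixed by this procedure:
        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: vsphere-sc              # default name for the vSphere CSI type
          annotations:
            # set when the Default Storage Class option is enabled
            storageclass.kubernetes.io/is-default-class: "true"
        provisioner: csi.vsphere.vmware.com
        parameters:
          storagepolicyname: "vSAN Default Storage Policy"   # assumed policy name
        ---
        # A claim that dynamically provisions a Persistent Volume through it:
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: demo-claim              # placeholder name
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: vsphere-sc
          resources:
            requests:
              storage: 10Gi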
    • Tools - The currently supported tool is Helm. Helm helps in troubleshooting the deployment or upgrade of a network function.
      • Helm 3.x is pre-installed in the cluster, so the cluster template no longer provides an option to select Helm 3.x.
      • Helm version 2 is mandatory when the network functions deployed on this cluster depend on Helm v2. The supported Helm 2 version is 2.17.0.
      • If you provide Helm version 2, VMware Telco Cloud Automation automatically deploys Tiller pods in the Kubernetes cluster. If you require the Helm CLI to interact with your Kubernetes cluster for debugging purposes, install the Helm CLI manually.
      Note: If you require any version of Helm other than the installed versions, you must install it manually.
      Click Add and select Helm from the drop-down menu. Enter the Helm version.
  6. Click Next.
  7. In the Master Node Configuration tab, enter the following details:
    • Name - Name of the pool. The node pool name cannot be greater than 36 characters.
    • CPU - Number of vCPUs.
    • Memory - Memory in GB.
    • Storage - Storage size in GB. The minimum disk size required is 50 GB.
    • Replica - Number of controller node VMs to be created. The ideal number of replicas for a production or staging deployment is 3.
    • Networks - Enter the labels to group the networks. At least one label is required to connect to the management network. Network labels are used to provide network inputs when deploying a cluster. Meaningful network labels such as N1, N2, N3, and so on, help users provide the correct network preferences during deployment. To add more labels, click Add.
    • Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to the Kubernetes node. To add more labels, click Add.
      Note: For the Management network, the master node supports only one label.
  8. To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes, click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
  9. In the Worker Node Configuration tab, add a node pool. A node pool is a set of nodes with similar VM configurations. Pooling is useful when you want to group VMs based on the number of CPUs, storage capacity, memory capacity, and so on. You can add multiple node pools with different groups of VMs. Each node pool can be deployed on a different vSphere cluster or resource pool.
    Note: All Worker nodes in a node pool contain the same Kubelet and operating system configuration. Deploy one network function with infrastructure requirements on one node pool.
    You can create multiple node pools for the following scenarios:
    • When you require the Kubernetes cluster to be spanned across multiple vSphere clusters.
    • When the cluster is used for multiple network functions that require node customizations.
    To add a node pool, enter the following details:
    • Name - Name of the node pool. The node pool name cannot be greater than 36 characters.
    • CPU - Number of vCPUs.
    • Memory - Memory in GB.
    • Storage - Storage size in GB. The minimum disk size required is 50 GB.
    • Replica - Number of worker node VMs to be created.
    • Networks - Enter the labels to group the networks. These labels are used to provide network inputs during cluster deployment. Add additional labels for network types such as ipvlan, macvlan, and host-device. Meaningful network labels such as N1, N2, N3, and so on, help users provide the correct network preferences during deployment. It is mandatory to include a management interface label. SR-IOV interfaces are added to the Worker nodes when deploying the network functions.
      Note: A label must not exceed 15 characters.
      Apart from the management network, which is always the first network, the other labels are used as interface names inside the Worker nodes. For example, when you deploy a cluster using a template with the labels MANAGEMENT, N1, and N2, the Worker node interface names are eth0, N1, and N2. To add more labels, click Add.
    • Labels - Enter the appropriate labels for this profile. These labels are applied to the Kubernetes node, and you can use them as node selectors when instantiating a network function, as in the sketch below. To add more labels, click Add.
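      For illustration, a network function workload can be pinned to this node pool through these labels; the label key and value, pod name, and image below are placeholders:
        apiVersion: v1
        kind: Pod
        metadata:
          name: cnf-demo                 # placeholder name
        spec:
          nodeSelector:
            nodepool: pool-a             # assumed label applied to the nodes in this pool
          containers:
          - name: app
            image: registry.example.com/cnf-app:1.0   # placeholder image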
  10. Under CPU Manager Policy, set CPU reservations on the Worker nodes as Static or Default. For information about controlling CPU Management Policies on the nodes, see the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/.
    Note: For CPU-intensive workloads, use Static as the CPU Manager Policy.
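    With the Static policy, the kubelet grants exclusive cores to containers in Guaranteed QoS pods that request whole CPUs, as in this minimal sketch; the pod name and image are placeholders:
      apiVersion: v1
      kind: Pod
      metadata:
        name: pinned-dataplane           # placeholder name
      spec:
        containers:
        - name: dataplane
          image: registry.example.com/dpdk-app:1.0   # placeholder image
          resources:
            requests:
              cpu: "4"          # integer CPU count, equal to the limit...
              memory: 8Gi
            limits:
              cpu: "4"          # ...so the pod is Guaranteed and receives exclusive cores
              memory: 8Gi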
  11. To enable Machine Health Check, select Configure Machine Health Check. For more information, see Machine Health Check.
  12. Under Advanced Configuration, you can configure the Node Start Up Timeout duration and set the unhealthy conditions.
    1. (Optional) Enter the Node Start Up Timeout duration that Machine Health Check waits for a node to join the cluster. If a node does not join within the specified time, Machine Health Check considers it unhealthy.
    2. Set unhealthy conditions for the nodes. If any of these conditions are met, Machine Health Check considers the node unhealthy and starts the remediation process.
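    Conceptually, these settings map to a Cluster API MachineHealthCheck resource. A minimal sketch, assuming the cluster.x-k8s.io/v1beta1 API; all names, labels, and timeouts are placeholders:
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: MachineHealthCheck
      metadata:
        name: workload-pool-a-mhc        # placeholder name
      spec:
        clusterName: workload-01         # placeholder cluster name
        nodeStartupTimeout: 20m          # the Node Start Up Timeout from this step
        selector:
          matchLabels:
            nodepool: pool-a             # placeholder machine label
        unhealthyConditions:             # the unhealthy conditions from this step
        - type: Ready
          status: Unknown
          timeout: 300s
        - type: Ready
          status: "False"
          timeout: 300s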
  13. To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes, click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
  14. Click Next and review the configuration.
  15. Click Add Template.

Results

The template is created.

What to do next

Deploy a Workload cluster using this template.