Create a Workload cluster template and use it to deploy your workload clusters.

Prerequisites

To perform this operation, you require a role with Infrastructure Design privileges.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to CaaS Infrastructure > Cluster Templates and click Add.
  3. In the Template Details tab, provide the following details:
    • Name - Enter the name of the Workload cluster template.
    • Cluster Type - Select Workload Cluster.
    • Description (Optional) - Enter a description for the template.
    • Tags (Optional) - Add appropriate tags to the template.
  4. Click Next.
  5. In the Cluster Configuration step, provide the following details:
    • Kubernetes Version - Select the Kubernetes version from the drop-down menu. The supported versions are 1.17.9, 1.17.11, 1.18.8, and 1.19.1.
    • CNI - Click Add and select a Container Network Interface (CNI). The supported CNIs are Multus, Calico, and Antrea. To add additional CNIs, click Add under CNI.
      Note:
      • Exactly one of Calico or Antrea must be present. Multus is mandatory when the network functions require additional CNI plugins such as SR-IOV or Host-Device (see the NetworkAttachmentDefinition sketch after this step).
      • You can add CNI plugins such as SR-IOV as part of Node Customization when instantiating, upgrading, or updating a CNF.
      • The following CNIs or CNI plugins are available by default: bandwidth, bridge, dhcp, firewall, flannel, host-device, host-local, ipvlan, loopback, macvlan, portmap, ptp, sbr, static, tuning, and vlan.
    • CSI - Click Add and select a Container Storage Interface (CSI) such as vSphere CSI or NFS Client. For more information, see https://vsphere-csi-driver.sigs.k8s.io/ and https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
      • Timeout (Optional, vSphere CSI only) - Enter the CSI driver call timeout in seconds. The default timeout is 300 seconds.
      • Storage Class - Enter the storage class name. A storage class with this name is created in the Kubernetes cluster and is used to provision Persistent Volumes dynamically. The name defaults to vsphere-sc for the vSphere CSI type and to nfs-client for the NFS Client type. See the StorageClass sketch after this step.
      • Default Storage Class - To set this storage class as the default, enable the Default Storage Class option. The option defaults to True for the vSphere CSI type and to False for the NFS Client type. Only one storage class can be the default.
        Note: Only one vSphere CSI type and one NFS Client type storage class can be present. You cannot add more than one storage class of the same type.
      • To add additional CSIs, click Add under CSI.
    • Tools - The currently supported tool is Helm. Helm helps in troubleshooting the deployment or upgrade of a network function.
      • Helm version 3, for example, 3.3.1, is mandatory when the NFS Client CSI is added.
      • Helm version 2, for example, 2.15.2, is mandatory when the network functions deployed on this cluster depend on Helm v2.
        Note: Only one Helm version 2 and one Helm version 3 can be installed.
      Click Add and select Helm from the drop-down menu. Enter the Helm version.
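    For reference, Multus exposes secondary interfaces to pods through NetworkAttachmentDefinition resources. The following is a minimal sketch assuming the standard Multus CRD; the resource name n1-host-device and the device eth1 are hypothetical placeholders, not values that VMware Telco Cloud Automation creates for you.

      # Hypothetical Multus NetworkAttachmentDefinition that exposes a secondary
      # interface through the host-device CNI plugin listed above.
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: n1-host-device      # placeholder name
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "type": "host-device",
            "device": "eth1"
          }

    A pod attaches to this network by adding the annotation k8s.v1.cni.cncf.io/networks: n1-host-device to its metadata.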
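    Similarly, the Storage Class setting corresponds to a Kubernetes StorageClass object. The following is a minimal sketch assuming the standard vSphere CSI provisioner name csi.vsphere.vmware.com; the exact parameters of the generated class are managed by VMware Telco Cloud Automation, and the PersistentVolumeClaim is a hypothetical consumer.

      # Sketch of the default StorageClass as the vSphere CSI entry might create it.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: vsphere-sc
        annotations:
          # This annotation marks a storage class as the cluster default.
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: csi.vsphere.vmware.com
      ---
      # Hypothetical claim that dynamically provisions a volume from the class.
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: demo-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: vsphere-sc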
  6. Click Next.
  7. In the Master Node Configuration tab, enter the following details:
    • Name - Name of the pool
    • CPU - Number of vCPUs
    • Memory - Memory in GB
    • Storage - Storage size in GB
    • Replica - Number of controller node VMs to be created. The recommended number of replicas for a production or staging deployment is three.
    • Networks - Enter the labels to group the networks. At least one label is required, to connect to the management network. Network labels provide network inputs when deploying a cluster. Meaningful network labels such as N1, N2, and N3 help the deployment users provide the correct network preferences. To add more labels, click Add.
    • Labels (Optional) - Enter the appropriate labels for this profile. These labels are applied to the Kubernetes node. To add more labels, click Add.
      Note: The master node supports only one network label, which is used for the Management network.
  8. To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes, click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
  9. In the Worker Node Configuration tab, add a node pool. A node pool is a group of worker node VMs that share the same configuration. Pooling is useful when you want to group VMs based on the number of CPUs, storage capacity, memory capacity, and so on. You can add multiple node pools with different groups of VMs, and each node pool can be deployed on a different cluster or resource pool.
    Note: All Worker nodes in a node pool contain the same Kubelet and operating system configuration. Deploy one network function with infrastructure requirements on one node pool.
    You can create multiple node pools for the following scenarios:
    • When you require the Kubernetes cluster to be spanned across multiple vSphere clusters.
    • When the cluster is used for multiple network functions that require node customizations.
    To add a node pool, enter the following details:
    • Name - Name of the node pool
    • CPU - Number of vCPUs
    • Memory - Memory in MB
    • Storage - Storage size in GB
    • Replica - Number of worker node VMs to be created.
    • Networks - Enter the labels to group the networks. These labels provide network inputs when deploying a cluster. Add more labels for network types such as ipvlan, macvlan, and host-device. Meaningful network labels such as N1, N2, and N3 help users provide the correct network preferences during deployment. A management interface label is mandatory. SR-IOV interfaces are added to the Worker nodes when the network functions are deployed.
      Note: A label must not exceed 15 characters.
      Apart from the management network, which is always the first network, the labels are used as interface names inside the Worker nodes. For example, when you deploy a cluster using a template with the labels MANAGEMENT, N1, and N2, the Worker node interface names are eth0, N1, and N2. To add more labels, click Add.
    • Labels - Enter the appropriate labels for this profile. These labels are applied to the Kubernetes node, and you can use them as node selectors when instantiating a network function, as shown in the sketch after this list. To add more labels, click Add.
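    A node-pool label becomes a standard Kubernetes node label, so a network function pod can target the pool with a node selector. This is a minimal sketch; the label key nodepool, its value pool-a, and the container image are hypothetical placeholders.

      # Hypothetical pod pinned to a node pool through its node label.
      apiVersion: v1
      kind: Pod
      metadata:
        name: cnf-example
      spec:
        nodeSelector:
          nodepool: pool-a                       # label entered in the node pool template
        containers:
          - name: app
            image: registry.example.com/cnf:1.0  # placeholder image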
  10. Under CPU Manager Policy, set CPU reservations on the Worker nodes as Static or Default. For information about controlling CPU Management Policies on the nodes, see the Kubernetes documentation at https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/.
    Note: For CPU-intensive workloads, use Static as the CPU Manager Policy; see the sketch after this step.
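    The Static setting corresponds to the kubelet static CPU manager policy, under which only Guaranteed-QoS pods that request whole CPUs receive exclusive cores. The following is a minimal sketch; the KubeletConfiguration fragment is applied on the node by the platform rather than with kubectl, and the pod and image names are hypothetical placeholders.

      # Kubelet-side setting that the Static option maps to.
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      cpuManagerPolicy: static
      ---
      # Hypothetical pod that qualifies for exclusive CPUs: Guaranteed QoS
      # (limits equal requests) with an integer CPU count.
      apiVersion: v1
      kind: Pod
      metadata:
        name: pinned-workload
      spec:
        containers:
          - name: app
            image: registry.example.com/cnf:1.0  # placeholder image
            resources:
              requests:
                cpu: "2"
                memory: 4Gi
              limits:
                cpu: "2"
                memory: 4Gi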
  11. To use the vSphere Linked Clone feature for creating linked clones for the Kubernetes nodes, click Advanced Configuration and select Use Linked Cloning for Cloning the VMs.
  12. Click Next and review the configuration.
  13. Click Add Template.

Results

The template is created.

What to do next

Deploy a Management or Workload cluster.