Create a Workload cluster template and use it for deploying your workload clusters.

Prerequisites

You require a role with Infrastructure Design privileges.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure > Cluster Templates.
  3. Click Add and select Workload Cluster Template.
  4. In the Create Workload Cluster Template wizard, enter information for each of the sub-categories:
  5. Template Info
    • Name - Enter the name of the Workload cluster template.
  6. Destination Info
    • Management Cluster - Select the Management cluster from the drop-down menu. You can also select a Management cluster deployed in a different vCenter.

    • Destination Cloud - Select a cloud on which you want to deploy the Kubernetes cluster.

    • Datacenter - Select a data center that is associated with the cloud.

    Advanced Options - Provide the secondary cloud information here. These options apply when creating stretched clusters.

    • (Optional)

      Secondary Cloud - Select the secondary cloud. It is required for stretched cluster creation.

    • (Optional)

      Secondary Data Center - Select the secondary data center.

    • (Optional)

      NF Orchestration VIM - Provide the details of the VIM. VMware Telco Cloud Automation uses this VIM and associated Control Planes for NF life cycle management.

  7. Click Next.
  8. Cluster Info
    • TCA BOM Release - The TCA BOM Release file contains information about the Kubernetes version and add-on versions. You can select multiple BOM release files.

    • CNI - Select a Container Network Interface (CNI) such as Antrea or Calico.

    • Proxy Repository Access - Available only when the selected management cluster uses a proxy repository. Select the proxy repository from the drop-down list.

    • Airgap Repository Access - Available only when the selected management cluster uses an airgap repository. Select the airgap repository from the drop-down list.

    • IP Version - The IP version specified in the Management cluster is displayed here.

    • Cluster End Point - Enter the IP address of the API server load balancer.

    • Cluster (pods) CIDR - Enter the CIDR range for pods. VMware Telco Cloud Automation uses this CIDR pool to assign IP addresses to pods in the cluster.

    • Service CIDR - Enter the CIDR range for services. VMware Telco Cloud Automation uses this CIDR pool to assign IP addresses to services in the cluster.

    • Enable Autoscaler - Click the toggle button to activate the autoscaler feature.

      The autoscaler feature automatically adjusts the replica count on the node pool, increasing or decreasing it based on the workload. If you activate this feature for a particular cluster, you cannot deactivate it after the deployment. When you activate the autoscaler feature, the following fields are displayed:

      Note:

      The values in these fields are automatically populated from the cluster. However, you can edit the values.

      • Min Size - Sets the minimum number of worker nodes to which the autoscaler can scale down.

      • Max Size - Sets the maximum number of worker nodes to which the autoscaler can scale up.

      • Max Node - Sets the maximum total number of worker and control plane nodes to which the autoscaler can scale up. The default value is 0.

      • Max Node Provision Time - Sets the maximum time that the autoscaler waits for a node to be provisioned. The default value is 15 minutes.

      • Delay After Add - Sets the time limit for the autoscaler to start the scale-down operation after a scale-up operation. For example, if you specify the time as 10 minutes, autoscaler resumes the scale-down scan after 10 minutes of adding a node.

      • Delay After Failure - Sets the time limit for the autoscaler to restart the scale-down operation after a scale-down operation fails. For example, if you specify the time as 3 minutes and there is a scale-down failure, the next scale-down operation starts after 3 minutes.

      • Delay After Delete - Sets the time limit for the autoscaler to start the scale-down operation after deleting a node. For example, if you specify the time as 10 minutes, autoscaler resumes the scale-down scan after 10 minutes of deleting a node.

      • Unneeded Time - Sets the time limit for the autoscaler to scale down an unused node. For example, if you specify the time as 10 minutes, any unused node is scaled down only after 10 minutes.
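
      Before entering the Cluster (pods) CIDR and Service CIDR values, you can sanity-check that the two pools use the same IP version and do not overlap. The following is a minimal Python sketch; the example ranges are illustrative, not TCA-mandated values:

      ```python
      import ipaddress

      def check_cluster_cidrs(pods_cidr: str, service_cidr: str) -> None:
          pods = ipaddress.ip_network(pods_cidr)
          services = ipaddress.ip_network(service_cidr)
          # Both pools must use the same IP version as the Management cluster.
          if pods.version != services.version:
              raise ValueError("Pod and service CIDRs must use the same IP version")
          # Overlapping pools would cause pod/service address collisions.
          if pods.overlaps(services):
              raise ValueError("Pod and service CIDRs must not overlap")

      # Illustrative example pair: raises no exception.
      check_cluster_cidrs("100.96.0.0/11", "100.64.0.0/13")
      ```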

  9. Click Next.
  10. Control Plane Info
    • To configure Control Plane node placement, click the Settings icon in the Control Plane Node Placement table.

      • Name - Enter the name of the Control Plane node.

      • Destination Cloud - The destination cloud is selected by default. To make a different selection, use the drop-down menu.

      VM Placement

      • Datacenter - Select a data center for the Control Plane node.

      • Resource Pool - Select the default resource pool on which the Control Plane node is deployed.

      • VM Folder - Select the virtual machine folder on which the Control Plane node is placed.

      • Datastore - Select the default datastore for the Control Plane node.

      • VM Template - Select a VM template.

      VM Size

      • Number of Replicas - The number of control plane node VMs to create. The recommended number of replicas for a production or staging deployment is 3.

      • Number of vCPUs - If the underlying ESXi host has hyperthreading enabled, and the network function requires NUMA alignment and CPU reservation, provide an even number of vCPUs so that both logical processors of a physical CPU core are used by the same node.

      • Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

      • Memory - Enter the memory in GB.

      • Disk Size - Enter the disk size in GB.

      Network

      • Management Network - Select the Management network.

      • MTU - Enter the maximum transmission unit in bytes.

      • DNS - Provide comma-separated primary and secondary DNS servers.

      Labels

      • To add the appropriate labels for this profile, click Add Label. These labels are added to the Kubernetes node.

      Advanced Options

      • Clone Mode - Specify the type of clone operation. Linked Clone is supported on templates that have at least one snapshot. Otherwise, the clone mode defaults to Full Clone.

      • Certificate Expiry Days - Specify the number of days before expiry at which TKG automatically renews the cluster certificate. By default, the certificate expires after 365 days. For example, if you specify 50, the certificate is renewed 50 days before its expiry, that is, 315 days after issuance.
        The default value is 90 days. The minimum number of days you can specify is 7 and the maximum is 180.
        Note: You cannot edit the number of days after you deploy the cluster.
      • Kubeadmin Config Template (YAML) - Activate or deactivate the Kubeadmin Config Template YAML.

    • Click Apply.
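
      The Certificate Expiry Days arithmetic described above reduces to a subtraction from the default 365-day certificate lifetime. A small Python sketch (the helper name is illustrative, not part of TCA):

      ```python
      # Default certificate lifetime stated in the Certificate Expiry Days field help.
      CERT_LIFETIME_DAYS = 365

      def renewal_day(expiry_days_before: int) -> int:
          """Day (after issuance) on which TKG renews the certificate."""
          # The field accepts a minimum of 7 and a maximum of 180 days.
          if not 7 <= expiry_days_before <= 180:
              raise ValueError("Certificate Expiry Days must be between 7 and 180")
          return CERT_LIFETIME_DAYS - expiry_days_before

      print(renewal_day(50))  # 315: renewed 50 days before the 365-day expiry
      print(renewal_day(90))  # 275: the default setting of 90 days
      ```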

  11. Add-Ons

    To deploy an add-on such as NFS Client or Harbor, click Deploy Add-on.

    1. From the Select Add-On wizard, select the add-on and click Next.

    2. For add-on configuration information, see #GUID-18C37109-38A5-431D-A130-DD45C6E6AE96.

  12. Click Next.
  13. Node Pools
    • A node pool is a set of nodes that have similar properties. Pooling is useful when you want to group the VMs based on the number of CPUs, storage capacity, memory capacity, and so on. You can add one node pool to a Management cluster and multiple node pools to a Workload cluster, with different groups of VMs. To add a Worker node pool, click Add Worker Node Pool.

      • Name - Enter the name of the node pool.

      • Destination Cloud - The destination cloud is selected by default. To make a different selection, use the drop-down menu.

      VM Placement

      • Datacenter - Select a data center for the node pool.

      • Resource Pool - Select the default resource pool on which the node pool is deployed.

      • VM Folder - Select the virtual machine folder on which the node pool is placed.

      • Datastore - Select the default datastore for the node pool.

      • VM Template - Select a VM template.

      • Enable Autoscaler - This field is available only if autoscaler is enabled for the associated cluster. At the node level, you can activate or deactivate autoscaler based on your requirement.

        The following field values are automatically populated from the cluster.

        • Min Size (Optional) - Sets the minimum number of worker nodes to which the autoscaler can scale down. Edit the value, as required.

        • Max Size (Optional) - Sets the maximum number of worker nodes to which the autoscaler can scale up. Edit the value, as required.

          Note:
          • Using autoscaler on a cluster does not automatically change its node group size. Therefore, changing the maximum or minimum size does not scale the cluster up or down. When editing the autoscaler-configured maximum size of the node pool, ensure that the maximum size limit is less than or equal to the current replica count.
          • Do not edit the maximum size of the cluster while a scale-down operation is in progress.
          • You can view the scale-up and scale-down events under the Events tab of the Telco Cloud Automation portal.

      VM Size

      • Number of Replicas - The number of node pool VMs to create. The recommended number of replicas for a production or staging deployment is 3.

        Note:

        The Number of Replicas field is unavailable if autoscaler is enabled for the node.

      • Number of vCPUs - If the underlying ESXi host has hyperthreading enabled, and the network function requires NUMA alignment and CPU reservation, provide an even number of vCPUs so that both logical processors of a physical CPU core are used by the same node.

      • Cores Per Socket (Optional) - Enter the number of cores per socket if you require more than 64 cores.

      • Memory - Enter the memory in GB.

      • Disk Size - Enter the disk size in GB.

      Network

      • Management Network - Select the Management network.

      • MTU - Enter the maximum transmission unit in bytes.

      • DNS - Provide comma-separated primary and secondary DNS servers.

      • ADD NETWORK DEVICE - Click this button to add a dedicated NFS interface to the node pool, select the interface, and then enter the following:

        • Interface Name - Enter the interface name as tkg-nfs to reach the NFS server.

      Labels

      • To add the appropriate labels for this profile, click Add Label. These labels are added to the Kubernetes node.

      Advanced Options

      • Clone Mode - Specify the type of clone operation. Linked Clone is supported on templates that have at least one snapshot. Otherwise, the clone mode defaults to Full Clone.

      • To enable Machine Health Check, select Configure Machine Health Check.

      • Kubeadmin Config Template (YAML) - Activate or deactivate the Kubeadmin Config Template YAML.

    • Click Apply.
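
    The node pool autoscaler constraints noted above (Min Size no greater than Max Size, and an edited Max Size no greater than the current replica count) can be sketched as a pre-check in Python; the function and parameter names are illustrative, not the TCA API schema:

    ```python
    def validate_autoscaler_edit(min_size: int, max_size: int,
                                 current_replicas: int) -> None:
        # Min Size must not exceed Max Size.
        if min_size > max_size:
            raise ValueError("Min Size must be less than or equal to Max Size")
        # Per the note above, when editing the node pool, keep the
        # Max Size limit at or below the current replica count.
        if max_size > current_replicas:
            raise ValueError("Max Size must not exceed the current replica count")

    # Illustrative example: raises no exception.
    validate_autoscaler_edit(min_size=1, max_size=3, current_replicas=3)
    ```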

  14. Ready to Create - Click CREATE CLUSTER TEMPLATE.