You can add a node pool to a Kubernetes Workload cluster.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure > Cluster Instances.
  3. Click the Kubernetes cluster name that you want to configure.
  4. Click Node Pools > Add Node Pool.
  5. In the Node Pool Details window, enter the following information:
    • Name: Enter the name of the node pool. The node pool name cannot be greater than 36 characters.

    • Destination Cloud: Select the cloud for the node pool.

    • Datacenter: Select the datacenter for the node pool.
    • Resource Pool: Select the resource pool for the node pool.

    • VM Folder: Select the folder for the node pool machines.

    • Datastore: Select the datastore for the node pool machines. To use a datastore other than the default, select it here.

    • VM Template: Select the template for the node pool machines.

    • Replica: Select the number of worker node virtual machines in the node pool.

    • CPU: Select the number of virtual CPUs in the node pool.

    • Cores per Socket (Optional): Select the number of cores per socket in the node pool.
    • Memory: Select the amount of memory for the node pool.

    • Disk Size: Select the disk size. The minimum disk size must be 50 GB.
      Note: For BYOI TKG template, the minimum disk size is 70 GB.
    • Network: You can add the network details.

      • Network: Select the network that you want to associate with the label.

      • IPAM Type: Select DHCP or IP Pool.
        Note: You must provide a DNS server for the IP pool. A DNS server is optional for DHCP.
      • IP Pool: Select the IP pool that you want to use for the workload cluster.
        Note: The IP addresses that you added to the management cluster's IP pool are available for selection. Therefore, ensure that the management cluster you selected in the Destination Info section has an IP pool.
      • MTU (Optional): Provide the MTU value for the network. The minimum MTU value is 1500. The maximum MTU value depends on the configuration of the network switch.

      • DNS (Optional): Enter valid DNS server IP addresses as Domain Name Servers. These DNS servers are configured in the guest operating system of each node in the cluster. You can override this option on the Primary node and on each node pool of the Worker nodes. Separate multiple DNS servers with commas.

    • Labels: Add key-value pair labels to your nodes, to be used as node selectors when instantiating a network function.
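      For example, a network function pod can target this node pool through a matching nodeSelector. A minimal sketch, assuming a hypothetical label nodetype: dpdk was added to the node pool; the pod name and image are placeholders:

        apiVersion: v1
        kind: Pod
        metadata:
          name: sample-cnf                        # hypothetical pod name
        spec:
          nodeSelector:
            nodetype: dpdk                        # must match the node pool label key and value
          containers:
            - name: app
              image: registry.example.com/cnf:1.0 # placeholder image

      Pods with this nodeSelector are scheduled only onto nodes that carry the matching label.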

    • Variables: Click Add Variable and perform the following:
      1. Select workerKubeletExtraArgs from the drop-down menu.
      2. Enter the YAML code to specify the worker kubelet flags.

        Sample code to set the maximum limit of worker pods to 50:

        max-pods: '50'
        read-only-port: '10255'
        max-open-files: '100000'
      Note: This variable is not applicable for standard and Classy single node clusters.
    • Advanced Options
      • Maintenance Mode: Select the check box to enable the maintenance mode.
      • Clone Mode: Specify the type of clone operation:
        • Linked Clone: If the template contains at least one snapshot, select this option.
          Note: The Linked Clone mode ignores the DiskGiB field as it is not allowed to expand the disks of linked clones.
        • Full Clone: If the template has no snapshots, select this option.
          Note: If the source of your clone has no snapshots, the system automatically defaults to Full Clone.
      • Configure Machine Health Check: Select the check box to configure machine health check. For more information, see Machine Health Check.

      • Node Pool Upgrade Strategy (YAML): Activate or deactivate the node pool upgrade strategy. When activated, the existing node is deleted before the new node is created during a node pool upgrade. When deactivated, the new node is created first and the existing node is deleted afterward.
      • Kubeadmin Config Template (YAML): Activate or deactivate the Kubeadmin Config Template YAML.
        Note: The Kubeadmin Config Template (YAML) field is enabled by default for a Classy Single Node Cluster because this cluster is deployed at the RAN edge site and is used to instantiate vDU CNF pods. Therefore, you must configure the static CPU manager policy on the Kubernetes node. The following YAML code configures the Kubernetes node:
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              cpu-manager-policy: static
              kube-reserved: 'cpu=1,memory=1Gi'
              system-reserved: 'cpu=1,memory=1Gi'
        
        Note:

        For CPU-intensive workloads, use Static as the CPU Manager Policy.

        For information on controlling CPU Management Policies on the nodes, see the Kubernetes documentation.
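        With cpu-manager-policy set to static, Kubernetes grants exclusive (pinned) CPUs only to containers in the Guaranteed QoS class that request an integer number of CPUs. A minimal hypothetical pod spec that would receive pinned CPUs; the pod name and image are placeholders:

          apiVersion: v1
          kind: Pod
          metadata:
            name: vdu-sample                        # hypothetical pod name
          spec:
            containers:
              - name: vdu
                image: registry.example.com/vdu:1.0 # placeholder image
                resources:
                  requests:
                    cpu: "4"                        # integer CPU request: eligible for exclusive CPUs
                    memory: 8Gi
                  limits:
                    cpu: "4"                        # limits equal requests: Guaranteed QoS class
                    memory: 8Gi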

  6. Click Apply.
  7. Click Next > Deploy New Node Pool(s).

Results

You have successfully added a node pool to the Kubernetes cluster instance.