Deploy a Workload cluster using the Kubernetes cluster template.

Prerequisites

  • You require a role with Infrastructure Lifecycle Management privileges.
  • You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.
  • You must have onboarded a vSphere VIM.
  • You must have created a Management cluster or uploaded a Workload cluster template.
  • A network must be present with a DHCP range and a static IP address in the same subnet.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure and click Deploy Kubernetes Cluster.
    • If you have saved a validated Workload cluster configuration that you want to replicate on this cluster, click Upload on the top-right corner and upload the JSON file. The fields are then auto-populated with this configuration information, and you can edit them as required. Instead of a JSON file, you can also use the Copy Spec function of VMware Telco Cloud Automation. For details, see Copy Spec and Deploy New.
    • If you want to create a Workload cluster configuration from the beginning, perform the next steps.
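If you replicate a saved configuration across clusters, you may want to adjust it before uploading, since each cluster needs a unique name. A minimal sketch using only the Python standard library; the file names and the "name" field are illustrative, and the actual exported schema may differ:

```python
import json

def rename_cluster_spec(path_in, path_out, new_name):
    """Load a previously downloaded cluster configuration, change the
    cluster name, and write a copy that is ready for re-upload.
    The "name" field is an assumption about the exported schema."""
    with open(path_in) as f:
        spec = json.load(f)
    spec["name"] = new_name
    with open(path_out, "w") as f:
        json.dump(spec, f, indent=2)
    return spec
```

The same pattern works for any other field you want to vary between otherwise identical clusters.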
  3. Select a cloud on which you want to deploy the Kubernetes cluster.
  4. Click Next.
  5. The Select Cluster Template tab displays the available Kubernetes cluster templates. Select the Workload Kubernetes cluster template that you have created.
    Note: If the template displays as Not Compatible, edit the template and try again.
  6. Click Next.
  7. In the Kubernetes Cluster Details tab, provide the following details:
    • Name - Enter the cluster name. The cluster name must be compliant with DNS hostname requirements as outlined in RFC-952 and amended in RFC-1123.
    • Description (Optional) - Enter an optional description of the cluster.
    • Management Cluster - Select the Management cluster from the drop-down menu. You can also select a Management cluster that is deployed in a different vCenter.
    • Password - Create a password to log in to the Master node and Worker node. The default user name is capv.
      Note: Ensure that the password meets the minimum requirements displayed in the UI.
    • Confirm Password - Confirm the password that you have entered.
    • OS Image With Kubernetes - The pop-up menu displays the OS image templates in your vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS image with the selected Kubernetes version. If there are no templates, ensure that you upload them to your vSphere environment.
    • Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that provides load-balancing services to the cluster API server. This kube-vip pod uses a static virtual IP address to load-balance API requests across multiple nodes. Assign an IP address that is not within your DHCP range, but in the same subnet as your DHCP range.
    • Syslog Servers - Add the IP address or FQDN of the syslog server that captures the infrastructure logs of all the nodes in the cluster.
    • vSphere Cluster - Select the default vSphere cluster on which the Master and Worker nodes are deployed.
    • Resource Pool - Select the default resource pool on which the Master and Worker nodes are deployed.
    • VM Folder - Select the virtual machine folder on which the Master and Worker nodes are placed.
    • Datastore - Select the default datastore for the Master and Worker nodes to use.
    • Domain Name Servers - Enter valid DNS server IP addresses. These DNS servers are configured in the guest operating system of each node in the cluster. You can override this option on the Master node and on each Worker node pool. To add a DNS server, click Add.
    • Airgap Repository - In an air-gapped environment:
      • If you have added an air-gapped repository, select the repository using the Airgap Repository drop-down menu.
      • If you have not added an air-gapped repository yet and want to add one now, select Enter Repository Details:
        • Name - Provide a name for your repository.
        • FQDN - Enter the URL of your repository.
        • CA Certificate - If your air-gapped repository uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----.
    • Harbor Repository - If you have defined a Harbor repository as a part of your Partner system, select the Harbor repository. The Harbor repository details are configured on all Master and Worker nodes.
    • NFS Client - Enter the server IP address and the mount path of the NFS client. Ensure that the NFS server is reachable from the cluster and that the mount path has read and write access.
    • vSphere CSI Datastore (Optional) - Select the vSphere CSI datastore. This datastore must be accessible from all the nodes in the cluster. If you do not select a datastore, the datastore used by the Master node is selected.
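Two of the values entered in this tab are easy to get wrong: the cluster name must follow RFC-1123 hostname rules, and the virtual IP address must sit in the node subnet but outside the DHCP range. A minimal pre-flight sketch using only the Python standard library; the subnet and DHCP range values in the usage example are placeholders for your own network:

```python
import ipaddress
import re

# RFC-1123 hostname label: 1-63 characters, lowercase alphanumerics
# and hyphens, and must not start or end with a hyphen.
LABEL = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

def valid_cluster_name(name: str) -> bool:
    """Check a cluster name against RFC-1123 hostname label rules."""
    return bool(LABEL.match(name.lower()))

def valid_virtual_ip(vip: str, subnet: str, dhcp_start: str, dhcp_end: str) -> bool:
    """The kube-vip address must be inside the node subnet but
    outside the DHCP range."""
    ip = ipaddress.ip_address(vip)
    net = ipaddress.ip_network(subnet)
    in_dhcp = ipaddress.ip_address(dhcp_start) <= ip <= ipaddress.ip_address(dhcp_end)
    return ip in net and not in_dhcp
```

For example, with a node subnet of 10.0.0.0/24 and a DHCP range of 10.0.0.50-10.0.0.150, the address 10.0.0.200 is a valid choice for the virtual IP, while 10.0.0.100 is not.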
  8. Click Next.
  9. In the Master Node Configuration tab, provide the following details:
    • vSphere Cluster (Optional) - To use a different vSphere cluster for the Master node, select it here.
    • Resource Pool (Optional) - To use a different resource pool for the Master node, select it here.
    • Datastore (Optional) - To use a different datastore for the Master node, select it here.
    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled, and can access the VMware Photon repository.
    • Domain Name Servers - You can override the DNS servers specified in the Kubernetes Cluster Details tab. To add a DNS server, click Add.
  10. Click Next.
  11. In the Worker Node Configuration tab, provide the following details for each node pool defined in the template:
    • vSphere Cluster (Optional) - To use a different vSphere cluster for the Worker node, select it here.
    • Resource Pool (Optional) - To use a different resource pool for the Worker node, select it here.
    • Datastore (Optional) - To use a different datastore for the Worker node, select it here.
    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled, and can access the VMware Photon repository.
  12. Click Next and review the configuration. You can download the configuration and reuse it for deploying a cluster with a similar configuration.
  13. Click Deploy.
    If the operation is successful, the cluster is created and its status changes to Active. If the operation fails, the cluster status changes to Not Active. To retry a failed deployment, delete the cluster, upload the previously downloaded configuration, and recreate the cluster.
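Deployment takes a while, so scripts that drive this workflow usually poll until the cluster reaches one of the two terminal states described above. A generic sketch: the `get_status` callable is a placeholder you would supply yourself (for example, wrapping whatever status lookup you use), not a VMware Telco Cloud Automation API:

```python
import time

def wait_for_cluster(get_status, timeout_s=3600, interval_s=30):
    """Poll a caller-supplied status callable until the cluster reaches
    a terminal state ("Active" or "Not Active") or the timeout expires.
    How the status is fetched is up to the caller; it is not modeled here."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("Active", "Not Active"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("cluster did not reach a terminal state in time")
```

Separating the polling loop from the status lookup keeps the loop testable and reusable regardless of how the status is obtained.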

Results

The Workload cluster is deployed and VMware Telco Cloud Automation automatically pairs it with the cluster's site.

What to do next

  • You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation from the Kubernetes Cluster tab.
  • To view more details of the Kubernetes cluster that you have deployed, go to CaaS Infrastructure > Cluster Instances and click the cluster.