Deploy a Workload cluster using the Kubernetes cluster template.

Prerequisites

  • You require a role with Infrastructure Lifecycle Management privileges.
  • You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.
  • You must have onboarded a vSphere VIM.
  • You must have created a Management cluster or uploaded a Workload cluster template.
  • A network must be present with a DHCP range and a static IP address in the same subnet.
  • When you enable multi-zone, ensure that:
    • For region: the vSphere Datacenter has tags attached for the selected category.
    • For zone: the vSphere Cluster or the hosts under the vSphere cluster have tags attached for the selected category. Ensure that the vSphere Cluster and the hosts under it do not share the same tags. A tagging example follows this list.
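    For example, one way to create these tag categories and tags and attach them is with the govc CLI, as sketched below. The category names (k8s-region, k8s-zone), tag names (region-a, zone-1), and inventory paths are illustrative assumptions, not required values; substitute the names and paths from your environment.

      # Create one tag category for regions and one for zones (names are examples)
      govc tags.category.create -d "Kubernetes region" k8s-region
      govc tags.category.create -d "Kubernetes zone" k8s-zone

      # Create a tag in each category
      govc tags.create -c k8s-region region-a
      govc tags.create -c k8s-zone zone-1

      # Attach the region tag to the vSphere Datacenter and the zone tag to the vSphere Cluster
      govc tags.attach region-a /my-datacenter
      govc tags.attach zone-1 /my-datacenter/host/my-cluster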

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure and click Deploy Cluster.
  3. From the drop-down menu, select Workload Cluster (v1).
    • If you have saved a validated Workload cluster configuration that you want to replicate on this cluster, click Upload on the top-right corner and upload the JSON file. The fields are then auto-populated with this configuration information and you can edit them as required. A hypothetical sketch of such a file follows this step. You can also use the Copy Spec function of VMware Telco Cloud Automation instead of a JSON file. For details, see Copy Spec and Deploy New.
    • If you want to create a Workload cluster configuration from the beginning, perform the next steps.
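    A minimal, hypothetical sketch of what the uploaded JSON file can look like is shown below. The field names are placeholders for illustration only and do not reflect the actual VMware Telco Cloud Automation schema; always start from a configuration file downloaded from a validated deployment (see step 13) rather than authoring one by hand.

      {
        "name": "workload-cluster-01",
        "clusterTemplate": "my-workload-template",
        "managementCluster": "mgmt-cluster-01"
      }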
  4. Select a cloud on which you want to deploy the Kubernetes cluster.
  5. Click Next.
  6. The Select Cluster Template tab displays the available Kubernetes cluster templates. Select the Workload Kubernetes cluster template that you have created.
    Note: If the template displays as Not Compatible, edit the template and try again.
  7. Click Next.
  8. In the Kubernetes Cluster Details tab, provide the following details:
    • Name - Enter the cluster name. The cluster name must be compliant with DNS hostname requirements as outlined in RFC-952 and amended in RFC-1123. For example, workload-cluster-01 is a valid name, but a name containing underscores or spaces, such as workload_cluster, is not.
    • Description (Optional) - Enter an optional description of the cluster.
    • Management Cluster - Select the Management cluster from the drop-down menu. You can also select a Management cluster deployed in a different vCenter.
      Note: The options available for the Workload cluster depend on the configuration of the Management cluster.
    • Password - Create a password to log in to the Master node and the Worker nodes. The default user name is capv. For example, you can later connect to a node over SSH with ssh capv@<node-ip>.
      Note: Ensure that the password meets the minimum requirements displayed in the UI.
    • Confirm Password - Confirm the password that you have entered.
    • OS Image With Kubernetes - The pop-up menu displays the OS image templates in your vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS image with the selected Kubernetes version. If there are no templates, ensure that you upload them to your vSphere environment.
    • IP Version - Select whether to use IPv4 or IPv6 for the cluster deployment from the drop-down list.
    • Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that provides load-balancing services to the cluster API server. This kube-vip pod uses a static virtual IP address to load-balance API requests across multiple nodes. Assign an IP address that is not within your DHCP range, but in the same subnet as your DHCP range. For example, if the subnet is 192.168.10.0/24 and the DHCP range is 192.168.10.100-192.168.10.200, an address such as 192.168.10.50 is a valid choice.
    • Harbor Repository - If you have defined a Harbor repository as a part of your Partner system, select the Harbor repository. The Harbor repository details are configured on all Master and Worker nodes.
    • Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure logs of all the nodes in the cluster.
    • vSphere Cluster - Select the default vSphere cluster on which the Master and Worker nodes are deployed.
    • Resource Pool - Select the default resource pool on which the Master and Worker nodes are deployed.
    • VM Folder - Select the virtual machine folder on which the Master and Worker nodes are placed.
    • Datastore - Select the default datastore for the Master and Worker nodes to use.
    • (Optional) MTU - Select the maximum transmission unit (MTU) in bytes for management interfaces of control planes and node pools. If you do not select a value, the default value is 1500.
    • Domain Name Servers - Enter a valid DNS IP address. These DNS servers are configured in the guest operating system of each node in the cluster. You can override this option on the Master node and each node pool of the Worker node. To add a DNS, click Add.
    • Airgap Repository - In an air-gapped environment:
      • If you have added an air-gapped repository, select the repository using the Airgap Repository drop-down menu.
      • If you have not added an air-gapped repository yet and want to add one now, select Enter Repository Details:
        • Name - Provide a name for your repository.
        • FQDN - Enter the URL of your repository.
        • (Optional) CA Certificate - If your air-gapped repository uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----. For the expected format, see the certificate example after this list.
      • Proxy Settings - In an air-gapped environment:
        • If you have added a proxy, select the proxy using the Proxy Repository drop-down menu.
        • If you have not added a proxy yet and want to add one now, select Enter Proxy Details:
          • HTTP Proxy - Enter the FQDN or the IP address of the proxy server that handles HTTP requests. You must use the format FQDN:Port or IP:Port, for example proxy.example.com:3128.
          • HTTPS Proxy - To route HTTPS requests through a proxy, enter the URL or the full domain name of the HTTPS proxy. You must use the format FQDN:Port or IP:Port.
          • (Optional) No Proxy - Enter the FQDNs or the IP addresses of the systems that can bypass the proxy server.
            Note: You must add the cluster node network CIDR, vCenter FQDNs, Harbor FQDNs, and any other host that must bypass the proxy to this list, for example 192.168.10.0/24,vcenter01.example.com,harbor.example.com.
          • (Optional) CA Certificate - If your proxy uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----. For the expected format, see the certificate example after this list.
    • NFS Client - Enter the server IP address and the mount path of the NFS share, for example server 192.168.20.5 with mount path /exports/k8s. Ensure that the NFS server is reachable from the cluster and that the mount path allows read and write access.
      Note: For an IPv6 cluster, you must use the NFS server FQDN instead of an IP address.
    • If all the nodes in the Kubernetes cluster do not have access to a shared datastore, you can enable multi-zone. To enable multi-zone, provide the following details in the vSphere CSI section:
      Note: The multi-zone feature is not supported on an existing Kubernetes cluster that is upgraded from a previous VMware Telco Cloud Automation version. It is also not supported on a newly created Workload cluster from a Management cluster that is upgraded from a previous VMware Telco Cloud Automation version.
      • Enable Multi-Zone - Click the corresponding button to enable the multi-zone feature.
      • Region - Select the region from the list of categories. VMware Telco Cloud Automation obtains the information about the categories created in the VMware vSphere server and displays the list.
        Note: If you cannot find the region in the list, click Force Refresh to obtain the latest list of categories from the VMware vSphere server.
      • Zone - Select the zone from the list of categories. VMware Telco Cloud Automation obtains the information about the zones created in the VMware vSphere server and displays the list.
        Note: If you cannot find the zone in the list, click Force Refresh to obtain the latest list of categories from the VMware vSphere server.
    • vSphere CSI Datastore (Optional) - Select the vSphere CSI datastore. This datastore must be accessible from all the nodes in the cluster and is provided as a parameter to the default Storage Class. When you enable multi-zone, the vSphere CSI Datastore option is disabled.
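    The CA Certificate fields in this step expect a certificate in PEM format. A PEM-encoded certificate looks like the following, where the body is the Base64-encoded certificate, shortened here for illustration:

      -----BEGIN CERTIFICATE-----
      MIIDdzCCAl+gAwIBAgIE... (Base64-encoded certificate body) ...
      -----END CERTIFICATE-----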
  9. Click Next.
  10. In the Control Plane Node Configuration tab, provide the following details:
    • vSphere Cluster (Optional) - If you want to use a different vSphere cluster for the Master node, select the vSphere cluster here.
    • Resource Pool (Optional) - If you want to use a different resource pool for the Master node, select the resource pool here.
    • Datastore (Optional) - If you want to use a different datastore for the Master node, select the datastore here.
    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled, and can access the VMware Photon repository.
    • Domain Name Servers - You can override the DNS. To add a DNS, click Add.
  11. Click Next.
  12. In the Worker Node Configuration tab, provide the following details for each node pool defined in the template:
    • vSphere Cluster (Optional) - If you want to use a different vSphere cluster for the Worker node, select the vSphere cluster here.
    • Resource Pool (Optional) - If you want to use a different resource pool for the Worker node, select the resource pool here.
    • Datastore (Optional) - If you want to use a different datastore for the Worker node, select the datastore here.
    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled, and can access the VMware Photon repository.
  13. Click Next and review the configuration. You can download the configuration and reuse it for deploying a cluster with a similar configuration.
  14. Click Deploy.
    If the operation is successful, the cluster is created and its status changes to Active. If the operation fails, the cluster status changes to Not Active. If cluster creation fails, delete the cluster, upload the previously downloaded configuration, and recreate the cluster. An optional verification example follows.
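    As an optional check once the cluster is Active, you can verify that all nodes joined the cluster from any workstation with kubectl installed, assuming you have obtained the cluster's kubeconfig file from VMware Telco Cloud Automation. The kubeconfig file name below is illustrative:

      kubectl --kubeconfig workload-cluster.kubeconfig get nodes -o wide

    All Master and Worker nodes should report the Ready status.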

Results

The Workload cluster is deployed and VMware Telco Cloud Automation automatically pairs it with the cluster's site.

What to do next

  • You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation from the Kubernetes Cluster tab.
  • To view more details of the Kubernetes cluster that you have deployed, go to CaaS Infrastructure > Cluster Instances and click the cluster.