Deploy a Workload cluster using the Kubernetes cluster template.

Prerequisites

  • You require a role with Infrastructure Lifecycle Management privileges.

  • You must have uploaded the Virtual Machine template to VMware Telco Cloud Automation.

  • You must have onboarded a vSphere VIM.

  • You must have created a Management cluster or uploaded a Workload cluster template.

  • A network must be present with a DHCP range and a static IP address in the same subnet.

  • When you enable multi-zone, ensure that:

    • For the region: the vSphere Datacenter has tags attached for the selected category.

    • For the zone: the vSphere Cluster or the hosts under the vSphere Cluster have tags attached for the selected category. Ensure that the vSphere Cluster and the hosts under it do not share the same tags.

Procedure

  1. Log in to the VMware Telco Cloud Automation web interface.
  2. Go to Infrastructure > CaaS Infrastructure and click Deploy Kubernetes Cluster.
    • If you have saved a validated Workload cluster configuration that you want to replicate on this cluster, click Upload in the top-right corner and upload the JSON file. The fields are then auto-populated with this configuration information and you can edit them as required. You can also use the Copy Spec function of VMware Telco Cloud Automation instead of a JSON file. For details, see Copy Spec and Deploy New.

    • If you want to create a Workload cluster configuration from scratch, perform the following steps.

  3. Select a cloud on which you want to deploy the Kubernetes cluster.
  4. Click Next.
  5. The Select Cluster Template tab displays the available Kubernetes cluster templates. Select the Workload Kubernetes cluster template that you created.
    Note:

    If the template displays as Not Compatible, edit the template and try again.

  6. Click Next.
  7. In the Kubernetes Cluster Details tab, provide the following details:
    • Name - Enter the cluster name. The cluster name must be compliant with DNS hostname requirements as outlined in RFC-952 and amended in RFC-1123 (see the name-check sketch after this list).

    • Description (Optional) - Enter an optional description of the cluster.

    • Management Cluster - Select the Management cluster from the drop-down menu. You can also select a Management cluster deployed in a different vCenter.

    • Password - Create a password to log in to the Master node and the Worker node. The default user name is capv.

      Note:

      Ensure that the password meets the minimum requirements displayed in the UI.

    • Confirm Password - Confirm the password that you have entered.

    • OS Image With Kubernetes - The pop-up menu displays the OS image templates in your vSphere instance that meet the criteria to be used as a Tanzu Kubernetes Grid base OS image with the selected Kubernetes version. If there are no templates, ensure that you upload them to your vSphere environment.

    • Virtual IP Address - VMware Tanzu Kubernetes Grid deploys a kube-vip pod that provides load-balancing services to the cluster API server. This kube-vip pod uses a static virtual IP address to load-balance API requests across multiple nodes. Assign an IP address that is not within your DHCP range, but in the same subnet as your DHCP range (see the address-check sketch after this list).

    • Syslog Servers - Add the syslog server IP address/FQDN for capturing the infrastructure logs of all the nodes in the cluster.

    • vSphere Cluster - Select the default vSphere cluster on which the Master and the Worker nodes are deployed.

    • Resource Pool - Select the default resource pool on which the Master and Worker nodes are deployed.

    • VM Folder - Select the virtual machine folder in which the Master and Worker nodes are placed.

    • Datastore - Select the default datastore for the Master and Worker nodes to use.

    • MTU (Optional) - Select the maximum transmission unit (MTU) in bytes for the management interfaces of the control plane and node pools. If you do not select a value, the default value is 1500.

    • Domain Name Servers - Enter a valid DNS server IP address. These DNS servers are configured in the guest operating system of each node in the cluster. You can override this option on the Master node and on each Worker node pool. To add a DNS server, click Add.

    • Airgap & Proxy Settings - Use this option when you need to configure the Airgap or the Proxy environment for VMware Telco Cloud Automation. If you do not want to use the Airgap or the Proxy, select None.

      • In an air-gapped environment:

        • If you have added an air-gapped repository, select the repository using the Airgap Repository drop-down menu.

        • If you have not added an air-gapped repository yet and want to add one now, select Enter Repository Details:

          • Name - Provide a name for your repository.

          • FQDN - Enter the URL of your repository.

          • CA Certificate - If your air-gapped repository uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (see the marker-check sketch after this list).

      • In a proxy environment:

        • If you have added a proxy repository, select the repository using the Proxy Repository drop-down menu.

        • If you have not added a proxy repository yet and want to add one now, select Enter Repository Details:

          • HTTP Proxy - To route the HTTP requests through the proxy, enter the URL or full domain name of the HTTP proxy.

          • HTTPS Proxy - To route the HTTPS requests through the proxy, enter the URL or full domain name of the HTTPS proxy.

          • No Proxy - Enter the name of the local server that must bypass the proxy.

          • CA Certificate - If your proxy uses a self-signed certificate, paste the contents of the certificate in this text box. Ensure that you copy and paste the entire certificate, from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----.

    • Harbor - If you have defined a Harbor repository as a part of your Partner system, click Add > Select Repository. To add a new repository, click Add > Enter Repository Detail.

      Note:

      You can add multiple Harbor repositories.

    • NFS Client - Enter the server IP address and the mount path of the NFS client. Ensure that the NFS server is reachable from the cluster and that the mount path allows read and write access (see the reachability sketch after this list).

    • If the nodes in the Kubernetes cluster do not all have access to a shared datastore, you can enable multi-zone. To enable multi-zone, provide the following details under vSphere CSI:

      Note:

      The multi-zone feature is not supported on an existing Kubernetes cluster that is upgraded from a previous VMware Telco Cloud Automation version. It is also not supported on a workload cluster that is newly created from a Management cluster upgraded from a previous VMware Telco Cloud Automation version.

      • Enable Multi-Zone - Click the corresponding button to enable the multi-zone feature.

      • Region - Select the region from the list of categories. VMware Telco Cloud Automation retrieves the categories created on the VMware vSphere server and displays them.

        Note:

        If you cannot find the region in the list, click Force Refresh to obtain the latest list of categories from the VMware vSphere server.

      • Zone - Select the zone from the list of categories. VMware Telco Cloud Automation retrieves the zones created on the VMware vSphere server and displays them.

        Note:

        If you cannot find the zone in the list, click Force Refresh to obtain the latest list of categories from the VMware vSphere server.

    • vSphere CSI Datastore (Optional) - Select the vSphere CSI datastore. This datastore must be accessible from all the nodes in the cluster and is provided as a parameter to the default StorageClass. When you enable multi-zone, the vSphere CSI Datastore option is disabled.
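
    The Name field above must follow the DNS hostname rules of RFC-952 and RFC-1123. The following Python sketch is a hypothetical pre-check that applies the common lowercase DNS-label reading of those rules; the exact validation that the UI enforces may differ.

      import re

      # Lowercase DNS label: alphanumerics and hyphens, starting and ending with an
      # alphanumeric character, at most 63 characters (a conservative RFC-1123 reading).
      RFC1123_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

      def is_valid_cluster_name(name: str) -> bool:
          return bool(RFC1123_LABEL.match(name))

      print(is_valid_cluster_name("workload-cluster-01"))  # True
      print(is_valid_cluster_name("Workload_Cluster"))     # False: uppercase and underscore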
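
    The Virtual IP Address above must sit in the same subnet as the DHCP range while staying outside that range. A minimal Python sketch of that check, using placeholder subnet, DHCP range, and candidate values:

      import ipaddress

      subnet = ipaddress.ip_network("192.168.10.0/24")        # placeholder node subnet
      dhcp_start = ipaddress.ip_address("192.168.10.100")     # placeholder DHCP range start
      dhcp_end = ipaddress.ip_address("192.168.10.200")       # placeholder DHCP range end
      candidate_vip = ipaddress.ip_address("192.168.10.50")   # proposed static virtual IP

      in_subnet = candidate_vip in subnet                            # same subnet as the DHCP range
      outside_dhcp = not (dhcp_start <= candidate_vip <= dhcp_end)   # not handed out by DHCP
      print("usable as virtual IP:", in_subnet and outside_dhcp)     # True for these values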
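
    The CA Certificate fields above require the complete certificate text. The following hypothetical Python check only confirms that the standard PEM markers are present in the text you plan to paste; it does not validate the certificate itself, and the file path is a placeholder.

      # Read the certificate you intend to paste into the CA Certificate text box.
      with open("ca.crt") as f:          # placeholder path to the certificate file
          pem_text = f.read()

      complete = ("-----BEGIN CERTIFICATE-----" in pem_text
                  and "-----END CERTIFICATE-----" in pem_text)
      print("certificate markers present:", complete)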
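
    The NFS Client settings above assume an NFS server that every cluster node can reach. A minimal Python reachability sketch against the standard NFS TCP port (2049), with a placeholder server address; it does not verify export permissions or read/write access on the mount path.

      import socket

      nfs_server = "192.0.2.10"   # placeholder NFS server address
      try:
          with socket.create_connection((nfs_server, 2049), timeout=5):
              print("NFS port 2049 reachable on", nfs_server)
      except OSError as err:
          print("NFS server not reachable:", err)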

  8. Click Next.
  9. In the Control Plane Node Configuration tab, provide the following details:
    • vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the Master node, select it here.

    • Resource Pool (Optional) - If you want to use a different resource pool for the Master node, select it here.

    • Datastore (Optional) - If you want to use a different datastore for the Master node, select it here.

    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled and that it can access the VMware Photon repository.

    • Domain Name Servers - You can override the DNS servers specified in the cluster details. To add a DNS server, click Add.

  10. Click Next.
  11. In the Worker Node Configuration tab, provide the following details for each node pool defined in the template:
    • vSphere Cluster (Optional) - If you want to use a different vSphere Cluster for the Worker node, select it here.

    • Resource Pool (Optional) - If you want to use a different resource pool for the Worker node, select it here.

    • Datastore (Optional) - If you want to use a different datastore for the Worker node, select it here.

    • Network - Associate a management or a private network. Ensure that the management network connects to a network where DHCP is enabled and that it can access the VMware Photon repository.

  12. Click Next and review the configuration. You can download the configuration and reuse it for deploying a cluster with a similar configuration.
  13. Click Deploy.

    If the operation is successful, the cluster is created and its status changes to Active. If the operation fails, the cluster status changes to Not Active. If cluster creation fails, delete the cluster, upload the previously downloaded configuration, and recreate the cluster.

Results

The Workload cluster is deployed and VMware Telco Cloud Automation automatically pairs it with the cluster's site.

What to do next

  • You can view the Kubernetes clusters deployed through VMware Telco Cloud Automation from the Kubernetes Cluster tab.

  • To view more details of the Kubernetes cluster that you have deployed, go to CaaS Infrastructure > Cluster Instances and click the cluster.