Through the vSphere Automation APIs, you can enable a vSphere cluster for managing Kubernetes workloads. A cluster configured with NSX supports running vSphere Pods and Tanzu Kubernetes clusters.

To enable a vSphere cluster for Kubernetes workload management, you use the services under the namespace_management package.
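
For the request examples in this procedure, the following Python sketch creates an authenticated API session. It is an illustration only, assuming the requests library; the vCenter Server FQDN and the credentials are placeholders.

    import requests

    VCENTER = "https://vcenter.example.com"  # placeholder vCenter Server FQDN

    # Create an API session. POST /api/session returns a token that
    # authenticates subsequent requests through the vmware-api-session-id
    # header.
    token = requests.post(
        f"{VCENTER}/api/session",
        auth=("administrator@vsphere.local", "placeholder-password"),
        verify=False,  # lab setups only; verify certificates in production
    ).json()
    HEADERS = {"vmware-api-session-id": token}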

Prerequisites

  • Verify that your environment meets the system requirements for enabling vSphere with Tanzu on the cluster. For more information about the requirements, see the vSphere with Tanzu Concepts and Planning documentation.

  • Verify that NSX is installed and configured. See Configuring NSX for vSphere with Tanzu.

  • Create storage policies for the placement of pod ephemeral disks, container images, and Supervisor control plane cache.

  • Verify that DRS is enabled in fully automated mode and that HA is enabled on the cluster.

  • Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.

  • Verify that the user account that you use to access the vSphere Automation services has the Modify cluster-wide configuration privilege on the cluster.

  • Create a subscribed content library on the vCenter Server system to accommodate the VM image that is used for creating the nodes of the Tanzu Kubernetes clusters.

Procedure

  1. Retrieve the IDs of the tag-based storage policies that you configured for vSphere with Tanzu.
    Use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/storage/policies request to retrieve a list of all storage policies and then filter the policies to get the IDs of the policies that you configured for the Supervisor.
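
    For example, using the session created at the beginning of this topic (the policy names in the filter are hypothetical; use the names that you assigned when creating the policies):

      # List all storage policies, then keep the IDs of the policies
      # created for the Supervisor.
      policies = requests.get(
          f"{VCENTER}/api/vcenter/storage/policies",
          headers=HEADERS, verify=False,
      ).json()
      policy_ids = {p["name"]: p["policy"] for p in policies
                    if p["name"] in ("pod-storage-policy",
                                     "image-storage-policy",
                                     "control-plane-storage-policy")}
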
  2. Retrieve the IDs of the vSphere Distributed Switch and the NSX Edge cluster that you created when configuring NSX for vSphere with Tanzu.
    Use the POST https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/networks/nsx/distributed-switches?action=check_compatibility request to list all vSphere Distributed Switches associated with the specific vSphere cluster and then retrieve the ID of the Distributed Switch that you configured to handle overlay networking for the Supervisor.

    Use the POST https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/networks/nsx/edges?action=check_compatibility request to retrieve a list of the NSX Edge clusters that are created for the specific vSphere cluster and associated with the specific vSphere Distributed Switch. Retrieve the ID of the NSX Edge cluster that has the tier-0 gateway that you want to use for the namespaces networking.
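
    The following sketch runs both compatibility checks. The request body fields are assumptions based on the filters described above; verify them against the API reference for your vCenter Server version.

      CLUSTER = "domain-c8"  # placeholder ID of the vSphere cluster

      # List the vSphere Distributed Switches compatible with the cluster.
      switches = requests.post(
          f"{VCENTER}/api/vcenter/namespace-management/networks/nsx/distributed-switches",
          params={"action": "check_compatibility"},
          json={"cluster": CLUSTER},  # assumed filter field
          headers=HEADERS, verify=False,
      ).json()

      # List the NSX Edge clusters for the cluster and the chosen switch.
      edges = requests.post(
          f"{VCENTER}/api/vcenter/namespace-management/networks/nsx/edges",
          params={"action": "check_compatibility"},
          json={"cluster": CLUSTER,
                "distributed_switch": switches[0]["distributed_switch"]},  # assumed fields
          headers=HEADERS, verify=False,
      ).json()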

  3. Retrieve the ID of the port group for the management network that you configured for the management traffic.
    To list the networks available on the vCenter Server instance that the cluster can use for management traffic, use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/clusters/<cluster_id>/networks request and then retrieve the ID of the management network that you previously configured.
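
    For example (the network field name in the response is an assumption; pick the entry that matches the management network you configured):

      # List the networks that the cluster can use for management traffic.
      networks = requests.get(
          f"{VCENTER}/api/vcenter/namespace-management/clusters/{CLUSTER}/networks",
          headers=HEADERS, verify=False,
      ).json()
      management_network = networks[0]["network"]  # assumed response field
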
  4. Create a Clusters.EnableSpec JSON object and define the parameters of the Supervisor that you want to create.

    You must specify the following required parameters of the enable specification (a combined example follows this list):

    • Storage policy settings and file volume support. The storage policy that you set for each of the following parameters ensures that the respective object is placed on the datastore referenced in the storage policy. You can use the same storage policy or different policies for the different inventory objects.

      • ephemeral_storage_policy - Specify the ID of the storage policy that you created to control the storage placement of the vSphere Pods.

      • image_storage - Set the specification of the storage policy that you created to control the placement of the cache of container images.

      • master_storage_policy - Specify the ID of the storage policy that you created to control the placement of the Supervisor control plane cache.

      Optionally, you can activate file volume support by using cns_file_config. See Enabling ReadWriteMany Support.

    • Management network settings. Configure the management traffic settings for the Supervisor control plane.

      • network_provider - Specify the networking stack that must be used when the Supervisor is created. To use NSX as the networking solution for the cluster, set NSXT_CONTAINER_PLUGIN.

      • master_management_network - Enter the cluster network specification for the Supervisor control plane. You must enter values for the following required properties:

        • network - Use the management network ID retrieved in Step 3.

        • mode - Set STATICRANGE or DHCP as the IPv4 address assignment mode. In DHCP mode, a DHCP server automatically assigns an IPv4 address to the Supervisor control plane, and you must also set the floating IP address used by the HA primary cluster through floating_IP. Use DHCP mode only for test purposes. STATICRANGE mode provides the Supervisor control plane with a stable IPv4 address, so you can use it in a production environment.

        • address_range - Optionally, configure the range of IPv4 addresses for one or more interfaces of the management network. Specify the following settings:

          • The starting IP address that must be used for reserving consecutive IP addresses for the Supervisor control plane. Use up to 5 consecutive IP addresses.

          • The number of IP addresses in the range.

          • The IP address of the gateway associated with the specified range.

          • The subnet mask to be used for the management network.

      • master_DNS - Enter a list of the DNS server addresses that must be used by the Supervisor control plane. If your vCenter Server instance is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor. Specify the DNS server addresses in order of preference.

      • master_DNS_search_domains - Set a list of domain names that DNS searches when looking up a host name in the Kubernetes API server. Order the domains in the list by preference.

      • master_NTP_servers - Specify a list of IP addresses or DNS names of the NTP servers that you use in your environment, if any. Make sure that you configure the same NTP servers for the vCenter Server instance, all hosts in the cluster, NSX, and vSphere with Tanzu. If you do not set an NTP server, VMware Tools time synchronization is enabled.

    • Workload network settings. Configure the settings for the networks for the namespaces. The namespace network settings provide connectivity to vSphere Pods and namespaces created in the Supervisor.

      • ncp_cluster_network_spec - Set the specification for a Supervisor configured with the NSX networking stack. Specify the following cluster networking configuration parameters of NCPClusterNetworkEnableSpec:

        • cluster_distributed_switch - The vSphere Distributed Switch that handles overlay networking for the Supervisor.

        • nsx_edge_cluster - The NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking.

        • nsx_tier0_gateway - The tier-0 gateway that is associated with the cluster tier-1 gateway. You can retrieve the list of NSXTier0Gateway objects associated with a particular vSphere Distributed Switch and determine the ID of the tier-0 gateway that you want to set.

        • namespace_subnet_prefix - The subnet prefix that defines the size of the subnet reserved for namespace segments. The default is 28.

        • routed_mode - Controls the NAT mode of the workload network. If set to true, the IP addresses of the workloads are directly accessible from outside the tier-0 gateway and you do not need to configure the egress CIDRs, but File Volume storage is not supported. The default is false, which means that workload traffic is NATed.

        • egress_cidrs - The external CIDR blocks from which the NSX Manager assigns the IP addresses used for performing source NAT (SNAT) from internal vSphere Pods IP addresses to external IP addresses. Only one egress IP address is assigned for each namespace in the Supervisor. These IP ranges must not overlap with the IP ranges of the vSphere Pods, ingress, Kubernetes services, or other services running in the data center.

        • ingress_cidrs - The external CIDR blocks from which the ingress IP range for the Kubernetes services is determined. These IP ranges are used for load balancer services and Kubernetes ingress. All Kubernetes ingress services in the same namespace share a common IP address. Each load balancer service is assigned a unique IP address. The ingress IP ranges must not overlap with the IP ranges of the vSphere Pods, egress, Kubernetes services, or other services running in the data center.

        • pod_cidrs - The internal CIDR blocks from which the IP ranges for vSphere Pods are determined. The IP ranges must not overlap with the IP ranges of the ingress, egress, Kubernetes services, or other services running in the data center. All vSphere Pods CIDR blocks must be of at least /23 subnet size.

      • worker_DNS - Set a list of the IP addresses of the DNS servers that must be used on the worker nodes. Use different DNS servers than the ones you set for the Supervisor control plane.

      • service_cidr - Specify the CIDR block from which the IP addresses for the Kubernetes services are allocated. The IP range must not overlap with the ranges of the vSphere Pods, ingress, egress, or other services running in the data center.

      For the Kubernetes services and the vSphere Pods, you can use the default values, which are based on the cluster size that you specify.

    • Supervisor size. You must set a size for the Supervisor, which determines the resources allocated to the Kubernetes infrastructure. The cluster size also determines the default maximum values for the IP address ranges of the vSphere Pods and Kubernetes services running in the cluster. You can use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/cluster-size-info request to retrieve information about the default values associated with each cluster size.

    • Optional. Associate the Supervisor with the subscribed content library that you created for provisioning Tanzu Kubernetes clusters. See Creating, Securing, and Synchronizing Content Libraries for Tanzu Kubernetes Releases.

      To set the library, use default_kubernetes_service_content_library and pass the subscribed content library ID.
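
    The following sketch assembles the settings from this step into one enable specification, expressed as a Python dictionary that serializes to the Clusters.EnableSpec JSON body. All IDs, addresses, and CIDR values are placeholders, and some field names and nesting details (for example, size_hint) are assumptions; verify them against the API reference for your vCenter Server version.

      enable_spec = {
          # Supervisor size; the size_hint field name is an assumption.
          "size_hint": "SMALL",
          # Storage placement; the policy IDs come from Step 1.
          "ephemeral_storage_policy": policy_ids["pod-storage-policy"],
          "image_storage": {"storage_policy": policy_ids["image-storage-policy"]},
          "master_storage_policy": policy_ids["control-plane-storage-policy"],
          # Management network settings for the Supervisor control plane.
          "network_provider": "NSXT_CONTAINER_PLUGIN",
          "master_management_network": {
              "network": management_network,  # retrieved in Step 3
              "mode": "STATICRANGE",
              "address_range": {
                  "starting_address": "192.168.10.10",
                  "address_count": 5,
                  "gateway": "192.168.10.1",
                  "subnet_mask": "255.255.255.0",
              },
          },
          "master_DNS": ["192.168.10.2"],
          "master_DNS_search_domains": ["example.com"],
          "master_NTP_servers": ["ntp.example.com"],
          # Workload network settings backed by NSX; IDs from Step 2.
          "ncp_cluster_network_spec": {
              "cluster_distributed_switch": switches[0]["distributed_switch"],
              "nsx_edge_cluster": edges[0]["edge_cluster"],  # assumed field
              "nsx_tier0_gateway": "tier0-gateway-id",  # placeholder ID
              "namespace_subnet_prefix": 28,
              "routed_mode": False,  # NAT mode
              "egress_cidrs": [{"address": "10.20.0.0", "prefix": 24}],
              "ingress_cidrs": [{"address": "10.30.0.0", "prefix": 24}],
              "pod_cidrs": [{"address": "10.244.0.0", "prefix": 21}],
          },
          "worker_DNS": ["192.168.20.2"],
          "service_cidr": {"address": "10.96.0.0", "prefix": 23},
          # Optional subscribed content library for Tanzu Kubernetes clusters.
          "default_kubernetes_service_content_library": "content-library-id",
      }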

  5. Enable vSphere with Tanzu on a specific cluster by passing the cluster enable specification to the Clusters service.
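
    For example, using the specification assembled in Step 4:

      # Enable vSphere with Tanzu on the cluster. The request returns
      # quickly; vCenter Server completes the configuration as a task.
      response = requests.post(
          f"{VCENTER}/api/vcenter/namespace-management/clusters/{CLUSTER}",
          params={"action": "enable"},
          json=enable_spec,
          headers=HEADERS, verify=False,
      )
      response.raise_for_status()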

Results

A task runs on vCenter Server that turns the cluster into a Supervisor. Once the task completes, Kubernetes control plane nodes are created on the hosts that are part of the cluster enabled with vSphere with Tanzu. You can now create vSphere Namespaces.
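
You can follow the progress by polling the cluster entry in the namespace-management service, for example (the status values in the comment are an assumption based on the common cluster states):

    # Check the Supervisor configuration status.
    info = requests.get(
        f"{VCENTER}/api/vcenter/namespace-management/clusters/{CLUSTER}",
        headers=HEADERS, verify=False,
    ).json()
    print(info["config_status"])  # for example CONFIGURING, RUNNING, or ERROR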

What to do next

Create and configure namespaces on the Supervisor. See Create a vSphere Namespace.