Starting with vSphere 7.0 Update 1, you can select between creating a Supervisor with the vSphere networking stack or with NSX as the networking solution. A Supervisor that is configured with the vSphere networking stack only supports Tanzu Kubernetes clusters. vSphere Pods are not supported.
To enable a cluster configured with the vSphere networking stack for Kubernetes workload management, you must use the services under the namespace_management package.
Prerequisites
- Verify that your environment meets the system requirements for enabling vSphere IaaS control plane on the cluster. For more information about the requirements, see the documentation.
- Verify that DRS is enabled in fully automated mode and HA is also enabled on the cluster.
- Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
- Create storage policies for the placement of Kubernetes control planes.
- Create a subscribed content library on the vCenter Server system to accommodate the VM image that is used for creating nodes of Tanzu Kubernetes clusters. See Creating, Securing, and Synchronizing Content Libraries for Tanzu Kubernetes Releases.
- Add all hosts from the cluster to a vSphere Distributed Switch and create port groups for workload networks. See Configuring the vSphere Networking Stack for vSphere IaaS control plane.
- Configure an HAProxy load balancer instance that is routable to the vSphere Distributed Switch that is connected to the hosts from the vSphere cluster.
- Verify that the user who you use to access the vSphere Automation services has the Namespaces.Manage privilege on the cluster.
Procedure
- Retrieve the ID of the cluster whose hosts were added to the vSphere Distributed Switch.
Use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/cluster-compatibility request to filter the clusters by network provider. To retrieve a list of all clusters in the vCenter Server system that are configured with the vSphere networking stack, set the network provider in the filter specification to VSPHERE_NETWORK.
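The filtering step can be sketched as follows. The endpoint and the network_provider filter follow the vSphere Automation REST API described above; the vCenter address, the response items, and the cluster IDs are illustrative placeholders, and the response is modeled as local data rather than an actual HTTP call.

```python
# Step 1 sketch: filter cluster-compatibility results by network provider.
url = ("https://vcenter.example.com"  # placeholder vCenter address
       "/api/vcenter/namespace-management/cluster-compatibility")
params = {"network_provider": "VSPHERE_NETWORK"}

# Hypothetical response body: each item reports a cluster and whether it
# can host a Supervisor with the selected networking stack.
response_body = [
    {"cluster": "domain-c9", "compatible": True, "incompatibility_reasons": []},
    {"cluster": "domain-c21", "compatible": False,
     "incompatibility_reasons": ["DRS is not enabled in fully automated mode."]},
]

# Keep only the IDs of clusters that can host a Supervisor.
compatible_cluster_ids = [item["cluster"]
                          for item in response_body
                          if item["compatible"]]
```

The incompatibility_reasons list is useful when a cluster you expect to see is filtered out, for example because DRS or HA is not configured as required by the prerequisites.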
- Retrieve the IDs of the tag-based storage policies that you configured for vSphere IaaS control plane.
Use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/storage/policies request to retrieve a list of all storage policies, and then filter the policies to get the IDs of the policies that you configured for the Supervisor.
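A minimal sketch of this filtering, assuming the response items carry a policy ID and a name. The endpoint is from the vSphere Automation REST API; the policy names and IDs are placeholders, and the response is modeled as local data.

```python
# Step 2 sketch: pick the IDs of the storage policies created for the Supervisor
# out of the full policy list returned by the storage policies endpoint.
url = "https://vcenter.example.com/api/vcenter/storage/policies"  # placeholder host

# Hypothetical response: every storage policy visible in vCenter Server.
all_policies = [
    {"policy": "aa6d5a82-1c88-45da-85d3-3d74b91a5bad", "name": "supervisor-control-plane"},
    {"policy": "9626e4a7-0c23-4db6-9e6b-0c6c5f4a831f", "name": "gold-vm-storage"},
]

# Filter by the names you assigned when creating the tag-based policies.
wanted_names = {"supervisor-control-plane"}
policy_ids = [p["policy"] for p in all_policies if p["name"] in wanted_names]
```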
- Retrieve the ID of the port group that you configured for the management network traffic.
Use the GET https://<vcenter_ip_address_or_fqdn>/api/vcenter/namespace-management/clusters/<cluster_id>/networks request to list the networks visible on the vCenter Server instance that match certain criteria, and then retrieve the ID of the management network that you previously configured.
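This lookup can be sketched as below. The endpoint shape follows the vSphere Automation REST API; the cluster ID, port group names, and network IDs are placeholders, with the response again modeled as local data.

```python
# Step 3 sketch: find the management port group among the networks visible
# to the cluster retrieved in Step 1.
cluster_id = "domain-c9"  # placeholder cluster ID from Step 1
url = ("https://vcenter.example.com/api/vcenter/namespace-management/clusters/"
       f"{cluster_id}/networks")

# Hypothetical response items: port groups reachable by the cluster hosts.
networks = [
    {"network": "dvportgroup-101", "name": "management-pg"},
    {"network": "dvportgroup-102", "name": "workload-pg"},
]

# Select the network you configured for management traffic by its name.
management_network_id = next(n["network"] for n in networks
                             if n["name"] == "management-pg")
```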
- Create a Supervisor enable specification and define the parameters of the Supervisor that you want to enable.
You must specify the following required parameters of the enable specification:
- Supervisor size. You must set a size for the Supervisor, which affects the resources allocated to the Kubernetes infrastructure. The cluster size also determines the default maximum values for the IP address ranges of the vSphere Pod and Kubernetes services running in the cluster. You can use the GET https://<server>/api/vcenter/namespace-management/cluster-size-info request to retrieve information about the default values associated with each cluster size.
- Storage policy settings and file volume support. To specify the ID of the storage policy that you created to control the placement of the Supervisor control plane, use the master_storage_policy property. Optionally, you can activate file volume support by using the cns_file_config property. See Enabling ReadWriteMany Support.
- Load balancer. To specify the user-provisioned load balancer configuration for the cluster, use the load_balancer_config_spec parameter of the enable specification. You must specify the following parameters of the LoadBalancersTypes.ConfigSpec specification:
  - id. A user-friendly name of the load balancer. The name must be an alphanumeric string with a maximum length of 63 characters that is unique across the namespaces in the vCenter Server instance.
  - provider. The type of the load balancer that you want to use. In vSphere 7.0 Update 2, you can choose between the HAProxy load balancer and the NSX Advanced Load Balancer. Pass one of the following constants as the value of this parameter: HA_PROXY or AVI.
  - address_ranges. The IP address ranges in CIDR format from which HAProxy allocates the IP addresses for the virtual servers. You must provide at least one IP range that is reserved by HAProxy. The CIDR range specified with this parameter must not overlap with the IPs allocated for the Kubernetes control planes and workloads, and must be on a separate subnet.
  - ha_proxy_config_create_spec. The HAProxy runtime configuration. See Installing and Configuring the HAProxy Load Balancer.
  - avi_config_create_spec. The NSX Advanced Load Balancer configuration. See Using the NSX Advanced Load Balancer with vSphere Networking.
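A minimal sketch of the load balancer fragment of the enable specification, expressed as the JSON body you would send over the REST API. The top-level field names follow the parameters listed above; the nested HAProxy fields, addresses, credentials, and certificate value are illustrative placeholders.

```python
# Sketch: HAProxy load balancer configuration fragment of the enable spec.
lb_config_spec = {
    "id": "haproxy-lb",              # user-friendly name, <= 63 alphanumeric chars
    "provider": "HA_PROXY",          # or "AVI" for the NSX Advanced Load Balancer
    "address_ranges": [
        # Virtual-server IP range reserved by HAProxy; must not overlap with
        # control plane or workload IPs, and must be on a separate subnet.
        {"address": "192.168.100.32", "count": 32},
    ],
    "ha_proxy_config_create_spec": {           # HAProxy runtime configuration
        "servers": [{"host": "192.168.10.5", "port": 5556}],  # data-plane API endpoint
        "username": "haproxy-api-user",
        "password": "********",
        "certificate_authority_chain": "-----BEGIN CERTIFICATE-----\n...",
    },
}
```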
- Management network settings. Configure the network parameters for the Kubernetes control planes.
  - network_provider. Specify the networking stack to use when the Supervisor is created. To use the vSphere networking stack for the cluster, set VSPHERE_NETWORK.
  - master_management_network. Enter the cluster network specification for the Supervisor control plane. You must enter values for the following required properties:
    - network. Use the management network ID retrieved in Step 3.
    - mode. Set STATICRANGE or DHCP as the IPv4 address assignment mode. The DHCP mode allows an IPv4 address to be assigned automatically to the Supervisor control plane by a DHCP server. You must also set the floating IP address used by the HA primary cluster by using floating_IP. Use the DHCP mode only for test purposes. The STATICRANGE mode allows the Supervisor control plane to have a stable IPv4 address and can be used in a production environment.
  - master_DNS. Enter a list of the DNS server addresses to be used by the Supervisor control plane. If your vCenter Server instance is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor. List the DNS addresses in order of preference.
  - master_DNS_search_domains. Set a list of domain names that DNS searches inside the Kubernetes control plane nodes, so that the DNS server can resolve them. Order the domains in the list by preference.
  - master_NTP_servers. Specify a list of IP addresses or DNS names of the NTP servers that you use in your environment, if any. Make sure that you configure the same NTP servers for the vCenter Server instance, all hosts in the cluster, and vSphere IaaS control plane. If you do not set an NTP server, VMware Tools time synchronization is enabled.
- Workload network settings. Configure the settings for the network that will handle the networking traffic for Kubernetes workloads running on the Supervisor.
  - service_cidr. Specify the CIDR block from which the IP addresses for Kubernetes services are allocated. The IP range must not overlap with the ranges of the vSphere Pods, ingress, egress, or other services running in the data center. For the Kubernetes services and the vSphere Pods, you can use the default values, which are based on the cluster size that you specify.
  - workload_networks_spec. Enter the workload network specifications for the cluster. To configure the primary workload network that is used to expose the Supervisor control plane to DevOps and other workloads, create a NetworksTypes.CreateSpec instance with the following parameters:
    - network. The name of the vSphere Distributed Switch that is associated with the hosts in the cluster. The name must be a unique alphanumeric string that does not exceed 63 characters.
    - network_provider. Pass VSPHERE_NETWORK as the value of this parameter.
    - vsphere_network. Optionally, you can create a vsphere_DVPG_network_create_spec instance to describe the configuration of the namespace network backed by the vSphere Distributed port group. You must define the following parameters for the vSphere Distributed port group specification:
      - portgroup. Specify the port group that serves as the primary network of the Supervisor.
      - address_ranges. Set the IP ranges for allocating IP addresses to the Kubernetes control planes and workloads. You must use unique IP ranges for each workload network.
      - gateway. Set the gateway of the primary network.
      - subnet_mask. Specify the subnet mask of the network.
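The workload network fragment can be sketched as below. The field names follow the parameters listed above; the "supervisor_primary_workload_network" key, the service_cidr layout, and all names and addresses are illustrative placeholders.

```python
# Sketch: workload network fragment of the enable spec.
workload_settings = {
    "service_cidr": {"address": "10.96.0.0", "prefix": 23},  # Kubernetes services range
    "workload_networks_spec": {
        "supervisor_primary_workload_network": {
            "network": "workload-net",             # unique, <= 63 alphanumeric chars
            "network_provider": "VSPHERE_NETWORK",
            "vsphere_network": {                   # distributed port group backing
                "portgroup": "dvportgroup-102",    # primary workload port group
                "address_ranges": [{"address": "10.20.30.40", "count": 120}],
                "gateway": "10.20.30.1",
                "subnet_mask": "255.255.255.0",
            },
        }
    },
}
```

Each workload network must use an IP range that does not overlap with the management network, the service CIDR, or the load balancer virtual-server range configured earlier.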
- Content library settings. Add the subscribed content library that contains the VM images for deploying the nodes of Tanzu Kubernetes clusters. See Creating, Securing, and Synchronizing Content Libraries for Tanzu Kubernetes Releases. To set the library, use default_kubernetes_service_content_library and pass the subscribed content library ID.
- Enable the Supervisor by passing the enable specification to the Clusters service.
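The final call can be sketched as below: the assembled enable specification is sent with a POST to the Clusters service. The endpoint shape and the action=enable query parameter follow the vSphere Automation REST API; the cluster ID, session token, and spec values are placeholders, and the request is assembled but not actually sent.

```python
import json

# Sketch: POST the assembled enable spec to the Clusters service.
cluster_id = "domain-c9"  # placeholder cluster ID from Step 1
url = ("https://vcenter.example.com/api/vcenter/namespace-management/clusters/"
       f"{cluster_id}?action=enable")

# Abridged enable spec; real requests combine all fragments from the steps above.
enable_spec = {
    "size_hint": "SMALL",
    "network_provider": "VSPHERE_NETWORK",
    "master_storage_policy": "aa6d5a82-1c88-45da-85d3-3d74b91a5bad",  # from Step 2
    "default_kubernetes_service_content_library": "lib-7f2c4e9a",     # placeholder
    # ...plus the load balancer, management network, and workload network
    # fragments assembled in the previous steps.
}

headers = {
    "vmware-api-session-id": "<session-token>",  # placeholder authentication token
    "Content-Type": "application/json",
}
body = json.dumps(enable_spec)
```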
Results
A task runs on vCenter Server to enable vSphere IaaS control plane on the cluster. Once the task completes, three Kubernetes control plane VMs are created on the hosts that are part of the cluster.
What to do next
Create and configure namespaces on the Supervisor.