You can configure a Supervisor with the vSphere networking stack or with NSX to provision connectivity to the Kubernetes control planes, services, and workloads.
A Supervisor that uses the vSphere networking stack is backed by a vSphere Distributed Switch and requires a load balancer to provide connectivity to DevOps users and external services. The NSX Advanced Load Balancer and the HAProxy load balancers are supported for vSphere 7.0 Update 2.
A Supervisor that is configured with NSX uses the software-based networks of the solution and an NSX Edge load balancer to provide connectivity to external services and DevOps users.
Configuring NSX for vSphere with Tanzu
vSphere with Tanzu requires specific networking configuration to allow you to connect to the Supervisors, vSphere Namespaces, and all objects that run inside the namespaces.
Follow the instructions for installing and configuring NSX for managing Kubernetes workloads documented in the Installing and Configuring vSphere with Tanzu guide.
First, you need to create a vSphere Distributed Switch and a distributed port group for each NSX Edge uplink. To automate this step, use the Web Services APIs as described in the vSphere Web Services SDK Programming Guide. Then, you can use the NSX REST APIs to add a compute manager, create transport zones, and perform other steps required for configuring NSX for vSphere with Tanzu.
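As a minimal sketch of the compute manager step, the snippet below builds the JSON request body that the NSX REST API expects when you register vCenter Server as a compute manager (a POST to /api/v1/fabric/compute-managers in the NSX-T Data Center REST API). The host name, credentials, and thumbprint are placeholders; sending the request itself is omitted.

```java
// Sketch: request body for registering vCenter Server as a compute manager
// through the NSX REST API (POST /api/v1/fabric/compute-managers).
// All values below are placeholders for illustration.
public class ComputeManagerPayload {
    static String build(String vcHost, String user, String password, String thumbprint) {
        return "{\n"
            + "  \"server\": \"" + vcHost + "\",\n"
            + "  \"origin_type\": \"vCenter\",\n"
            + "  \"credential\": {\n"
            + "    \"credential_type\": \"UsernamePasswordLoginCredential\",\n"
            + "    \"username\": \"" + user + "\",\n"
            + "    \"password\": \"" + password + "\",\n"
            + "    \"thumbprint\": \"" + thumbprint + "\"\n"
            + "  }\n"
            + "}";
    }

    public static void main(String[] args) {
        System.out.println(build("vc01.example.com", "administrator@vsphere.local",
                "placeholder-password", "AA:BB:CC"));
    }
}
```

In a real client you would send this body over HTTPS to the NSX Manager with basic authentication, then poll the returned compute manager for its registration status.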
Configuring the vSphere Networking Stack for vSphere with Tanzu
To configure a Supervisor with the vSphere networking stack, you must connect all hosts from the cluster to a vSphere Distributed Switch. Depending on your topology, you must create one or more distributed port groups on the switch and configure them as workload networks to the vSphere Namespaces on the cluster.
Workload networks provide connectivity to the nodes of Tanzu Kubernetes clusters and to the Supervisor control planes. The workload network that provides connectivity to the Supervisor control planes is called the primary workload network. Each Supervisor must have one primary workload network represented by a distributed port group.
The Supervisor control planes on the cluster use three IP addresses from the IP address range that is assigned to the primary workload network. Each node of a Tanzu Kubernetes cluster has a separate IP address assigned from the address range of the workload network that is configured with the namespace where the Tanzu Kubernetes cluster runs.
To create a vSphere Distributed Switch and port groups for configuring the vSphere networking stack of a Supervisor, you can use the vSphere Web Services APIs as described in the vSphere Web Services SDK Programming Guide documentation. When you create a distributed virtual switch, vCenter Server automatically creates one distributed virtual port group. You can use this port group as the primary workload network and use it to handle the traffic for the Supervisor control planes. Then you can create as many distributed port groups for the workload networks as your topology requires. For a topology with one isolated workload network, create one distributed port group that you will use as a network for all namespaces on the Supervisor. For a topology with isolated networks for each vSphere Namespace, create the same number of distributed port groups as the number of namespaces.
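The port group sizing rule above can be expressed as a small planning helper. Everything here is illustrative: the two topology cases come from the text, while the method and port group names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper that applies the sizing rule from the text: one shared
// distributed port group when all namespaces use a single isolated workload
// network, or one port group per vSphere Namespace when each namespace gets
// its own isolated network. The primary workload network is the port group
// that vCenter Server creates automatically with the distributed switch.
public class PortGroupPlanner {
    static List<String> workloadPortGroups(List<String> namespaces, boolean isolatePerNamespace) {
        List<String> portGroups = new ArrayList<>();
        if (isolatePerNamespace) {
            for (String ns : namespaces) {
                portGroups.add("workload-pg-" + ns); // one port group per namespace
            }
        } else {
            portGroups.add("workload-pg-shared");    // one port group shared by all namespaces
        }
        return portGroups;
    }

    public static void main(String[] args) {
        System.out.println(workloadPortGroups(List.of("dev", "test", "prod"), true));
        // → [workload-pg-dev, workload-pg-test, workload-pg-prod]
    }
}
```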
To list all workload networks available for a Supervisor and to retrieve information about the configuration of a specific workload network, use the Networks service from the vSphere Automation APIs. To associate a vSphere distributed port group with a workload network, pass the configuration through the setVsphereNetwork(NetworksTypes.VsphereDVPGNetworkSetSpec vsphereNetwork) method of the workload network SetSpec object. Use the NetworksTypes.VsphereDVPGNetworkSetSpec class to describe the configuration of the vSphere distributed port group of a specific workload network, or to retrieve its current configuration.
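The update flow can be sketched with simplified stand-in classes that mirror only the setter named in the text (setVsphereNetwork on the workload network SetSpec). The field names portgroup and gateway are assumptions for illustration; in a real client you would import the SDK's NetworksTypes classes instead of defining stand-ins.

```java
// Simplified stand-ins mirroring the shape of the Automation API classes named
// in the text (NetworksTypes.VsphereDVPGNetworkSetSpec and the workload-network
// SetSpec). Field names below are illustrative assumptions, not the SDK's
// authoritative field list.
public class WorkloadNetworkUpdate {
    static class VsphereDVPGNetworkSetSpec {
        String portgroup;   // managed object ID of the distributed port group
        String gateway;     // assumed field: gateway of the workload network
        VsphereDVPGNetworkSetSpec(String portgroup, String gateway) {
            this.portgroup = portgroup;
            this.gateway = gateway;
        }
    }

    static class SetSpec {
        VsphereDVPGNetworkSetSpec vsphereNetwork;
        // Mirrors the setter named in the text.
        void setVsphereNetwork(VsphereDVPGNetworkSetSpec vsphereNetwork) {
            this.vsphereNetwork = vsphereNetwork;
        }
    }

    public static void main(String[] args) {
        SetSpec spec = new SetSpec();
        spec.setVsphereNetwork(new VsphereDVPGNetworkSetSpec("dvportgroup-101", "10.0.0.1"));
        System.out.println(spec.vsphereNetwork.portgroup); // → dvportgroup-101
    }
}
```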
If you want to retrieve a list of the distributed switches compatible with vSphere with Tanzu on a vCenter Server system, use the DistributedSwitchCompatibility service and filter the available switches by using VSPHERE_NETWORK as the networking provider.
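Conceptually, the client-side result of such a compatibility query can be filtered as below. The Summary stand-in is a simplification: the real summaries come from the DistributedSwitchCompatibility service of the vSphere Automation API, and only the keep-compatible-switches logic is shown.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative filter over compatibility summaries: keep only the switches
// reported compatible when queried with the VSPHERE_NETWORK networking
// provider. Summary is a simplified stand-in for the service's result type.
public class CompatibleSwitches {
    record Summary(String distributedSwitch, boolean compatible) {}

    static List<String> compatibleSwitchIds(List<Summary> summaries) {
        return summaries.stream()
                .filter(Summary::compatible)
                .map(Summary::distributedSwitch)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Summary> summaries = List.of(
                new Summary("dvs-21", true),
                new Summary("dvs-22", false));
        System.out.println(compatibleSwitchIds(summaries)); // → [dvs-21]
    }
}
```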
Installing and Configuring the HAProxy Load Balancer
You can use the vSphere Automation APIs to customize the HAProxy control plane VM after you install the HAProxy in your vSphere with Tanzu environment.
If you use the vSphere networking stack in your vSphere with Tanzu environment, you need to supply your own load balancer. You can use the open source implementation of the HAProxy load balancer that VMware provides.
For more information about the prerequisites for installation and the deployment procedure through the vSphere Client, see the Installing and Configuring vSphere with Tanzu documentation.
You can use the vSphere Automation APIs to install and configure the HAProxy load balancer. You can download the latest version of the HAProxy OVA file from the VMware-HAProxy site to a content library item. For more information about how to achieve this task, see Upload an OVF or OVA Package from a Local File System to a Library Item. Then you can create a new VM from the OVA template in the content library as described in Deploy a Virtual Machine or vApp from an OVF Template in a Content Library. To configure the HAProxy load balancer settings, create a LoadBalancers.HAProxyConfigCreateSpec instance and use the following parameters.
Parameter | Description |
---|---|
setServers(List&lt;LoadBalancersTypes.Server&gt; servers) | A list of Servers that represent the endpoints for configuring the HAProxy load balancers. Each endpoint is described by a load balancer IP address and the port on the HAProxy VM on which the Data Plane API service listens. The Data Plane API service controls the HAProxy server and runs inside the HAProxy VM. The default port is 5556. Port 22 is reserved for SSH. |
setUsername(String username) | The administrator user name that is configured with the HAProxy OVA file and is used to authenticate to the HAProxy Data Plane API server. |
setPassword(char[] password) | The password for the administrator user name. |
setCertificateAuthorityChain(String certificateAuthorityChain) | The certificate in PEM format that is signed or is a trusted root of the server certificate that the Data Plane API server presents. |
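The parameters in the table above can be sketched with simplified stand-in classes. The real Server and create-spec classes ship with the vSphere Automation SDK (in the LoadBalancersTypes namespace); the stand-ins below only mirror the setters listed in the table, and all values are placeholders.

```java
import java.util.List;

// Stand-ins mirroring the setters from the table above. In a real client you
// would import the SDK's LoadBalancersTypes classes instead; the endpoint,
// credentials, and certificate below are placeholders.
public class HAProxyConfigSketch {
    static class Server {
        String host; int port;
        Server(String host, int port) { this.host = host; this.port = port; }
    }

    static class HAProxyConfigCreateSpec {
        List<Server> servers; String username; char[] password; String caChain;
        void setServers(List<Server> servers) { this.servers = servers; }
        void setUsername(String username) { this.username = username; }
        void setPassword(char[] password) { this.password = password; }
        void setCertificateAuthorityChain(String caChain) { this.caChain = caChain; }
    }

    public static void main(String[] args) {
        HAProxyConfigCreateSpec spec = new HAProxyConfigCreateSpec();
        // The Data Plane API listens on port 5556 by default; port 22 is reserved for SSH.
        spec.setServers(List.of(new Server("192.0.2.10", 5556)));
        spec.setUsername("admin");
        spec.setPassword("placeholder".toCharArray());
        spec.setCertificateAuthorityChain("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----");
        System.out.println(spec.servers.get(0).port); // → 5556
    }
}
```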
Using the NSX Advanced Load Balancer with vSphere Networking
If you use the vSphere networking stack for workload management, you can install and configure the NSX Advanced Load Balancer, also known as Avi Load Balancer, Essentials Edition, to support the Tanzu Kubernetes clusters.
For more information about how to install and configure the NSX Advanced Load Balancer through the vSphere Client, see the Installing and Configuring vSphere with Tanzu documentation.
You can use the vSphere Automation APIs to deploy the Avi Controller on your vSphere Management network. You can upload the latest version of the NSX Advanced Load Balancer OVA file to a library item from your local file system or from a URL. For more information about how to achieve this task, see Upload an OVF or OVA Package from a Local File System to a Library Item. Then you can deploy the Controller VM on your vSphere Management network from the OVA template in the content library as described in Deploy a Virtual Machine or vApp from an OVF Template in a Content Library.
To configure the NSX Advanced Load Balancer settings, create a LoadBalancers.AviConfigCreateSpec instance and use the following parameters.
Parameter | Description |
---|---|
setServer(LoadBalancersTypes.Server server) | The address of the Avi Controller that is used to configure virtual services. |
setUsername(java.lang.String username) | The administrator user name that is used for accessing the Controller VM of the NSX Advanced Load Balancer. |
setPassword(char[] password) | The password for the administrator user name. |
setCertificateAuthorityChain(java.lang.String certificateAuthorityChain) | The certificate in PEM format that is used by the Controller. You can use the certificate that you assigned during the configuration of the NSX Advanced Load Balancer. |
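As with the HAProxy spec, the AviConfigCreateSpec parameters can be sketched with stand-ins that mirror the setters in the table. The real classes come from the vSphere Automation SDK; the Controller address and credentials below are placeholders.

```java
// Stand-ins mirroring the LoadBalancers.AviConfigCreateSpec setters listed
// above. In a real client you would import the SDK's LoadBalancersTypes
// classes; all values here are placeholders for illustration.
public class AviConfigSketch {
    static class Server {
        String host; int port;
        Server(String host, int port) { this.host = host; this.port = port; }
    }

    static class AviConfigCreateSpec {
        Server server; String username; char[] password; String caChain;
        void setServer(Server server) { this.server = server; }
        void setUsername(String username) { this.username = username; }
        void setPassword(char[] password) { this.password = password; }
        void setCertificateAuthorityChain(String caChain) { this.caChain = caChain; }
    }

    public static void main(String[] args) {
        AviConfigCreateSpec spec = new AviConfigCreateSpec();
        spec.setServer(new Server("avi-controller.example.com", 443)); // Avi Controller address
        spec.setUsername("admin");
        spec.setPassword("placeholder".toCharArray());
        spec.setCertificateAuthorityChain("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----");
        System.out.println(spec.server.host); // → avi-controller.example.com
    }
}
```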