To configure vSphere with Tanzu with the NSX Advanced Load Balancer, also known as the Avi Load Balancer, your environment must meet certain requirements. vSphere with Tanzu supports multiple topologies for Avi networking: either a single VDS network for both the Avi Service Engines and the load balancer services, or one VDS for the Avi management plane and a separate VDS for the NSX Advanced Load Balancer.
Workload Networks
To configure a Supervisor Cluster with the vSphere Networking stack, you must connect all hosts from the cluster to a vSphere Distributed Switch. Depending on the topology that you implement for the Supervisor Cluster, you create one or more distributed port groups. You designate the port groups as Workload Networks to vSphere Namespaces.
Before you add a host to a Supervisor Cluster, you must add it to all the vSphere Distributed Switches that are part of the cluster.
Workload Networks provide connectivity to the nodes of Tanzu Kubernetes clusters and to Supervisor Cluster control plane VMs. The Workload Network that provides connectivity to Kubernetes control plane VMs is called a Primary Workload Network. Each Supervisor Cluster must have one Primary Workload Network. You must designate one of the distributed port groups as the Primary Workload Network to the Supervisor Cluster.
The Kubernetes control plane VMs on the Supervisor Cluster use three IP addresses from the IP address range that is assigned to the Primary Workload Network. Each node of a Tanzu Kubernetes cluster has a separate IP address assigned from the address range of the Workload Network that is configured with the namespace where the Tanzu Kubernetes cluster runs.
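If you prepare these port groups programmatically rather than through the vSphere Client, a minimal sketch such as the following can create a distributed port group on an existing VDS with pyVmomi. The vCenter Server address, credentials, switch name tanzu-vds, port group name, and VLAN ID are placeholder assumptions, not values from this guide.

```python
# Minimal sketch: create a distributed port group to use as a Workload Network.
# Assumes pyVmomi is installed; the vCenter address, credentials, switch name
# "tanzu-vds", port group name, and VLAN ID are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_vds(content, name):
    """Return the first vSphere Distributed Switch with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        return next(d for d in view.view if d.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    vds = find_vds(si.RetrieveContent(), "tanzu-vds")
    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="workload-pg-1",       # one port group per Workload Network
        type="earlyBinding",
        numPorts=128,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                vlanId=100, inherited=False)))
    WaitForTask(vds.AddDVPortgroup_Task([pg_spec]))
finally:
    Disconnect(si)
```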
Networking Requirements
- The Management Network. The Management Network is where the Avi Controller, also called the Controller, resides. The Management Network provides the Controller with connectivity to the vCenter Server, ESXi hosts, and the Supervisor Cluster control plane nodes. This network is where the management interface of Avi Service Engine is placed. This network requires a vSphere Distributed Switch (VDS) and a distributed port group.
- The Data Network. The data interface of the Avi Service Engines, also called Service Engines, connects to this network. The load balancer Virtual IPs (VIPs) are assigned from this network. This network requires a vSphere Distributed Switch (VDS) and distributed port groups. You must configure the VDS and port groups before installing the load balancer; a verification sketch follows this list.
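As referenced in the list above, a quick read-only check like the following pyVmomi sketch can confirm that the management and data port groups exist on the VDS before you install the load balancer. The switch and port group names are placeholder assumptions.

```python
# Sketch: verify that the management and data port groups exist on the VDS
# before installing the NSX Advanced Load Balancer. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

REQUIRED_PORT_GROUPS = {"avi-mgmt-pg", "avi-data-pg"}  # hypothetical names

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        vds = next(d for d in view.view if d.name == "tanzu-vds")
    finally:
        view.Destroy()
    existing = {pg.name for pg in vds.portgroup}
    missing = REQUIRED_PORT_GROUPS - existing
    print("Missing port groups:", missing or "none")
finally:
    Disconnect(si)
```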
Allocation of IP Addresses
The Controller and the Service Engine are connected to the Management Network. When you install and configure the NSX Advanced Load Balancer, provide a static, routable IP address for each Controller VM.
The Service Engines can use DHCP. If DHCP is unavailable, you can configure a pool of IP addresses for the Service Engines.
For more information, see Configure Static Routes.
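For illustration only, the following sketch logs in to the Avi Controller REST API and prints whether DHCP is enabled and which static ranges, if any, are configured on the data network object. The Controller address, credentials, network name, and X-Avi-Version value are assumptions, and the exact field names of the network object vary between Avi releases, so treat this as a starting point rather than a definitive call sequence.

```python
# Sketch: inspect the data network object on the Avi Controller to see whether
# DHCP is enabled or a static IP pool is configured for the Service Engines.
# Controller address, credentials, network name, and API version are placeholders.
import requests

CONTROLLER = "https://avi-controller.example.com"
session = requests.Session()
session.verify = False  # lab only; use a trusted certificate in production

# Authenticate; the Controller returns session and CSRF cookies on login.
session.post(f"{CONTROLLER}/login",
             json={"username": "admin", "password": "********"}).raise_for_status()
session.headers.update({
    "X-Avi-Version": "20.1.6",                       # assumed Controller version
    "X-CSRFToken": session.cookies.get("csrftoken"),
    "Referer": CONTROLLER,
})

# Look up the network that backs the data port group (name is a placeholder).
resp = session.get(f"{CONTROLLER}/api/network", params={"name": "avi-data-pg"})
resp.raise_for_status()
for net in resp.json().get("results", []):
    print(net["name"], "dhcp_enabled:", net.get("dhcp_enabled"))
    # Field names below follow recent Avi releases and may differ in yours.
    for subnet in net.get("configured_subnets", []):
        print("  subnet:", subnet.get("prefix"))
        print("  static ranges:",
              subnet.get("static_ip_ranges", subnet.get("static_ranges")))
```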
Minimum Compute Requirements
System | Minimum Deployment Size | CPU | Memory | Storage |
---|---|---|---|---|
vCenter Server 7.0, 7.0.2, 7.0.3 | Small | 2 | 16 GB | 290 GB |
ESXi hosts 7.0 | 3 ESXi hosts with 1 static IP per host. If you are using vSAN, a minimum of 3 ESXi hosts with at least 2 physical NICs each; however, 4 ESXi hosts are recommended for resiliency during patching and upgrading. The hosts must be joined in a cluster with vSphere DRS and HA enabled. vSphere DRS must be in Fully Automated or Partially Automated mode. Note: Make sure that the names of the hosts that join the cluster use lowercase letters. Otherwise, the enablement of the cluster for Workload Management might fail. | 8 | 64 GB per host | Not applicable |
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB |
Avi Controller | Essentials or Enterprise. Note: For smaller deployments, you can deploy the Essentials size Controller as a single Controller node. You can create an Avi Controller cluster, but it does not provide any performance benefit and defeats the purpose of low resource utilization. In this case, you can use a remote backup for disaster recovery. The Essentials size must only be used with the Avi Essentials licensing mode and is limited to 50 virtual services and 10 Service Engines. For production environments, it is recommended to install a cluster of 3 Avi Controller VMs. A minimum of 2 Service Engine VMs are required for HA. | Essentials: 4, Enterprise: 8 | Essentials: 12 GB, Enterprise: 24 GB | Essentials: 128 GB, Enterprise: 128 GB |
Service Engine | A minimum of 2 Service Engine VMs are required for HA. | 1 | 2 GB | 15 GB |
Minimum Network Requirements
Component | Minimum Quantity | Required Configuration |
---|---|---|
Static IPs for Kubernetes control plane VMs | Block of 5 | A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor Cluster. |
Management traffic network | 1 | A Management Network that is routable to the ESXi hosts, vCenter Server, the Supervisor Cluster and load balancer. The network must be able to access an image registry and have Internet connectivity if the image registry is on the external network. The image registry must be resolvable through DNS. |
vSphere Distributed Switch 7.0 or later | 1 | All hosts from the cluster must be connected to a vSphere Distributed Switch. |
Workload Networks | 1 | At least one distributed port group must be created on the vSphere Distributed Switch that you configure as the Primary Workload Network. Depending on the topology of choice, you can use the same distributed port group as the Workload Network of namespaces or create more port groups and configure them as Workload Networks. |
NTP and DNS Server | 1 | A DNS server IP address is required for the Avi Controller to resolve the vCenter Server and ESXi hostnames correctly. NTP is optional because public NTP servers are used by default. Note: Configure NTP on all ESXi hosts and on vCenter Server. Important: The DNS server must be different for the Management and Workload Networks. Assigning the same DNS server to both networks can lead to issues while enabling Workload Management and pulling images. The DNS server of the Management Network must be reachable from the Management Network, and the DNS server of the Workload Network must be reachable from the Workload Network. |
DHCP Server | 1 | Optional. Configure a DHCP server to automatically acquire IP addresses for the Management and Workload Networks as well as floating IPs. The DHCP server must support Client Identifiers and provide compatible DNS servers, DNS search domains, and an NTP server. For the Management Network, all the IP addresses, such as control plane VM IPs, a floating IP, DNS servers, DNS search domains, and an NTP server, are acquired automatically from the DHCP server. The DHCP configuration is used by the Supervisor Cluster. The load balancer might require static IP addresses for management; DHCP scopes should not overlap these static IPs. DHCP is not used for virtual IPs (VIPs). |
Management Network Subnet | 1 | The Management Network is where the Avi Controller, also called the Controller, resides. It is also where the Service Engine management interface is connected. The Controller must be able to reach the vCenter Server and ESXi management IP addresses from this network. Important: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and Workload Networks is not supported and can lead to system errors and problems. A validation sketch after this table illustrates these subnet checks. |
Data Network Subnet | 1 | The data interface of the Avi Service Engines, also called Service Engines, connects to this network. Configure a pool of IP addresses for the Service Engines. The load balancer Virtual IPs (VIPs) are assigned from this network. |
Physical Network MTU | 1500 | The MTU size must be 1500 or greater on any vSphere Distributed Switch port group. |
vSphere Pod CIDR range | /24 private IP addresses | A private CIDR range that provides IP addresses for vSphere Pods. |
Avi Controller IPs | 1 or 4 | If you deploy the Avi Controller as a single node, one static IP is required for its management interface. For a 3-node cluster, 4 IP addresses are required: one for each Avi Controller VM and one for the cluster VIP. These IPs must be from the Management Network subnet. |
VIP IPAM range | - | A private CIDR range to assign IP addresses to Kubernetes services. The IPs must be from the Data Network Subnet. You must specify a unique Kubernetes services CIDR range for each Supervisor Cluster. |
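The addressing requirements in this table are easy to get wrong, so the following standard-library sketch (referenced in the Management Network Subnet row) shows how a planned scheme might be sanity-checked before you enable Workload Management. All subnets and addresses are placeholder examples.

```python
# Sketch: sanity-check a planned addressing scheme against the requirements in
# the table above. All subnets and ranges below are placeholder examples.
import ipaddress

mgmt_subnet = ipaddress.ip_network("10.10.0.0/24")
workload_subnet = ipaddress.ip_network("10.20.0.0/24")
data_subnet = ipaddress.ip_network("10.30.0.0/24")        # Service Engine data / VIPs
vip_ipam_range = ipaddress.ip_network("10.30.0.128/25")   # VIP IPAM range
pod_cidr = ipaddress.ip_network("10.244.0.0/24")          # vSphere Pod CIDR
control_plane_start = ipaddress.ip_address("10.10.0.50")  # block of 5 for control plane VMs

# The Management and Workload Networks must be on different subnets.
assert not mgmt_subnet.overlaps(workload_subnet), "Management and Workload subnets overlap"

# The block of 5 consecutive control plane IPs must sit inside the Management Network.
control_plane_ips = [control_plane_start + i for i in range(5)]
assert all(ip in mgmt_subnet for ip in control_plane_ips), \
    "control plane block falls outside the Management subnet"

# The VIP IPAM range must come from the Data Network subnet.
assert vip_ipam_range.subnet_of(data_subnet), "VIP IPAM range not within the Data Network subnet"

# The vSphere Pod CIDR must be private and must not collide with the other networks.
assert pod_cidr.is_private, "Pod CIDR must be a private range"
assert not any(pod_cidr.overlaps(n) for n in (mgmt_subnet, workload_subnet, data_subnet)), \
    "Pod CIDR overlaps another network"

print("Addressing plan passes the basic checks")
```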
Ports and Protocols
This table lists the protocols and ports required for managing IP connectivity between the NSX Advanced Load Balancer, vCenter Server, and other vSphere with Tanzu components.
Source | Destination | Protocol and Ports |
---|---|---|
Avi Controller | Avi Controller (in cluster) | TCP 22 (SSH) TCP 443 (HTTPS) TCP 8443 (HTTPS) |
Service Engine | Service Engine in HA | TCP 9001 for VMware, LSC and NSX-T cloud |
Service Engine | Avi Controller | TCP 22 (SSH) TCP 8443 (HTTPS) UDP 123 (NTP) |
Avi Controller | vCenter Server, ESXi, NSX-T Manager | TCP 443 (HTTPS) |
Supervisor Control plane nodes (AKO) | Avi Controller | TCP 443 (HTTPS) |
For more information about ports and protocols for the NSX Advanced Load Balancer, see https://ports.esp.vmware.com/home/NSX-Advanced-Load-Balancer.
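If you want to confirm basic reachability for the TCP flows in the table above, a simple socket sketch like the following can help. The hostnames are placeholders, each check must be run from the corresponding source host, and UDP 123 (NTP) cannot be verified with a TCP connect.

```python
# Sketch: check TCP reachability for the flows listed in the ports table.
# Hostnames are placeholders; run each check from the listed source host.
import socket

CHECKS = [
    # (destination, port, description)
    ("avi-controller.example.com", 443,  "Service Engine / AKO -> Avi Controller (HTTPS)"),
    ("avi-controller.example.com", 8443, "Service Engine -> Avi Controller (HTTPS)"),
    ("avi-controller.example.com", 22,   "Service Engine -> Avi Controller (SSH)"),
    ("vcenter.example.com",        443,  "Avi Controller -> vCenter Server (HTTPS)"),
    ("esxi-01.example.com",        443,  "Avi Controller -> ESXi host (HTTPS)"),
]

for host, port, description in CHECKS:
    try:
        with socket.create_connection((host, port), timeout=5):
            status = "reachable"
    except OSError as err:
        status = f"unreachable ({err})"
    print(f"{description}: {host}:{port} {status}")
```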