Check out the system requirements for enabling a Supervisor on three vSphere clusters mapped to vSphere Zones by using the NSX networking stack and NSX Advanced Load Balancer.
Placement of vSphere Zones Across Physical Sites
You can distribute vSphere Zones across different physical sites, as long as the latency between the sites does not exceed 100 ms. For example, you can distribute three vSphere Zones across two physical sites: one vSphere Zone on the first site, and two vSphere Zones on the second site.
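As a rough pre-check before placing zones across sites, you can measure the TCP connect time between them. This is a minimal sketch, not a substitute for a proper latency measurement (a TCP handshake only approximates round-trip latency), and the hostname shown is a placeholder:

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        # Each successful handshake gives one round-trip estimate.
        with socket.create_connection((host, port), timeout=2.0):
            timings.append((time.monotonic() - start) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2]

# "vcenter-site-b.example.com" is a placeholder for a host at the remote site.
# latency = tcp_connect_latency_ms("vcenter-site-b.example.com")
# if latency > 100:
#     print(f"WARNING: {latency:.1f} ms exceeds the 100 ms limit for vSphere Zones")
```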
NSX Deployment Options
For more information on the best practices for deploying NSX, see the NSX Reference Design Guide.
Minimum Compute Requirements for a Management and Edge Cluster
System | Minimum Deployment Size | CPU | Memory | Storage |
---|---|---|---|---|
vCenter Server 8 | Small | 2 | 21 GB | 290 GB |
ESXi hosts 8 | 2 ESXi hosts | 8 | 64 GB per host | Not applicable |
NSX Manager | Medium | 6 | 24 GB | 300 GB |
NSX Edge 1 | Large | 8 | 32 GB | 200 GB |
NSX Edge 2 | Large | 8 | 32 GB | 200 GB |
Service Engine VMs | At least two Service Engine VMs are deployed per Supervisor | 1 | 2 GB | N/A |
Specifying System Capacity of Controller
You can specify the system capacity of the Controller during deployment. The system capacity is based on allocations of system resources such as CPU, RAM, and disk. The amount of resources you allocate has an impact on the performance of the Controller.
Deployment Type | Node Count | Recommended Allocations - CPU | Recommended Allocations - Memory | Recommended Allocations - Disk |
---|---|---|---|---|
Demo / Customer Evaluation | 1 | 6 | 24 GB | 128 GB |
In demonstration deployments, a single Controller is adequate and is used for all control-plane activities and workflows, and for analytics.
In a production deployment, a three-node cluster is recommended.
For more information, see NSX Advanced Load Balancer Controller Sizing.
Minimum Compute Requirements for Workload Domain Clusters
System | Minimum Deployment Size | CPU | Memory | Storage |
---|---|---|---|---|
vSphere clusters | 3 | Not applicable | Not applicable | Not applicable |
ESXi hosts 8 | For each vSphere cluster. Note: Make sure that the names of the hosts that join the clusters use lowercase letters. Otherwise, the activation of the Supervisor might fail. | 8 | 64 GB per host | Not applicable |
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB |
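Because Supervisor activation can fail when host names contain uppercase letters, it can help to validate the host list before adding hosts to the clusters. A minimal sketch; the host names shown are placeholders:

```python
import re

# A lowercase DNS name: lowercase labels of letters, digits, and hyphens,
# separated by dots, with no leading or trailing hyphen in a label.
HOSTNAME_RE = re.compile(
    r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$"
)

def invalid_host_names(host_names):
    """Return the host names that are not fully lowercase DNS names."""
    return [h for h in host_names if not HOSTNAME_RE.match(h)]

hosts = ["esx-01.corp.local", "ESX-02.corp.local", "esx-03.corp.local"]
print(invalid_host_names(hosts))  # ['ESX-02.corp.local']
```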
Networking Requirements
Check the VMware Product Interoperability Matrix for the supported NSX versions.
Component | Minimum Quantity | Required Configuration |
---|---|---|
Layer 2 device | 1 | The management network that will handle the traffic of the Supervisor must be on the same layer 2 device. At least one physical NIC per host handling management traffic must be connected to the same layer 2 device. |
Physical Network MTU | 1700 | The MTU size must be 1700 or greater on any vSphere Distributed Switch port group. |
Physical NIC | At least 2 physical NICs per host if vSAN is used | To use Antrea CNI and for optimal NSX performance, each physical NIC on each participating ESXi host must support GENEVE encapsulation and have it enabled. |
Component | Minimum Quantity | Required Configuration |
---|---|---|
Latency | 100 ms | The maximum recommended latency between the vSphere clusters that are joined in a Supervisor as vSphere Zones. |
NTP and DNS Server | 1 | A DNS server and NTP server that can be used with vCenter Server. Note: Configure NTP on all ESXi hosts and vCenter Server. |
DHCP Server | 1 | Optional. Configure a DHCP server to automatically acquire IP addresses for the Management and Workload Networks, as well as floating IPs. The DHCP server must support client identifiers and provide compatible DNS servers, DNS search domains, and an NTP server. For the Management Network, all the IP addresses, such as the control plane VM IPs, a floating IP, DNS servers, DNS search domains, and an NTP server, are acquired automatically from the DHCP server. The DHCP configuration is used by the Supervisor. Load balancers may require static IP addresses for management; DHCP scopes must not overlap these static IPs. DHCP is not used for virtual IPs (VIPs). |
Image Registry | 1 | Access to a registry for service. |
Component | Minimum Quantity | Required Configuration |
---|---|---|
Static IPs for Kubernetes control plane VMs | Block of 5 | A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor. |
Management traffic network | 1 | A Management Network that is routable to the ESXi hosts, vCenter Server, the Supervisor and load balancer. |
Management Network Subnet | 1 | The subnet used for management traffic between the ESXi hosts, vCenter Server, the NSX appliances, and the Kubernetes control plane. Note: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and the Workload Networks is not supported and can lead to system errors and problems. |
Management Network VLAN | 1 | The VLAN ID of the Management Network subnet. |
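Two constraints from this table, the block of 5 consecutive static IPs inside the Management Network and the requirement that the Management and Workload Networks use different subnets, can be sanity-checked with Python's ipaddress module. The subnets and starting address below are placeholders for your own network plan:

```python
import ipaddress

def check_management_ips(mgmt_subnet: str, workload_subnet: str, start_ip: str):
    """Validate the 5-address control plane block and subnet separation."""
    mgmt = ipaddress.ip_network(mgmt_subnet)
    workload = ipaddress.ip_network(workload_subnet)

    # The Management and Workload Networks must be on different subnets.
    assert not mgmt.overlaps(workload), "Management and Workload subnets overlap"

    # The Supervisor needs 5 consecutive static IPs from the Management Network.
    first = ipaddress.ip_address(start_ip)
    block = [first + i for i in range(5)]
    assert all(ip in mgmt for ip in block), "5-IP block leaves the Management subnet"
    return [str(ip) for ip in block]

# Placeholder values.
print(check_management_ips("10.0.10.0/24", "10.0.20.0/24", "10.0.10.50"))
```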
Component | Minimum Quantity | Required Configuration |
---|---|---|
vSphere Pod CIDR range | /23 Private IP addresses | A private CIDR range that provides IP addresses for vSphere Pods. These addresses are also used for the TKG cluster nodes. You must specify a unique vSphere Pod CIDR range for each cluster. Note: The vSphere Pod CIDR range and the CIDR range for the Kubernetes service addresses must not overlap. |
Kubernetes services CIDR range | /16 Private IP addresses | A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor. |
Egress CIDR range | /27 Static IP Addresses | A private CIDR range used to determine the egress IP for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor. The egress IP is the address that external entities use to communicate with the services in the namespace. The number of egress IP addresses limits the number of egress policies the Supervisor can have. The minimum is a CIDR of /27 or larger, for example, 10.174.4.96/27. Note: Egress IP addresses and ingress IP addresses must not overlap. |
Ingress CIDR | /27 Static IP Addresses | A private CIDR range to be used for IP addresses of ingresses. Ingress lets you apply traffic policies to requests entering the Supervisor from external networks. The number of ingress IP addresses limits the number of ingresses the cluster can have. The minimum is a CIDR of /27 or larger. Note: Egress IP addresses and ingress IP addresses must not overlap. |
Namespace Networks Range | 1 | One or more IP CIDRs to create subnets/segments and assign IP addresses to workloads. |
Namespace Subnet Prefix | 1 | The subnet prefix that specifies the size of the subnet reserved for namespace segments. Default is 28. |
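The sizing and non-overlap rules in this table can be verified up front with Python's ipaddress module. All ranges below are placeholders except 10.174.4.96/27, which is the example from the egress row:

```python
import ipaddress

# Placeholder ranges illustrating the rules from the table above.
pods     = ipaddress.ip_network("10.244.0.0/23")   # vSphere Pod CIDR, /23 or larger
services = ipaddress.ip_network("10.96.0.0/16")    # Kubernetes services CIDR, /16
ingress  = ipaddress.ip_network("10.174.4.64/27")  # Ingress, /27 or larger
egress   = ipaddress.ip_network("10.174.4.96/27")  # Egress, /27 or larger

# The Pod CIDR and the services CIDR must not overlap,
# and neither must the ingress and egress ranges.
assert not pods.overlaps(services)
assert not ingress.overlaps(egress)

# Check the minimum sizes (a smaller prefix length means a larger range).
assert pods.prefixlen <= 23
assert ingress.prefixlen <= 27 and egress.prefixlen <= 27

# A /27 yields 32 addresses; each egress IP serves one Supervisor namespace.
print(egress.num_addresses)  # 32
```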
Component | Minimum Quantity | Required Configuration |
---|---|---|
VLANs | 3 | These VLANs carry the IP addresses for the tunnel endpoints (TEPs). The ESXi host TEPs and the Edge TEPs must be routable. ESXi hosts and NSX Edge nodes act as tunnel endpoints, and a TEP IP is assigned to each host and Edge node. Because the TEP IPs of the ESXi hosts create an overlay tunnel with the TEP IPs on the Edge nodes, the VLAN IPs must be routable. An additional VLAN is required to provide North-South connectivity to the Tier-0 gateway. IP pools can be shared across clusters. However, the host overlay IP pool/VLAN must not be shared with the Edge overlay IP pool/VLAN. Note: The ESXi host TEP and the Edge TEP must have an MTU size greater than 1600. Note: If the host TEP and the Edge TEP use different physical NICs, they can use the same VLAN. |
Tier-0 Uplink IP | /24 Private IP addresses | The IP subnet used for the Tier-0 uplink. The Edge Management IP, subnet, and gateway, and the Uplink IP, subnet, and gateway must be unique. |
NTP and DNS Server | 1 | DNS server IP is required for the NSX Advanced Load Balancer Controller to resolve the vCenter Server and ESXi hostnames correctly. NTP is optional as public NTP servers are used by default. |
Data Network Subnet | 1 | The data interfaces of the Service Engines connect to this network. Configure a pool of IP addresses for the Service Engines. The load balancer virtual IPs (VIPs) are assigned from this network. |
NSX Advanced Load Balancer Controller IPs | 1 or 4 | If you deploy the NSX Advanced Load Balancer Controller as a single node, one static IP is required for its management interface. For a 3-node cluster, 4 IP addresses are required: one for each Controller VM and one for the cluster VIP. These IPs must be from the Management Network subnet. |
VIP IPAM range | - | A private CIDR range to assign IP addresses to Kubernetes services. The IPs must be from the Data Network Subnet. You must specify a unique Kubernetes services CIDR range for each Supervisor. |
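A sketch that checks two rules from this table: the VIP IPAM range must be carved out of the Data Network Subnet, and a 3-node Controller cluster needs 4 management IPs. All addresses below are placeholders:

```python
import ipaddress

data_subnet = ipaddress.ip_network("192.168.100.0/24")    # Data Network Subnet (placeholder)
vip_range   = ipaddress.ip_network("192.168.100.128/26")  # VIP IPAM range (placeholder)

# The VIPs must come from the Data Network Subnet.
assert vip_range.subnet_of(data_subnet), "VIP range is outside the Data Network Subnet"

# A 3-node Controller cluster needs 4 IPs from the Management Network subnet:
# one per Controller VM plus one for the cluster VIP.
mgmt_subnet = ipaddress.ip_network("10.0.10.0/24")
controller_ips = ["10.0.10.11", "10.0.10.12", "10.0.10.13", "10.0.10.14"]
assert len(controller_ips) == 4
assert all(ipaddress.ip_address(ip) in mgmt_subnet for ip in controller_ips)

print("NSX Advanced Load Balancer network plan checks passed")
```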
Ports and Protocols
This table lists the protocols and ports required for managing IP connectivity between the NSX Advanced Load Balancer, vCenter Server, and other vSphere IaaS control plane components.
Source | Destination | Protocol and Ports |
---|---|---|
NSX Advanced Load Balancer Controller | NSX Advanced Load Balancer Controller (in cluster) | TCP 22 (SSH) TCP 443 (HTTPS) TCP 8443 (HTTPS) |
Service Engine | Service Engine in HA | TCP 9001 for VMware, LSC and NSX-T cloud |
Service Engine | NSX Advanced Load Balancer Controller | TCP 22 (SSH) TCP 8443 (HTTPS) UDP 123 (NTP) |
NSX Advanced Load Balancer Controller | vCenter Server, ESXi, NSX-T Manager | TCP 443 (HTTPS) |
Supervisor Control plane nodes (AKO) | NSX Advanced Load Balancer Controller | TCP 443 (HTTPS) |
For more information about ports and protocols for the NSX Advanced Load Balancer, see https://ports.esp.vmware.com/home/NSX-Advanced-Load-Balancer.
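Before activating the Supervisor, you can confirm that the listed TCP ports are reachable from each source component. A minimal sketch; the Controller hostname is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False

# Controller-to-Controller cluster ports from the table above.
# "avi-controller-2.example.com" is a placeholder hostname.
for port in (22, 443, 8443):
    print(port, port_open("avi-controller-2.example.com", port))
```

This only tests TCP reachability; UDP 123 (NTP) needs a separate check with an NTP client.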