Review the system requirements for configuring vSphere IaaS control plane on a vSphere cluster by using the NSX networking stack. When you enable a vSphere cluster as a Supervisor, a vSphere zone is automatically created for the Supervisor.
Besides these requirements, see the NSX Reference Design Guide for more information on best practices for deploying NSX.
Minimum Compute Requirements for the Management and Edge Cluster
System | Minimum Deployment Size | CPU | Memory | Storage |
---|---|---|---|---|
vCenter Server 8 | Small | 2 | 21 GB | 290 GB |
ESXi hosts 8 | 2 ESXi hosts | 8 | 64 GB per host | Not applicable |
NSX Manager | Medium | 6 | 24 GB | 300 GB |
NSX Edge 1 | Large | 8 | 32 GB | 200 GB |
NSX Edge 2 | Large | 8 | 32 GB | 200 GB |
Minimum Compute Requirements for the Workload Domain Cluster
System | Minimum Deployment Size | CPU | Memory | Storage |
---|---|---|---|---|
vSphere clusters | 1 | Not applicable | Not applicable | Not applicable |
ESXi hosts 8 | Note: Make sure that the names of the hosts that join the clusters use lowercase letters. Otherwise, the activation of the Supervisor might fail. | 8 | 64 GB per host | Not applicable |
Kubernetes control plane VMs | 3 | 4 | 16 GB | 16 GB |
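The lowercase host-name note in the table above is easy to verify before you attempt activation. A minimal sketch in Python (the host names below are hypothetical):

```python
def validate_host_names(host_names):
    """Return the host names that could block Supervisor activation
    because they contain upper-case letters."""
    return [name for name in host_names if name != name.lower()]

# Hypothetical host inventory:
hosts = ["esx-01.lab.local", "ESX-02.lab.local", "esx-03.lab.local"]
print(validate_host_names(hosts))  # prints ['ESX-02.lab.local']
```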
Networking Requirements
Check the VMware Product Interoperability Matrix for the supported NSX versions.
Component | Minimum Quantity | Required Configuration |
---|---|---|
Physical Network MTU | 1500 | The MTU size must be 1500 or greater on any vSphere Distributed Switch port group. |
Physical NIC | At least 2 physical NICs per host if vSAN is used | To use Antrea CNI and for optimal NSX performance, each physical NIC on each participating ESXi host must support GENEVE encapsulation and have it enabled. |
Component | Minimum Quantity | Required Configuration |
---|---|---|
NTP and DNS Server | 1 | A DNS server and NTP server that can be used with vCenter Server. Note: Configure NTP on all ESXi hosts and vCenter Server. |
DHCP Server | 1 | Optional. Configure a DHCP server to automatically acquire IP addresses for the Management and Workload Networks, as well as floating IPs. The DHCP server must support client identifiers and provide compatible DNS servers, DNS search domains, and an NTP server. For the Management Network, all IP addresses, such as control plane VM IPs, a floating IP, DNS servers, DNS search domains, and an NTP server, are acquired automatically from the DHCP server. The DHCP configuration is used by the Supervisor. Load balancers may require static IP addresses for the Management Network; DHCP scopes must not overlap these static IPs. DHCP is not used for virtual IPs (VIPs). |
Image Registry | 1 | Access to a registry for the service. |
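The DHCP scope rule above (scopes must not overlap the static load balancer IPs) can be sanity-checked with Python's standard `ipaddress` module; all addresses below are hypothetical:

```python
import ipaddress

def ips_in_dhcp_scope(static_ips, scope_start, scope_end):
    """Return the static IPs that collide with the DHCP scope."""
    start = int(ipaddress.ip_address(scope_start))
    end = int(ipaddress.ip_address(scope_end))
    return [ip for ip in static_ips
            if start <= int(ipaddress.ip_address(ip)) <= end]

# Hypothetical management-network DHCP scope and load balancer static IPs.
conflicts = ips_in_dhcp_scope(
    ["10.0.10.50", "10.0.10.150"], "10.0.10.100", "10.0.10.200")
print(conflicts)  # prints ['10.0.10.150']
```

Any address returned here means the DHCP scope must be shrunk or the static IP moved.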
Component | Minimum Quantity | Required Configuration |
---|---|---|
Static IPs for Kubernetes control plane VMs | Block of 5 | A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor. |
Management traffic network | 1 | A Management Network that is routable to the ESXi hosts, vCenter Server, the Supervisor and load balancer. |
Management network Subnet | 1 | The subnet used for management traffic between ESXi hosts, vCenter Server, NSX appliances, and the Kubernetes control plane. Note: The Management Network and the Workload Network must be on different subnets. Assigning the same subnet to the Management and the Workload networks is not supported and can lead to system errors and problems. |
Management network VLAN | 1 | The VLAN ID of the Management Network subnet. |
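The requirements in this table, a block of 5 consecutive static IPs inside the Management subnet and separate Management and Workload subnets, can be sketched as a pre-flight check; the subnets and starting address below are hypothetical:

```python
import ipaddress

def control_plane_ips(first_ip, count=5):
    """Consecutive static IPs for the Kubernetes control plane VMs."""
    start = ipaddress.ip_address(first_ip)
    return [start + i for i in range(count)]

# Hypothetical subnets and starting address.
mgmt_subnet = ipaddress.ip_network("10.0.10.0/24")
workload_subnet = ipaddress.ip_network("10.0.20.0/24")

# The Management and Workload Networks must be on different subnets.
assert not mgmt_subnet.overlaps(workload_subnet)

# The block of 5 consecutive IPs must sit inside the Management subnet.
block = control_plane_ips("10.0.10.20")
assert all(ip in mgmt_subnet for ip in block)
print([str(ip) for ip in block])  # prints ['10.0.10.20', ..., '10.0.10.24']
```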
Component | Minimum Quantity | Required Configuration |
---|---|---|
vSphere Pod CIDR range | /23 Private IP addresses | A private CIDR range that provides IP addresses for vSphere Pods. These addresses are also used for the TKG cluster nodes. You must specify a unique vSphere Pod CIDR range for each cluster. Note: The vSphere Pod CIDR range and the CIDR range for the Kubernetes service addresses must not overlap. |
Kubernetes services CIDR range | /16 Private IP addresses | A private CIDR range to assign IP addresses to Kubernetes services. You must specify a unique Kubernetes services CIDR range for each Supervisor. |
Egress CIDR range | /27 Static IP Addresses | A private CIDR range used to determine the egress IP addresses for Kubernetes services. Only one egress IP address is assigned for each namespace in the Supervisor. The egress IP is the address that external entities use to communicate with the services in the namespace. The number of egress IP addresses limits the number of egress policies the Supervisor can have. The minimum is a CIDR of /27 or larger, for example, 10.174.4.96/27. Note: Egress IP addresses and ingress IP addresses must not overlap. |
Ingress CIDR | /27 Static IP Addresses | A private CIDR range to be used for the IP addresses of ingresses. Ingress lets you apply traffic policies to requests entering the Supervisor from external networks. The number of ingress IP addresses limits the number of ingresses the cluster can have. The minimum is a CIDR of /27 or larger. Note: Egress IP addresses and ingress IP addresses must not overlap. |
Namespace Networks Range | 1 | One or more IP CIDRs to create subnets/segments and assign IP addresses to workloads. |
Namespace Subnet Prefix | 1 | The subnet prefix that specifies the size of the subnet reserved for namespace segments. Default is 28. |
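The overlap rules in this table lend themselves to a quick pre-flight check with Python's `ipaddress` module; the CIDR values below are hypothetical, except the 10.174.4.96/27 egress example from the table:

```python
import ipaddress

def ranges_overlap(cidr_a, cidr_b):
    """True if two CIDR ranges share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

plan = {
    "pod_cidr": "10.244.0.0/23",        # vSphere Pod CIDR range
    "services_cidr": "10.96.0.0/16",    # Kubernetes services CIDR range
    "egress_cidr": "10.174.4.96/27",    # egress (example from the table)
    "ingress_cidr": "10.174.4.128/27",  # ingress
}

# The vSphere Pod and Kubernetes service ranges must not overlap.
assert not ranges_overlap(plan["pod_cidr"], plan["services_cidr"])
# Egress and ingress ranges must not overlap.
assert not ranges_overlap(plan["egress_cidr"], plan["ingress_cidr"])
# Egress and ingress must each be a /27 or larger.
for key in ("egress_cidr", "ingress_cidr"):
    assert ipaddress.ip_network(plan[key]).prefixlen <= 27

# With the default /28 namespace subnet prefix, each segment holds 16
# addresses; a hypothetical /20 namespace network range yields 256 segments.
ns_range = ipaddress.ip_network("10.244.128.0/20")
print(ns_range.num_addresses // 16)  # prints 256
```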
Component | Minimum Quantity | Required Configuration |
---|---|---|
VLANs | 3 | These VLAN IPs are the IP addresses for the tunnel endpoints (TEPs). The ESXi host TEPs and the Edge TEPs must be routable. VLAN IP addresses are required for the ESXi host TEPs, the NSX Edge TEPs, and North-South connectivity to the Tier-0 gateway. ESXi hosts and NSX Edge nodes act as tunnel endpoints, and a Tunnel End Point (TEP) IP is assigned to each host and Edge node. Because the TEP IPs on the ESXi hosts create an overlay tunnel with the TEP IPs on the Edge nodes, the VLAN IPs must be routable. An additional VLAN is required to provide North-South connectivity to the Tier-0 gateway. IP pools can be shared across clusters; however, the host overlay IP pool/VLAN must not be shared with the Edge overlay IP pool/VLAN. Note: The ESXi host TEP and the Edge TEP must have an MTU size greater than 1600. Note: If the host TEP and the Edge TEP use different physical NICs, they can use the same VLAN. |
Tier-0 Uplink IP | /24 Private IP addresses | The IP subnet used for the Tier-0 uplink. The requirement for the IP addresses of the Tier-0 uplink is as follows: the Edge management IP, subnet, and gateway, and the uplink IP, subnet, and gateway must be unique. |