Note: Ensure that NSX components (controllers, transport nodes, and so on), including the management interfaces on hosts, NSX Edge nodes (bare metal and VM form factors), and compute hosts, have IP connectivity between them.
VMware vCenter Infrastructure Checklist
- At least four vSphere hosts must be available in the management and compute host clusters in vCenter. This requirement is a best practice for the vSphere HA and vSphere Distributed Resource Scheduler (DRS) features.
Consider these points related to workloads:
- To be resilient to rack failures, spread clustered elements between racks.
- Use DRS anti-affinity rules to spread the workloads across racks (see the sketch below).
- Enable vSphere DRS, HA, and vSAN capabilities on the existing host clusters.
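The anti-affinity rules mentioned above can be scripted. The following is a minimal pyVmomi sketch that keeps two clustered VMs (for example, two NSX Manager nodes) on separate hosts; the vCenter address, credentials, cluster name, and VM names are placeholders.

```python
# Minimal pyVmomi sketch: add a DRS anti-affinity rule so two clustered VMs
# (for example, two NSX Manager appliances) never run on the same host.
# vCenter address, credentials, cluster and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster")
vms = [find_by_name(vim.VirtualMachine, n) for n in ("nsx-mgr-01", "nsx-mgr-02")]

# Build the anti-affinity rule and apply it to the cluster configuration.
rule = vim.cluster.AntiAffinityRuleSpec(name="nsx-mgr-anti-affinity", enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```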
- Do one of the following:
- Configure a management VDS to span the management cluster, and configure an NSX Edge cluster and compute VDS to span the compute clusters.
- Configure a single VDS to span both management and compute clusters.
- VDS portgroups for NSX appliances: NSX Manager appliances are deployed on a hypervisor on a standard VLAN-backed portgroup.
- VDS portgroups for ESXi Hypervisors:
- A vmk0 portgroup for ESXi host management.
- A vMotion portgroup for vMotion traffic.
- A storage portgroup for storage traffic.
Note: All ESXi vmk interfaces can reside on the same portgroup or on different management portgroups. Optionally, the vmks can be configured with a VLAN ID to provide logical isolation between traffic types.
- VDS trunk portgroups for NSX Edge nodes (when using the VM form factor for Edges):
- A management VLAN trunk portgroup for NSX Edge management.
- NSX Edge VLAN trunk portgroups to connect to DPDK fast-path interfaces. In NSX v3.2 and later, you can enable up to four datapath interfaces on NSX Edge VMs.
- Configure one NSX Edge trunk portgroup to connect to all NSX Edge fast-path interfaces.
- Alternatively, configure multiple VLAN trunk portgroups pinned to specific VLAN IDs, and configure one active uplink (with a standby uplink) to steer VLAN traffic to a specific TOR through the specified uplink. Then connect each NSX Edge fast-path interface to a separate NSX Edge trunk portgroup in vCenter (see the sketch after this list).
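The portgroups described above can be created with pyVmomi. This is a minimal sketch assuming an existing VDS named compute-vds; the vCenter address, credentials, portgroup names, and VLAN IDs are placeholders.

```python
# Minimal pyVmomi sketch: create one VLAN-backed portgroup (for management vmkernel
# traffic) and one VLAN trunk portgroup (for NSX Edge VM fast-path interfaces) on an
# existing VDS. vCenter address, credentials, VDS name, and VLAN IDs are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
vds = next(d for d in view.view if d.name == "compute-vds")
view.Destroy()

def portgroup_spec(name, vlan_spec):
    """Build a static-binding portgroup spec with the given VLAN setting."""
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(vlan=vlan_spec)
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=8, defaultPortConfig=port_cfg)

# Single VLAN (for example, VLAN 100) for a management portgroup.
mgmt_vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=100, inherited=False)
# VLAN trunk (for example, VLANs 200-299) for NSX Edge fast-path interfaces.
trunk_vlan = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec(
    vlanId=[vim.NumericRange(start=200, end=299)], inherited=False)

vds.AddDVPortgroup_Task([portgroup_spec("pg-mgmt-vlan100", mgmt_vlan),
                         portgroup_spec("pg-edge-trunk", trunk_vlan)])
Disconnect(si)
```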
- Change the VDS MTU from the default value of 1500 bytes to 1700 bytes or to 9000 bytes (the recommended value). See Guidance to Set Maximum Transmission Unit and the sketch below.
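A minimal pyVmomi sketch of the MTU change, assuming the same placeholder vCenter address, credentials, and VDS name as above:

```python
# Minimal pyVmomi sketch: raise the maximum MTU on an existing VDS to 9000 bytes.
# vCenter address, credentials, and the VDS name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
vds = next(d for d in view.view if d.name == "compute-vds")
view.Destroy()

# configVersion must match the current configuration, otherwise vCenter rejects the change.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=vds.config.configVersion, maxMtu=9000)
vds.ReconfigureDvs_Task(spec)
Disconnect(si)
```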
- Create VDS portgroups for Tunnel Endpoints (TEPs): the hypervisor TEP VLAN and the NSX Edge TEP VLAN can share the same VLAN ID if the TEPs are connected to NSX VLAN segments. Otherwise, use different VLANs for the TEPs.
Note: Management hypervisors do not need a TEP VLAN because they do not need to be configured as transport nodes.
NSX Infrastructure Checklist
- For each of the three NSX Manager nodes that you will deploy in the management cluster, reserve an IP address and an FQDN.
- Ensure that the IP address and FQDN used as the NSX Manager VIP are in the same subnet as the NSX Manager appliance IP addresses. If you are using an external load balancer for the VIP, use a VIP from a different subnet.
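As a sketch, the cluster VIP can also be set through the NSX Manager API (POST /api/v1/cluster/api-virtual-ip). The Manager address, credentials, and VIP below are placeholders; verify the call against the API reference for your NSX version.

```python
# Minimal sketch: set the NSX Manager cluster API virtual IP.
# Manager address, credentials, and the VIP address are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")

resp = requests.post(
    f"{NSX}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": "192.0.2.10"},
    auth=AUTH, verify=False)          # lab only; validate certificates in production
resp.raise_for_status()
print(resp.json())                    # returns the configured virtual IP properties
```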
- Configure LDAP server.
- Configure Backup server.
- Configure Syslog server.
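A minimal sketch of adding a remote syslog exporter through the NSX node API (POST /api/v1/node/services/syslog/exporters); the Manager address, credentials, and syslog server are placeholders, and the fields should be confirmed against the API reference for your NSX version.

```python
# Minimal sketch: register a remote syslog exporter on an NSX Manager node.
# Manager address, credentials, and the syslog server are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")

exporter = {
    "exporter_name": "remote-syslog",
    "server": "syslog.example.com",
    "port": 514,
    "protocol": "UDP",
    "level": "INFO",
}
resp = requests.post(f"{NSX}/api/v1/node/services/syslog/exporters",
                     json=exporter, auth=AUTH, verify=False)   # lab only
resp.raise_for_status()
```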
- Configure DHCP server or DHCP relay server.
- (Optional) Configure CA-signed certificates to apply to each NSX Manager node and to the NSX Manager cluster VIP. By default, self-signed certificates are used.
Note: After you configure CA-signed certificates, you must configure FQDNs for the NSX Manager appliances, the VIP, and vCenter.
- (Optional) If NSX Edge appliances will connect to NSX VLAN segments (instead of a VDS on vCenter), reserve VLAN IDs to be used when you create VLAN segments for NSX Edge management and NSX Edge DPDK fast-path interfaces.
- Configure IP Pools that can be used when you configure TEPs for compute hypervisors.
- Configure IP Pools that can be used when you configure TEPs for NSX Edge bare metal (physical server) or NSX Edge VMs.
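A minimal sketch of creating a TEP IP pool through the NSX Policy API; the Manager address, credentials, pool and subnet IDs, and addressing are placeholders.

```python
# Minimal sketch: create an IP pool and a static subnet for hypervisor or Edge TEPs
# through the NSX Policy API. Manager address, credentials, IDs, and addresses are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")

# Create (or update) the pool object itself.
requests.patch(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool",
               json={"display_name": "host-tep-pool"},
               auth=AUTH, verify=False).raise_for_status()      # lab only

# Add a static subnet with an allocation range, CIDR, and gateway for the TEP VLAN.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "allocation_ranges": [{"start": "172.16.10.11", "end": "172.16.10.200"}],
    "cidr": "172.16.10.0/24",
    "gateway_ip": "172.16.10.1",
}
requests.patch(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool/ip-subnets/subnet-1",
               json=subnet, auth=AUTH, verify=False).raise_for_status()
```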
- (Optional) Configure the global MTU.
- (Optional) Configure partial patch support.
- (Optional) Configure NSX failure domains for the NSX Edge clusters.
- (Optional) Configure the cluster FQDN.
- If you are configuring a single-tier or multi-tier topology in NSX, the following additional configuration is required:
- Configure Tier-0 gateway external interfaces (uplink interfaces): reserve at least two VLANs that will be used as NSX Edge uplinks to the TOR switches and configured as Tier-0 gateway interfaces.
- The Service Router component of the Tier-0 gateway is instantiated on the NSX Edge transport node cluster when an external interface is created. For example, if a Tier-0 gateway resides on an NSX Edge cluster with four NSX Edge nodes, create at least two VLAN segments that act as two uplinks to each of the four Edge nodes. A Policy API sketch follows the example below.
Example:
Create two VLAN segments:
segment-vlan2005
segment-vlan2006
Create the first external interface on the Tier-0 gateway using segment-vlan2005.
Name: EXT-INT-VLAN-2005
Type: External
IP Address Mask: VLAN-X CIDR address
Connected To (Segment): segment-vlan2005
NSX Edge Node: Edge-1, Edge-2, Edge-3, Edge-4
Create the second external interface on the Tier-0 gateway using segment-vlan2006.
Name: EXT-INT-VLAN-2006
Type: External
IP Address Mask: VLAN-Y CIDR address
Connected To (Segment): segment-vlan2006
NSX Edge Node: Edge-1, Edge-2, Edge-3, Edge-4
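The example above can be driven through the NSX Policy API. This is a minimal sketch for the first segment and the interface on one Edge node; the Manager address, credentials, Tier-0 and locale-services IDs, transport-zone and Edge-node paths, and IP addressing are placeholders.

```python
# Minimal sketch: create a VLAN segment and a Tier-0 external interface through the
# NSX Policy API. Manager address, credentials, IDs, paths, and addresses are placeholders;
# the segment and interface names follow the example above.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")
T0 = "tier0-gw"        # Tier-0 gateway ID (placeholder)
LS = "default"         # locale-services ID (placeholder)

# 1. Create the VLAN segment used as the first uplink.
segment = {
    "vlan_ids": ["2005"],
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                           "transport-zones/<vlan-tz-uuid>",          # placeholder
}
requests.patch(f"{NSX}/policy/api/v1/infra/segments/segment-vlan2005",
               json=segment, auth=AUTH, verify=False).raise_for_status()   # lab only

# 2. Create the external interface on Edge-1; repeat per Edge node with its own IP.
interface = {
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/segment-vlan2005",
    "subnets": [{"ip_addresses": ["198.51.100.2"], "prefix_len": 24}],
    "edge_path": "/infra/sites/default/enforcement-points/default/"
                 "edge-clusters/<edge-cluster-uuid>/edge-nodes/<edge-1-uuid>",  # placeholder
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/{T0}/locale-services/{LS}/"
               "interfaces/EXT-INT-VLAN-2005-edge1",
               json=interface, auth=AUTH, verify=False).raise_for_status()
```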
- Configure dedicated BGP peers (remote and local ASNs) or static routing on the Tier-0 gateway interfaces.
- Enable BFD per BGP neighbor for faster failover. Ensure that the BGP and BFD configuration is between the ToR switches and the NSX Edge nodes (see the sketch below).
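A minimal sketch of the BGP and BFD configuration through the NSX Policy API; the Manager address, credentials, gateway ID, ASNs, and peer address are placeholders.

```python
# Minimal sketch: enable BGP on a Tier-0 gateway and add one ToR peer with BFD.
# Manager address, credentials, gateway ID, ASNs, and the peer address are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")
BGP = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw/locale-services/default/bgp"

# Enable BGP on the Tier-0 gateway with the local ASN.
requests.patch(BGP, json={"enabled": True, "local_as_num": "65001"},
               auth=AUTH, verify=False).raise_for_status()      # lab only

# Add one ToR peer with BFD enabled for faster failover; repeat per ToR switch.
neighbor = {
    "neighbor_address": "198.51.100.1",
    "remote_as_num": "65000",
    "bfd": {"enabled": True, "interval": 500, "multiple": 3},
}
requests.patch(f"{BGP}/neighbors/tor-a",
               json=neighbor, auth=AUTH, verify=False).raise_for_status()
```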
- Set route redistribution for Tier-0 and Tier-1 routes.
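A minimal sketch, assuming route redistribution is configured on the Tier-0 locale-services object; the Manager address, credentials, gateway ID, and selected route types are placeholders.

```python
# Minimal sketch: redistribute connected Tier-0 and Tier-1 routes into BGP
# through the NSX Policy API. Manager address, credentials, and IDs are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")

redistribution = {
    "route_redistribution_config": {
        "bgp_enabled": True,
        "redistribution_rules": [{
            "name": "t0-t1-routes",
            "route_redistribution_types": ["TIER0_CONNECTED", "TIER1_CONNECTED"],
        }],
    }
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw/locale-services/default",
               json=redistribution, auth=AUTH, verify=False).raise_for_status()   # lab only
```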
- Reserve VLAN IDs and IP addresses to be used to configure a service interface, which connects VLAN-backed segments or logical switches to VLAN-backed physical or virtual workloads. This interface acts as a gateway for these VLAN-backed workloads and is supported on both Tier-0 and Tier-1 gateways configured in active/standby HA mode.
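A minimal sketch of a service interface, modeled here as a Tier-0 interface of type SERVICE attached to a VLAN-backed segment through the Policy API; the Manager address, credentials, segment, and addressing are placeholders.

```python
# Minimal sketch: create a service interface on a Tier-0 gateway acting as the
# gateway for VLAN-backed workloads. Manager address, credentials, segment name,
# and addressing are placeholders.
import requests

NSX = "https://nsx-mgr-01.example.com"
AUTH = ("admin", "changeme")

service_if = {
    "type": "SERVICE",
    "segment_path": "/infra/segments/vlan-workload-segment",              # placeholder VLAN-backed segment
    "subnets": [{"ip_addresses": ["10.20.30.1"], "prefix_len": 24}],      # gateway IP for the VLAN workloads
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw/locale-services/default/"
               "interfaces/svc-int-workload",
               json=service_if, auth=AUTH, verify=False).raise_for_status()   # lab only
```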