Use this list of design decisions for reference when configuring NSX-T Data Center in an environment with a single or multiple VMware Cloud Foundation instances. The design also considers whether an instance contains a single or multiple availability zones.
The NSX-T Data Center design covers the following areas:
- Physical network infrastructure
- Deployment of and secure access to the NSX-T Data Center nodes
- Dynamic routing configuration and load balancing
- NSX segment organization
- NSX Federation
After you set up the physical network infrastructure, the configuration tasks for most design decisions are automated in VMware Cloud Foundation. You must perform the configuration manually only for a limited number of decisions as noted in the design implication.
For full design details, see NSX-T Data Center Design for the Management Domain.
Physical Network Infrastructure Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-PHY-001 | Use two ToR switches for each rack. | Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity. | Requires two ToR switches per rack, which can increase costs. |
| VCF-MGMT-NSX-PHY-002 | Implement the following physical network architecture: | | |
| VCF-MGMT-NSX-PHY-003 | Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks. | | None. |
| VCF-MGMT-NSX-PHY-004 | Use a physical network that is configured for BGP routing adjacency. | | Requires BGP configuration in the physical network. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-PHY-005 | Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use dynamic IP allocation. | Ensures that endpoints have a persistent management IP address. In VMware Cloud Foundation, TEP IP assignment over DHCP is required for advanced deployment topologies such as a management domain with multiple availability zones. | Requires precise IP address management. |
| VCF-MGMT-NSX-PHY-006 | Set the lease duration for the DHCP scope for the host overlay network to at least 7 days. | The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server. | Requires configuration and management of a DHCP server. |
| VCF-MGMT-NSX-PHY-007 | Use VLANs to separate physical network functions. | | Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts. |
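Decision VCF-MGMT-NSX-PHY-006 sets a minimum 7-day lease for the host overlay DHCP scope. The following is a minimal sketch of how such a scope might be rendered for an ISC DHCP server; the subnet, gateway, and address range are placeholder values, not values from this design.

```python
# Minimal sketch: render an ISC dhcpd.conf scope for the host overlay VLAN.
# Subnet, gateway, and range below are placeholders, not values from this design.
LEASE_SECONDS = 7 * 24 * 3600  # 7 days = 604,800 seconds

scope = f"""
# Host overlay (TEP) scope -- lease duration of at least 7 days
default-lease-time {LEASE_SECONDS};
max-lease-time {LEASE_SECONDS};

subnet 172.16.14.0 netmask 255.255.255.0 {{
    range 172.16.14.101 172.16.14.200;
    option routers 172.16.14.1;         # Layer 3 gateway for the host overlay VLAN
    option subnet-mask 255.255.255.0;
}}
""".strip()

print(scope)
```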
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-PHY-008 | Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: | | When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. |
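A quick way to confirm the implication of decision VCF-MGMT-NSX-PHY-008 (the entire path must carry the configured MTU) is to send a non-fragmentable probe across it. This is a minimal sketch run from a Linux jump host; it assumes the Linux `ping` utility (`-M do` sets the don't-fragment bit) and subtracts 28 bytes of IPv4 and ICMP headers from the target MTU. From an ESXi host you would typically use `vmkping` with the `-d` and `-s` options instead.

```python
"""Minimal sketch: verify that a network path carries a given MTU end to end.

Assumes a Linux ping utility (-M do = don't fragment). The 28-byte adjustment
accounts for the IPv4 (20-byte) and ICMP (8-byte) headers.
"""
import subprocess

def path_supports_mtu(target_ip: str, mtu: int = 9000) -> bool:
    payload = mtu - 28  # e.g. a 9,000-byte MTU -> 8,972-byte ICMP payload
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(payload), target_ip],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Placeholder TEP address, not a value from this design.
    print(path_supports_mtu("172.16.14.101", mtu=9000))
```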
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-PHY-009 | Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical inter-availability zone networking components that are part of the network path between availability zones for the following traffic types: | | When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones. |
| VCF-MGMT-NSX-PHY-010 | Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks. | Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway causes disruption in the traffic in the SDN setup. | Requires configuration of a high availability technology for the Layer 3 gateways in the data center. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-PHY-011 | Set the MTU size to at least 1,500 bytes (1,700 bytes preferred, 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances that are part of the network path between the instances for the following traffic types: | | When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size. |
| VCF-MGMT-NSX-PHY-012 | Provide a routed connection between each NSX Manager cluster in each VMware Cloud Foundation instance. | Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters. | Requires unique routable IP addresses for each fault domain. |
| VCF-MGMT-NSX-PHY-013 | Ensure that the latency between VMware Cloud Foundation instances is less than 150 ms. | A latency lower than 150 ms is required for the following features. | None. |
| VCF-MGMT-NSX-PHY-014 | Provide BGP routing between all VMware Cloud Foundation instances. | Automated failover of networks requires a dynamic routing protocol, such as BGP. | None. |
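Decision VCF-MGMT-NSX-PHY-013 requires less than 150 ms of latency between VMware Cloud Foundation instances. The sketch below measures a rough round-trip time with a plain TCP connection from the Python standard library; the target FQDN and port are placeholders, not values from this design.

```python
"""Minimal sketch: estimate round-trip latency between VCF instances.

Uses a TCP connect as a rough RTT probe; the host and port are placeholders.
"""
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.perf_counter() - start) * 1000)
    return statistics.median(rtts)

if __name__ == "__main__":
    rtt = tcp_rtt_ms("nsx-gm02.example.com")  # placeholder FQDN
    print(f"median RTT: {rtt:.1f} ms, within budget: {rtt < 150}")
```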
NSX Manager Deployment Specification
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-CFG-001 | Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the management domain. | The management components can be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services. | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-CFG-002 | Deploy each node in the NSX Manager cluster for the management domain as a medium-size appliance or larger. | A medium-size appliance is sufficient for providing network services to the management domain components. | If you extend the management domain, increasing the size of the NSX Manager appliances might be required. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-CFG-003 | Create a virtual IP (VIP) address for the NSX Manager cluster for the management domain. | Provides high availability of the user interface and API of NSX Manager. | |
| VCF-MGMT-NSX-CFG-004 | Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances. | Keeps the NSX Manager appliances running on different ESXi hosts for high availability. | You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs. |
| VCF-MGMT-NSX-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Manager appliance to high. | | If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer. |
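Decision VCF-MGMT-NSX-CFG-004 is normally implemented for you by SDDC Manager. As an illustration only, the sketch below shows an equivalent vSphere API call with pyVmomi, assuming an authenticated connection and that the cluster and the three NSX Manager VMs have already been looked up; the rule name and all object names are placeholders.

```python
"""Minimal sketch: create a VM-VM anti-affinity rule for the NSX Manager nodes.

Assumes pyVmomi, an authenticated service instance, and that `cluster` and the
three NSX Manager VM objects have already been resolved; names are placeholders.
"""
from pyVmomi import vim

def add_anti_affinity_rule(cluster, vms, rule_name="anti-affinity-nsx-managers"):
    rule = vim.cluster.AntiAffinityRuleSpec(
        name=rule_name,
        enabled=True,
        mandatory=False,   # advisory rule, as is typical for management appliances
        vm=vms,            # list of vim.VirtualMachine objects (the three NSX Managers)
    )
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
    )
    # Reconfigure the cluster; modify=True merges with the existing configuration.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```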
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-CFG-006 | Add the NSX Manager appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Manager appliances are powered on within the first availability zone. | None. |
NSX Manager Network Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-NET-001 | Place the appliances of the NSX Manager cluster on the management VLAN network in the management domain. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-NET-002 | Allocate a statically assigned IP address and host name to the nodes of the NSX Manager cluster. | Ensures stability across the private cloud, and makes it simpler to maintain and track the nodes and to implement a DNS configuration. | Requires precise IP address management. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-NET-003 | Configure forward and reverse DNS records for the nodes of the NSX Manager cluster for the management domain. | The NSX Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only. | You must provide DNS records for the NSX Manager nodes for the management domain in each VMware Cloud Foundation instance. |
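A quick way to validate decision VCF-MGMT-NSX-NET-003 is to confirm that each node name resolves and that the reverse record points back to the same name. A minimal standard-library sketch, with placeholder FQDNs:

```python
"""Minimal sketch: check forward and reverse DNS records for NSX Manager nodes.

The FQDNs are placeholders, not values from this design.
"""
import socket

NSX_NODES = [
    "nsx-mgmt-01a.example.com",
    "nsx-mgmt-01b.example.com",
    "nsx-mgmt-01c.example.com",
]

def check_dns(fqdn: str) -> bool:
    ip = socket.gethostbyname(fqdn)                 # forward (A) record
    reverse_name, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) record
    ok = reverse_name.lower().rstrip(".") == fqdn.lower()
    print(f"{fqdn} -> {ip} -> {reverse_name} ({'OK' if ok else 'MISMATCH'})")
    return ok

if __name__ == "__main__":
    all(check_dns(node) for node in NSX_NODES)
```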
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-NET-004 | Configure NTP on each NSX Manager appliance. | NSX Manager depends on time synchronization. | None. |
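Decision VCF-MGMT-NSX-NET-004 can be applied or verified per appliance through the NSX Manager node management API. The sketch below assumes the commonly documented `/api/v1/node/services/ntp` endpoint and admin credentials; the appliance FQDN, credentials, and NTP server are placeholders, and TLS verification is disabled only to keep the example short.

```python
"""Minimal sketch: set NTP servers on an NSX Manager appliance.

Assumes the node management endpoint /api/v1/node/services/ntp; the FQDN,
credentials, and NTP server names are placeholders. Validate the request body
against the API reference for your NSX-T version.
"""
import requests

NSX_NODE = "https://nsx-mgmt-01a.example.com"   # placeholder appliance FQDN
AUTH = ("admin", "appliance_password")           # placeholder credentials

body = {
    "service_name": "ntp",
    "service_properties": {
        "servers": ["ntp.example.com"],          # placeholder NTP server
        "start_on_boot": True,
    },
}

resp = requests.put(f"{NSX_NODE}/api/v1/node/services/ntp",
                    json=body, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())
```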
NSX Global Manager Deployment Specification
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-CFG-001 | Deploy three NSX Global Manager nodes for the management domain in the default cluster in the domain for configuring and managing the network services for the management domain components. | Some management components must be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services. | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-CFG-002 | Deploy each node in the NSX Global Manager cluster for the management domain as a medium-size appliance or larger. | A medium-size appliance is sufficient for providing network services to the management components of the private cloud. | If you extend the management domain, increasing the size of the NSX Global Manager appliances might be required. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-CFG-003 | Create a virtual IP (VIP) address for the NSX Global Manager cluster for the management domain. | Provides high availability of the user interface and API of NSX Global Manager. | |
| VCF-MGMT-NSX-FED-CFG-004 | Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances. | Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability. | You must allocate at least four physical hosts so that the three NSX Global Manager appliances continue running if an ESXi host failure occurs. |
| VCF-MGMT-NSX-FED-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium. | | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-CFG-006 | Add the NSX Global Manager appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Global Manager appliances are powered on within the first availability zone. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-CFG-007 | Deploy an additional NSX Global Manager cluster in the second VMware Cloud Foundation instance. | Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs. | Requires additional NSX Global Manager nodes in the second VMware Cloud Foundation instance. |
| VCF-MGMT-NSX-FED-CFG-008 | Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the management domain. | Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs. | None. |
NSX Global Manager Network Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-NET-001 | Place the appliances of the NSX Global Manager cluster on the management VLAN network in each VMware Cloud Foundation instance. | Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX-T Data Center. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-NET-002 | Allocate a statically assigned IP address and host name to the nodes of the NSX Global Manager cluster. | Ensures stability across the private cloud, and makes it simpler to maintain and track the nodes and to implement a DNS configuration. | Requires precise IP address management. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-NET-003 | Configure forward and reverse DNS records for the nodes of the NSX Global Manager cluster for the management domain. | The NSX Global Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only. | You must provide DNS records for the NSX Global Manager nodes for the management domain in each VMware Cloud Foundation instance. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-FED-NET-004 | Configure NTP on each NSX Global Manager appliance. | NSX Global Manager depends on time synchronization across all private cloud components. | None. |
NSX Edge Deployment Specification
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-CFG-001 | Use large-size NSX Edge virtual appliances. | The large-size appliance provides the performance characteristics for supporting the SDDC management components in the management domain. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-CFG-002 | Deploy the NSX Edge virtual appliances in the default management vSphere cluster, sharing the cluster between the management workloads and the edge appliances. | | None. |
| VCF-MGMT-NSX-EDGE-CFG-003 | Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster in the management domain. | Creates the NSX Edge cluster for satisfying the requirements for availability and scale. | None. |
| VCF-MGMT-NSX-EDGE-CFG-004 | Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster. | Keeps the NSX Edge nodes running on different ESXi hosts for high availability. | None. |
| VCF-MGMT-NSX-EDGE-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Edge appliance to high. | | If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer. |
| VCF-MGMT-NSX-EDGE-CFG-006 | Configure all edge nodes as transport nodes. | Enables the participation of edge nodes in the overlay network for delivery of services, such as routing and load balancing, to the SDDC management components. | None. |
| VCF-MGMT-NSX-EDGE-CFG-007 | Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-CFG-008 | Add the NSX Edge appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Edge appliances are powered on within the first availability zone. | None. |
NSX Edge Network Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-NET-001 | Connect the management interface | Provides connection to the NSX Manager cluster. | None. |
| VCF-MGMT-NSX-EDGE-NET-002 | | | None. |
| VCF-MGMT-NSX-EDGE-NET-003 | Use a single N-VDS in the NSX Edge nodes. | | None. |
| VCF-MGMT-NSX-EDGE-NET-004 | Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN. | A dedicated edge overlay network enables edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters. | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-NET-005 | Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN. | | You must allocate another VLAN in the data center infrastructure. |
NSX Edge Uplink Policy Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-EDGE-NET-006 | Create one uplink profile for the edge nodes with three teaming policies. | | None. |
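For decision VCF-MGMT-NSX-EDGE-NET-006, the three teaming policies typically consist of a default load-balance policy plus one named failover-order policy per uplink VLAN. The sketch below illustrates that shape as an NSX Policy API payload; the profile name, uplink names, named-teaming names, and transport VLAN are placeholders, and the field names follow the commonly documented PolicyUplinkHostSwitchProfile schema, so validate them against your NSX-T version before use.

```python
"""Minimal sketch: an NSX Policy API payload for an edge uplink profile with a
default teaming policy plus two named teaming policies (one per uplink VLAN).
All names and the transport VLAN are placeholders.
"""
import json

uplink_profile = {
    "resource_type": "PolicyUplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",           # placeholder
    "transport_vlan": 1605,                          # placeholder edge overlay VLAN
    "teaming": {                                     # default policy: both uplinks active
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [                              # pin each uplink VLAN to one uplink
        {"name": "uplink-1-only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
        {"name": "uplink-2-only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    ],
}

# PUT this to /policy/api/v1/infra/host-switch-profiles/<profile-id>
print(json.dumps(uplink_profile, indent=2))
```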
Life Cycle Management Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-LCM-001 | Use SDDC Manager to perform the life cycle management of NSX Manager and related components in the management domain. | Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process. | The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-LCM-FED-001 | Use the upgrade coordinator in NSX-T Data Center to perform life cycle management on the NSX Global Manager appliances. | The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager. | You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation. You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure prior to the upgrade of the NSX Global Manager nodes. |
| VCF-MGMT-NSX-LCM-FED-002 | Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated regarding the need to upgrade NSX Global Manager. | The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented. | The administrator must establish and follow an operational practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation. |
| VCF-MGMT-NSX-LCM-FED-003 | Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains. | The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented. | The administrator must establish and follow an operational practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation. |
Routing Design for a Single VMware Cloud Foundation Instance
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-001 | Deploy an active-active Tier-0 gateway. | Supports ECMP north-south routing on all edge nodes in the NSX Edge cluster. | Active-active Tier-0 gateways cannot provide stateful services such as NAT. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-002 | To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each edge node in the cluster has an interface on each VLAN. | Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network. | Additional VLANs are required. |
| VCF-MGMT-NSX-SDN-003 | Assign a named teaming policy to the VLAN segments to the Layer 3 device pair. | Pins the VLAN traffic on each segment to its target edge node interface. From there, the traffic is directed to the host physical NIC that is connected to the target top of rack switch. | None. |
| VCF-MGMT-NSX-SDN-004 | Create a VLAN transport zone for edge uplink traffic. | Enables the configuration of VLAN segments on the N-VDS in the edge nodes. | Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-005 | Use BGP as the dynamic routing protocol. | | In environments where BGP cannot be used, you must configure and manage static routes. |
| VCF-MGMT-NSX-SDN-006 | Configure the BGP Keep Alive Timer to 4 seconds and the Hold Down Timer to 12 seconds or lower between the top of rack switches and the Tier-0 gateway. These timers must be aligned with the data center fabric design of your organization. | Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic. | By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. |
| VCF-MGMT-NSX-SDN-007 | Do not enable Graceful Restart between BGP neighbors. | Avoids loss of traffic. On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed. | None. |
| VCF-MGMT-NSX-SDN-008 | Enable helper mode for Graceful Restart mode between BGP neighbors. | Avoids loss of traffic. During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, so that traffic continues to be forwarded while the restarting neighbor recovers its BGP session. | None. |
| VCF-MGMT-NSX-SDN-009 | Enable Inter-SR iBGP routing. | If an edge node loses all of its northbound eBGP sessions, north-south traffic continues to flow by routing the traffic to a different edge node. | None. |
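As a rough illustration of decisions VCF-MGMT-NSX-SDN-006 through VCF-MGMT-NSX-SDN-009, the sketch below shows NSX Policy API payloads with 4/12-second BGP timers, graceful restart in helper-only mode, and inter-SR iBGP enabled. The Tier-0 ID, AS numbers, and neighbor addresses are placeholders, and the field names follow the commonly documented Policy API schema, so confirm them against your NSX-T version.

```python
"""Minimal sketch: Policy API payloads reflecting the BGP decisions above.
All IDs, AS numbers, and addresses are placeholders.
"""
import json

# PATCH /policy/api/v1/infra/tier-0s/<t0-id>/locale-services/<ls-id>/bgp
bgp_config = {
    "enabled": True,
    "local_as_num": "65003",                              # placeholder Tier-0 local AS
    "inter_sr_ibgp": True,                                # VCF-MGMT-NSX-SDN-009
    "graceful_restart_config": {"mode": "HELPER_ONLY"},   # SDN-007 / SDN-008
}

# PATCH /policy/api/v1/infra/tier-0s/<t0-id>/locale-services/<ls-id>/bgp/neighbors/<id>
bgp_neighbor = {
    "neighbor_address": "172.16.17.1",                    # placeholder ToR SVI address
    "remote_as_num": "65001",                             # placeholder ToR AS
    "keep_alive_time": 4,                                 # VCF-MGMT-NSX-SDN-006
    "hold_down_time": 12,
}

print(json.dumps({"bgp": bgp_config, "neighbor": bgp_neighbor}, indent=2))
```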
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-010 | Deploy a Tier-1 gateway and connect it to the Tier-0 gateway. | Creates a two-tier routing architecture. Abstracts the NSX logical components that interact with the physical data center from the logical components that provide SDN services. | A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways. |
| VCF-MGMT-NSX-SDN-011 | Deploy a Tier-1 gateway to the NSX Edge cluster. | Enables stateful services, such as load balancers and NAT, for SDDC management components. Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services. | None. |
| VCF-MGMT-NSX-SDN-012 | Deploy a Tier-1 gateway in non-preemptive failover mode. | Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services and cause a short service outage. | None. |
| VCF-MGMT-NSX-SDN-013 | Enable standby relocation of the Tier-1 gateway. | Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node. | None. |
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-014 | Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones. | Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in. | You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure. |
| VCF-MGMT-NSX-SDN-015 | Provide this SVI configuration on the top of rack switches or upstream Layer 3 devices. | Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs. | You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure. |
| VCF-MGMT-NSX-SDN-016 | Provide this VLAN configuration. | Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network. | |
| VCF-MGMT-NSX-SDN-017 | Create an IP prefix list that permits access to route advertisement by | Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone. | You must manually create an IP prefix list that is identical to the default one. |
| VCF-MGMT-NSX-SDN-018 | Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice. | | You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs. |
| VCF-MGMT-NSX-SDN-019 | Create an IP prefix list that permits access to route advertisement by network | Used in a route map to configure local preference on the learned default route for BGP neighbors in the second availability zone. | You must manually create an IP prefix list that is identical to the default one. |
| VCF-MGMT-NSX-SDN-020 | Apply a route map-in that contains the IP prefix list for the default route | | You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs. |
| VCF-MGMT-NSX-SDN-021 | Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively. | Makes the path in and out of the second availability zone less preferred because the AS path is longer. As a result, all traffic passes through the first zone. | The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs. |
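To make decisions VCF-MGMT-NSX-SDN-017 through VCF-MGMT-NSX-SDN-021 more concrete, the sketch below shows Policy API payloads for a prefix list and the inbound and outbound route maps that de-prefer the second availability zone. All IDs, AS numbers, the local-preference value, and the prefix semantics are placeholders; confirm the exact PrefixEntry and RouteMapEntry schema against your NSX-T version before use.

```python
"""Minimal sketch: prefix list and route maps for de-preferring the second
availability zone. Values are placeholders, not values from this design.
"""
import json

T0 = "/infra/tier-0s/mgmt-t0"                        # placeholder Tier-0 policy path

# Prefix list permitting any network (used by the outbound route map).
any_prefix_list = {
    "display_name": "any-prefix-list",
    "prefixes": [{"network": "0.0.0.0/0", "le": 32, "action": "PERMIT"}],
}

# Route map applied outbound: prepend the Tier-0 local AS twice (SDN-018).
route_map_out = {
    "display_name": "az2-out",
    "entries": [{
        "action": "PERMIT",
        "prefix_list_matches": [f"{T0}/prefix-lists/any-prefix-list"],
        "set": {"as_path_prepend": "65003 65003"},   # placeholder local AS, added twice
    }],
}

# Route map applied inbound: lower the local preference of the default route
# learned from the second availability zone (SDN-019 / SDN-020).
route_map_in = {
    "display_name": "az2-in",
    "entries": [{
        "action": "PERMIT",
        "prefix_list_matches": [f"{T0}/prefix-lists/default-route-prefix-list"],
        "set": {"local_preference": 80},             # placeholder value below the default
    }],
}

print(json.dumps([any_prefix_list, route_map_out, route_map_in], indent=2))
```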
Routing Design for Multiple VMware Cloud Foundation Instances
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-FED-001 | Extend the management domain active-active Tier-0 gateway to the second VMware Cloud Foundation instance. | | Active-active Tier-0 gateways cannot provide stateful services such as NAT. |
| VCF-MGMT-NSX-SDN-FED-002 | Set the Tier-0 gateway as primary for all VMware Cloud Foundation instances. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-FED-003 | From the global Tier-0 gateway, establish BGP neighbor peering to the ToR switches connected to the second VMware Cloud Foundation instance. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-FED-004 | Use Tier-1 gateways to control the span of networks and ingress and egress traffic in the VMware Cloud Foundation instances. | Enables a mixture of network spans (isolated to a VMware Cloud Foundation instance or spanning multiple instances) without requiring additional Tier-0 gateways and hence edge nodes. | To control location span, a Tier-1 gateway must be assigned to an edge cluster and hence has the Tier-1 SR component. East-west traffic between Tier-1 gateways with SRs needs to physically traverse an edge node. |
| VCF-MGMT-NSX-SDN-FED-005 | Use a global cross-instance Tier-1 gateway and connect it to the Tier-0 gateway for cross-instance networking. | | None. |
| VCF-MGMT-NSX-SDN-FED-006 | Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the global cross-instance Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary. | | You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager. |
| VCF-MGMT-NSX-SDN-FED-007 | Allocate a Tier-1 gateway in each instance for instance-specific networks and connect it to the cross-instance Tier-0 gateway. | | None. |
| VCF-MGMT-NSX-SDN-FED-008 | Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the instance-specific Tier-1 gateway for that VMware Cloud Foundation instance. | | You can use the service router that is created for the Tier-1 gateway for networking services. However, such a configuration is not required for network connectivity. |
| VCF-MGMT-NSX-SDN-FED-009 | Set each local-instance Tier-1 gateway only as primary in its home instance. Avoid setting the gateway as secondary in the other instances. | Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference. | None. |
Load Balancing Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-022 | Deploy a standalone Tier-1 gateway to support advanced stateful services such as load balancing for other management components. | Provides independence between north-south Tier-1 gateways to support advanced deployment scenarios. | You must add a separate Tier-1 gateway. |
| VCF-MGMT-NSX-SDN-023 | Connect the standalone Tier-1 gateway to cross-instance NSX segments. | Provides load balancing to applications connected to the cross-instance network. For information on the NSX segment configuration for vRealize Suite, see VMware Validated Design for vRealize Lifecycle and Access Management. | You must connect the gateway to each network that requires load balancing. |
| VCF-MGMT-NSX-SDN-024 | Configure the standalone Tier-1 gateway with static routes to the gateways of the networks it is connected to. | Because the Tier-1 gateway is standalone, it does not autoconfigure its routes. | You must configure the gateway for each network that requires load balancing. |
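As an illustration of decision VCF-MGMT-NSX-SDN-024, the sketch below shows a Policy API payload for one static route on the standalone Tier-1 gateway. The gateway ID, network, and next-hop address are placeholders; confirm the schema against your NSX-T version.

```python
"""Minimal sketch: Policy API payload for a static route on the standalone
Tier-1 gateway used for load balancing. All values are placeholders.
"""
import json

# PATCH /policy/api/v1/infra/tier-1s/<standalone-t1-id>/static-routes/<route-id>
static_route = {
    "display_name": "default-via-xregion-segment-gateway",
    "network": "0.0.0.0/0",                       # route for all networks
    "next_hops": [{
        "ip_address": "192.168.11.1",             # placeholder: gateway of the connected segment
        "admin_distance": 1,
    }],
}

print(json.dumps(static_route, indent=2))
```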
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-FED-010 | Deploy a standalone Tier-1 gateway in the second VMware Cloud Foundation instance. | Provides a cold-standby non-global service router instance for the second VMware Cloud Foundation instance to support services on the cross-instance network which require advanced services not currently supported as NSX-T global objects. | |
| VCF-MGMT-NSX-SDN-FED-011 | Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment. | Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance. | You must connect the gateway to each network that requires load balancing. |
| VCF-MGMT-NSX-SDN-FED-012 | Configure the standalone Tier-1 gateway in the second VMware Cloud Foundation instance with static routes to the gateways of the networks it is connected to. | Because the Tier-1 gateway is standalone, it does not autoconfigure its routes. | You must configure the gateway for each network that requires load balancing. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-FED-013 | Establish an operational practice to reproduce any changes made on the network service configuration on the load balancer instance in the first VMware Cloud Foundation instance to the disconnected failover load balancer in the second instance. | Keeps the network service in the failover load balancer instance ready for activation if a failure in the first VMware Cloud Foundation instance occurs. Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive. | |
| VCF-MGMT-NSX-SDN-FED-014 | Establish an operational practice to ensure that during failure of the first VMware Cloud Foundation instance, a service is manually brought online in the second VMware Cloud Foundation instance. | Provides support for the management applications that are failed over to the second VMware Cloud Foundation instance. Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive. | The administrator must establish and follow an operational practice by using a runbook or automated process to ensure the correct services are brought online. |
| VCF-MGMT-NSX-SDN-FED-015 | Establish an operational practice to manually bring offline the load balancer services in the first VMware Cloud Foundation instance after failover to the second instance and during the recovery of the first instance. | During the recovery of the first VMware Cloud Foundation instance, the service might come back online and cause a potential conflict with the active services running in the second instance. Because network services are not supported as global objects, you must manually fail over services in the recovery VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive. | |
Overlay Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-025 | Enable all ESXi hosts in the management domain as transport nodes in NSX-T Data Center. | Enables distributed routing, logical segments, and distributed firewall. | None. |
| VCF-MGMT-NSX-SDN-026 | Configure each ESXi host as a transport node without using transport node profiles. | Enables the participation of ESXi hosts and the virtual machines on them in NSX overlay and VLAN networks. Transport node profiles can only be applied at the cluster level. Because in an environment with multiple availability zones each availability zone is connected to a different set of VLANs, you cannot use a transport node profile. | You must configure each transport node with an uplink profile individually. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-027 | Use DHCP to assign IP addresses to the host TEP interfaces. | Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones and management clusters that span Layer 3 domains. | A DHCP server is required for the host overlay VLANs. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-028 | Use the vSphere Distributed Switch for the default cluster in the management domain that is enabled for NSX-T Data Center. | To use features such as distributed routing, management workloads must be connected to NSX segments. | Management occurs jointly from the vSphere Client to NSX Manager. However, you must perform all network monitoring in the NSX Manager user interface or another solution. |
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-029 | To provide virtualized network capabilities to management workloads, use overlay networks with NSX Edge nodes and distributed routing. | | Requires configuring transport networks with an MTU size of at least 1,600 bytes. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-030 | Create a single overlay transport zone for all overlay traffic across the management domain and NSX Edge nodes. | | None. |
| VCF-MGMT-NSX-SDN-031 | Create a single VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes. | Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes. | If VLAN segments are needed on hosts, you must create another VLAN transport zone for the host transport nodes only. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-032 | Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts. | For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-033 | Use hierarchical two-tier replication on all overlay segments. | Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts the source ESXi host must replicate traffic to. | None. |
NSX Segments Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-AVN-001 | Create one cross-instance NSX segment for the components of a vRealize Suite application or another solution that requires mobility between VMware Cloud Foundation instances. | Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as the vRealize Suite, without a complex physical network configuration. The components of the vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration. | Each NSX segment requires a unique IP address space. |
| VCF-MGMT-NSX-SDN-AVN-002 | Create one or more local-instance NSX segments for the components of a vRealize Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance. | Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as the vRealize Suite, without a complex physical network configuration. | Each NSX segment requires a unique IP address space. |
| VCF-MGMT-NSX-SDN-AVN-003 | Use overlay-backed NSX segments. | | Using overlay-backed NSX segments requires routing, eBGP recommended, between the data center fabric and edge nodes. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SDN-AVN-004 | Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance. | Enables workload mobility without a complex physical network configuration. The components of a vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration. | Each NSX segment requires a unique IP address space. |
| VCF-MGMT-NSX-SDN-AVN-005 | In each VMware Cloud Foundation instance, create additional local-instance NSX segments. | Enables workload mobility within a VMware Cloud Foundation instance without a complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads that are isolated to that VMware Cloud Foundation instance. | Each NSX segment requires a unique IP address space. |
| VCF-MGMT-NSX-SDN-AVN-006 | In each VMware Cloud Foundation instance, connect or migrate the local-instance NSX segments to the corresponding local-instance Tier-1 gateway. | Configures local-instance NSX segments at required sites only. | Requires an individual Tier-1 gateway for local-instance segments. |
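As an illustration of decisions VCF-MGMT-NSX-SDN-AVN-001 and VCF-MGMT-NSX-SDN-AVN-003, the sketch below shows a Policy API payload for an overlay-backed segment attached to the cross-instance Tier-1 gateway. The segment name, gateway CIDR, and the transport zone and Tier-1 paths are placeholders, not values from this design.

```python
"""Minimal sketch: Policy API payload for an overlay-backed, cross-instance NSX
segment connected to a Tier-1 gateway. All names and paths are placeholders.
"""
import json

# PATCH /policy/api/v1/infra/segments/<segment-id>
segment = {
    "display_name": "xregion-seg01",                        # placeholder
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/overlay-tz"                        # placeholder overlay transport zone
    ),
    "connectivity_path": "/infra/tier-1s/xregion-t1",       # placeholder cross-instance Tier-1
    "subnets": [{"gateway_address": "192.168.11.1/24"}],    # placeholder, unique per segment
}

print(json.dumps(segment, indent=2))
```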
Information Security and Access Control Design
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SEC-001 | Replace the default self-signed certificate of the NSX Manager instance for the management domain with a certificate that is signed by a third-party certificate authority. | Ensures that the communication between administrators and NSX Manager is encrypted by using a trusted certificate. | Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests. |
| VCF-MGMT-NSX-SEC-002 | Use a SHA-2 algorithm or stronger when signing certificates. | The SHA-1 algorithm is considered less secure and has been deprecated. | Not all certificate authorities support SHA-2. |
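Decision VCF-MGMT-NSX-SEC-002 requires SHA-2 or stronger signatures. The following is a minimal sketch of generating a SHA-256-signed certificate signing request with the Python `cryptography` package; the subject values and file names are placeholders, and submitting the CSR to the certificate authority and importing the signed certificate into NSX Manager remain separate steps.

```python
"""Minimal sketch: generate a private key and a SHA-256-signed CSR for an NSX
Manager node. Subject values and file names are placeholders.
"""
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "nsx-mgmt-01.example.com"),  # placeholder
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),        # placeholder
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("nsx-mgmt-01.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())  # SHA-2 family signature, per VCF-MGMT-NSX-SEC-002
)

with open("nsx-mgmt-01.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("nsx-mgmt-01.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```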
| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-MGMT-NSX-SEC-FED-001 | Replace the default self-signed certificate of the NSX Global Manager instance for the management domain with a certificate that is signed by a third-party certificate authority. | Ensures that the communication between administrators and the NSX Global Manager instance is encrypted by using a trusted certificate. | Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests. |
| VCF-MGMT-NSX-SEC-FED-002 | Establish an operational practice to capture and update the thumbprint of the NSX Local Manager certificate on NSX Global Manager every time the certificate is updated by using SDDC Manager. | Ensures secured connectivity between the NSX Manager instances. Each certificate has its own unique thumbprint. NSX Global Manager stores the unique thumbprint of the NSX Local Manager instances for enhanced security. If an authentication failure between NSX Global Manager and NSX Local Manager occurs, objects that are created from NSX Global Manager will not be propagated on to the SDN. | The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that the thumbprint is up to date. |
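The operational practice in decision VCF-MGMT-NSX-SEC-FED-002 depends on knowing the current thumbprint of the NSX Local Manager certificate. The sketch below retrieves it with the Python standard library; the FQDN is a placeholder, and updating the stored thumbprint on NSX Global Manager remains a separate, version-specific step.

```python
"""Minimal sketch: capture the SHA-256 thumbprint of an NSX Local Manager
certificate so it can be recorded and updated on NSX Global Manager after a
certificate replacement. The FQDN is a placeholder.
"""
import hashlib
import ssl

def certificate_thumbprint(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    digest = hashlib.sha256(der).hexdigest()
    # Render in the colon-separated form commonly used for thumbprints.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()

if __name__ == "__main__":
    print(certificate_thumbprint("nsx-lm-01.example.com"))  # placeholder FQDN
```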