Use this list of design decisions as a reference for configuring NSX-T Data Center in an environment with a single or multiple VMware Cloud Foundation instances. The design also considers whether an instance contains a single availability zone or multiple availability zones.
The NSX-T Data Center design covers the following areas:
- Physical network infrastructure
- Deployment of and secure access to the NSX-T Data Center nodes
- Dynamic routing configuration and load balancing
- NSX segment organization
- NSX Federation and Tier-0 gateway configuration for north-south routing
After you set up the physical network infrastructure, the configuration tasks for most design decisions are automated in VMware Cloud Foundation. You must perform the configuration manually only for a limited number of decisions, as noted in the design implications.
For full design details, see NSX-T Data Center Design for a Virtual Infrastructure Workload Domain.
Physical Network Infrastructure Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-PHY-001 | Use two ToR switches for each rack. | Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity. | Requires two ToR switches per rack, which can increase costs. |
| VCF-WLD-NSX-PHY-002 | Implement the following physical network architecture: | | |
| VCF-WLD-NSX-PHY-003 | Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks. | | None. |
| VCF-WLD-NSX-PHY-004 | Use a physical network that is configured for BGP routing adjacency. | | Requires BGP configuration in the physical network. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-PHY-005 | Assign persistent IP configurations to each management component in the SDDC, with the exception of NSX tunnel endpoints (TEPs) that use dynamic IP allocation. | Ensures that endpoints have a persistent management IP address. In VMware Cloud Foundation, you assign storage (vSAN and NFS) and vSphere vMotion IP configurations by using user-defined network pools. | Requires precise IP address management. |
| VCF-WLD-NSX-PHY-006 | Set the lease duration for the DHCP scope for the host overlay network to at least 7 days. | IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server. | Requires configuration and management of a DHCP server. |
| VCF-WLD-NSX-PHY-007 | Use VLANs to separate physical network functions. | | Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-PHY-008 | Set the MTU size to at least 1,700 bytes (9,000 bytes recommended for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: | | When adjusting the MTU size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU size. See the validation sketch after this table. |
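
To confirm that every hop in the path honors the chosen MTU, you can send do-not-fragment pings at the target size from an ESXi transport node. The following is a minimal sketch; the host names, the TEP address, and the `vxlan` netstack name (used by NSX TEP VMkernel ports) are assumptions to adapt to your environment, and the payload size subtracts the 28 bytes of IP and ICMP headers from the MTU.

```python
# Minimal MTU path check: run vmkping with the do-not-fragment flag from an
# ESXi transport node over SSH. Host names and TEP address are hypothetical.
import subprocess

ESXI_HOST = "sfo01-w01-esx01.example.com"   # assumed ESXi transport node
PEER_TEP = "172.16.14.12"                   # assumed TEP of another transport node
MTU = 1700                                  # minimum MTU for Geneve overlay traffic
PAYLOAD = MTU - 28                          # subtract IP (20) and ICMP (8) headers

# ++netstack=vxlan selects the TEP VMkernel network stack; -d sets the DF bit.
cmd = f"vmkping ++netstack=vxlan -d -s {PAYLOAD} -c 3 {PEER_TEP}"
result = subprocess.run(["ssh", f"root@{ESXI_HOST}", cmd],
                        capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(f"MTU {MTU} is NOT supported end to end:\n{result.stderr}")
```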

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-PHY-009 | Set the MTU size to at least 1,700 bytes (9,000 bytes recommended for jumbo frames) on the physical inter-availability zone networking components which are part of the network path between availability zones for the following traffic types: | | When adjusting the MTU size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU size. In multi-AZ deployments, the MTU must be configured on the entire network path between availability zones. |
| VCF-WLD-NSX-PHY-010 | Configure VRRP, HSRP, or another Layer 3 gateway availability method. | Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway if a failure of an availability zone occurs. Otherwise, a failure in the Layer 3 gateway causes traffic disruption in the SDN setup. | Requires configuration of a high availability technology for the Layer 3 gateways in the data center. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-PHY-011 | Set the MTU size to at least 1,500 bytes (1,700 bytes preferred, 9,000 bytes recommended for jumbo frames) on the physical inter-instance networking components which are part of the network path between instances for edge RTEP traffic. | | When adjusting the MTU size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU size. |
| VCF-WLD-NSX-PHY-012 | Provide a connection between VMware Cloud Foundation instances that is capable of routing between each NSX Manager cluster. | Configuring NSX Federation requires connectivity between NSX Global Managers, NSX Local Managers, and NSX Edge clusters. | Requires unique routable IP addresses for each instance. |
| VCF-WLD-NSX-PHY-013 | Ensure that latency between instances is less than 150 ms. | A latency below 150 ms is required for the following features: | None. See the latency check sketch after this table. |
| VCF-WLD-NSX-PHY-014 | Provide BGP routing between all VMware Cloud Foundation instances. | Automated failover of networks requires a dynamic routing protocol, such as BGP. | None. |
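
Before enabling NSX Federation, you can roughly verify the 150-ms requirement by timing TCP handshakes against the NSX Manager in the other instance. This sketch only approximates round-trip time with connect latency, and the remote host name is a hypothetical placeholder.

```python
# Rough inter-instance RTT check: time TCP handshakes to the remote NSX
# Manager VIP (hypothetical host name) and compare against the 150 ms limit.
import socket
import statistics
import time

REMOTE = ("lax-w01-nsx01.example.com", 443)   # assumed remote NSX Manager VIP

samples = []
for _ in range(10):
    start = time.perf_counter()
    with socket.create_connection(REMOTE, timeout=2):
        pass                                  # handshake only, then close
    samples.append((time.perf_counter() - start) * 1000)   # milliseconds

rtt = statistics.median(samples)
print(f"median connect RTT: {rtt:.1f} ms")
assert rtt < 150, "latency exceeds the 150 ms requirement for NSX Federation"
```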
NSX Manager Deployment Specification

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-CFG-001 | Deploy three NSX Manager nodes for the VI workload domain in the first cluster in the management domain for configuring and managing the network services for customer workloads. | Customer workloads can be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-CFG-002 | Deploy each node in the NSX Manager cluster for the workload domain as a large-size appliance. | A large-size appliance is sufficient for providing network services to the SDDC tenant workloads. | You must provide enough compute and storage resources in the management domain to support this NSX Manager cluster. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-CFG-003 | Create a virtual IP (VIP) address for the NSX Manager cluster for the VI workload domain. | Provides high availability of the user interface and API of NSX Manager. See the API sketch after this table. | |
| VCF-WLD-NSX-CFG-004 | Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances. | Keeps the NSX Manager appliances running on different ESXi hosts for high availability. | |
| VCF-WLD-NSX-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Manager appliance to high. | | If the restart priority for another management appliance is set to highest, the connectivity delays for services will be longer. |
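
If you need to set or verify the cluster VIP outside the user interface, the NSX cluster API exposes it directly. A minimal sketch follows; the node FQDN, credentials, and VIP address are hypothetical, and you should confirm the endpoint against the API reference for your NSX-T version.

```python
# Set and then read back the NSX Manager cluster VIP through the cluster API.
# Node FQDN, credentials, and VIP address are hypothetical.
import requests

NSX = "https://sfo-w01-nsx01a.example.com"   # any one cluster node
AUTH = ("admin", "***")
VIP = "172.16.11.71"                         # assumed cluster VIP

r = requests.post(f"{NSX}/api/v1/cluster/api-virtual-ip"
                  f"?action=set_virtual_ip&ip_address={VIP}",
                  auth=AUTH, verify=False)   # verify=False for lab use only
r.raise_for_status()
print(requests.get(f"{NSX}/api/v1/cluster/api-virtual-ip",
                   auth=AUTH, verify=False).json())
```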

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-CFG-006 | Add the NSX Manager appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Manager appliances are powered on within the primary availability zone host group. | None. |
NSX Manager Network Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-NET-001 | Place the appliances of the NSX Manager cluster on the management VLAN in the management domain. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-NET-002 | Allocate a statically assigned IP address and host name to the nodes of the NSX Manager cluster. | Ensures stability across the SDDC, and simplifies maintenance, tracking, and the implementation of a DNS configuration. | Requires precise IP address management. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-NET-003 | Configure forward and reverse DNS records for the nodes of the NSX Manager cluster for the VI workload domain. | The NSX Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only. See the resolution check sketch after this table. | You must provide DNS records for the NSX Manager nodes for the VI workload domain in each VMware Cloud Foundation instance. |
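
A quick way to confirm that both record types exist for every node and for the VIP is to resolve each name forward and then reverse. A small sketch with hypothetical node names:

```python
# Verify forward (A) and reverse (PTR) DNS records for the NSX Manager
# nodes and cluster VIP. FQDNs are hypothetical.
import socket

NODES = ["sfo-w01-nsx01a.example.com", "sfo-w01-nsx01b.example.com",
         "sfo-w01-nsx01c.example.com", "sfo-w01-nsx01.example.com"]  # VIP last

for fqdn in NODES:
    ip = socket.gethostbyname(fqdn)                 # forward lookup
    reverse_fqdn = socket.gethostbyaddr(ip)[0]      # reverse lookup
    status = "OK" if reverse_fqdn.lower() == fqdn.lower() else "MISMATCH"
    print(f"{fqdn} -> {ip} -> {reverse_fqdn}: {status}")
```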

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-NET-004 | Configure NTP on each NSX Manager appliance. | NSX Manager depends on time synchronization. See the node API sketch after this table. | None. |
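
NTP servers can be set per appliance through the NSX node management API. A minimal sketch, assuming hypothetical appliance FQDNs, credentials, and NTP server; confirm the payload shape against the API reference for your NSX-T version.

```python
# Configure NTP on each NSX Manager appliance through the node API.
# Appliance FQDNs, credentials, and NTP server are hypothetical.
import requests

NODES = ["sfo-w01-nsx01a.example.com", "sfo-w01-nsx01b.example.com",
         "sfo-w01-nsx01c.example.com"]
AUTH = ("admin", "***")
BODY = {"service_name": "ntp",
        "service_properties": {"servers": ["ntp.sfo.example.com"]}}

for node in NODES:
    r = requests.put(f"https://{node}/api/v1/node/services/ntp",
                     json=BODY, auth=AUTH, verify=False)  # lab use only
    r.raise_for_status()
    print(node, "->", r.json()["service_properties"]["servers"])
```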
NSX Global Manager Deployment Specification

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-CFG-001 | Deploy three NSX Global Manager nodes for the VI workload domain in the default management cluster. | Some customer workloads must be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewall services. | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-CFG-002 | Deploy each node in the NSX Global Manager cluster for the VI workload domain as a large-size appliance. | A large-size appliance is sufficient for providing network services to the SDDC customer workloads. | You must provide enough compute and storage resources in the management domain to support this NSX Global Manager cluster. If you extend the workload domain, increasing the size of the NSX Global Manager appliances might be required. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-CFG-003 | Create a virtual IP (VIP) address for the NSX Global Manager cluster for the VI workload domain. | Provides high availability of the user interface and API of NSX Global Manager. | |
| VCF-WLD-NSX-FED-CFG-004 | Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Global Manager appliances. | Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability. | |
| VCF-WLD-NSX-FED-CFG-005 | In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium. | | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-CFG-006 | Add the NSX Global Manager appliances to the virtual machine group for the first availability zone. | Ensures that the NSX Global Manager appliances, by default, are powered on within the first availability zone. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-CFG-007 | Deploy an additional NSX Global Manager cluster in the second VMware Cloud Foundation instance. | Enables recoverability of NSX Global Manager in a second VMware Cloud Foundation instance if a failure in the first instance occurs. | Requires additional NSX Global Manager nodes in the second VMware Cloud Foundation instance. |
| VCF-WLD-NSX-FED-CFG-008 | Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the VI workload domain. | Enables recoverability of NSX Global Manager in a second VMware Cloud Foundation instance if a failure in the first instance occurs. | None. |
NSX Global Manager Network Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-NET-001 | Place the appliances of the NSX Global Manager cluster on the management VLAN network in the default management cluster in the management domain. | Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX-T Data Center. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-NET-002 | Allocate a statically assigned IP address and host name to the nodes of the NSX Global Manager cluster. | Ensures stability across the SDDC, and simplifies maintenance, tracking, and the implementation of a DNS configuration. | Requires precise IP address management. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-NET-003 | Configure forward and reverse DNS records for the nodes of the NSX Global Manager cluster for the VI workload domain, assigning the record to the child domain in the region. | The NSX Global Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only. | You must provide DNS records for the NSX Global Manager nodes for the VI workload domain in each VMware Cloud Foundation instance. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-FED-NET-004 | Configure NTP on each NSX Global Manager appliance. | NSX Global Manager depends on time synchronization across all SDDC components. | None. |
NSX Edge Deployment Specification

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-CFG-001 | Use large-size NSX Edge virtual appliances. | The large-size appliance provides the required performance characteristics for most tenant workloads. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-CFG-002 | Deploy the NSX Edge virtual appliances in a shared edge and workload cluster in the VI workload domain. | | NSX Edge appliances are co-located with customer workloads. Ensure that customer workloads do not prevent NSX Edge nodes from handling network traffic. |
| VCF-WLD-NSX-EDGE-CFG-003 | Deploy two NSX Edge appliances in the edge cluster in the shared edge and workload cluster. | Creates the edge cluster for satisfying the requirements for availability and scale. | None. |
| VCF-WLD-NSX-EDGE-CFG-004 | Create a resource pool for the NSX Edge appliances in the root of the shared edge and workload cluster object. Create a resource pool in the root of the shared edge and workload cluster object for customer workloads. | Guarantees that the edge cluster receives sufficient compute resources during times of contention. See the resource pool sketch after this table. | |
| VCF-WLD-NSX-EDGE-CFG-005 | Configure the edge resource pool with a 64-GB memory reservation and a normal CPU share value. | Guarantees that the edge cluster receives sufficient memory resources during times of contention. | Edge appliances might not be able to use their allocated CPU capacity during times of contention. |
| VCF-WLD-NSX-EDGE-CFG-006 | Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster. | Keeps the NSX Edge nodes running on different ESXi hosts for high availability. | None. |
| VCF-WLD-NSX-EDGE-CFG-007 | In vSphere HA, set the restart priority policy for each NSX Edge appliance to high. | | If the restart priority for another customer workload is set to highest, the connectivity delays for other virtual machines will be longer. |
| VCF-WLD-NSX-EDGE-CFG-008 | Configure all edge nodes as transport nodes. | Enables the participation of edge nodes in the overlay network for delivery of services to the SDDC workloads, such as routing and load balancing. | None. |
| VCF-WLD-NSX-EDGE-CFG-009 | Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster. | | None. |
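
The edge resource pool decisions (VCF-WLD-NSX-EDGE-CFG-004 and -005) can be implemented with pyVmomi. The following is a sketch only; the vCenter Server FQDN, credentials, cluster name, and pool name are hypothetical, and the 64-GB reservation is expressed in megabytes as the vSphere API expects.

```python
# Create the edge resource pool with a 64-GB memory reservation and normal
# CPU shares by using pyVmomi. Names and credentials are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="sfo-w01-vc01.example.com",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

# Locate the shared edge and workload cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "sfo-w01-cl01")
view.Destroy()

def allocation(reservation_mb):
    # Expandable reservation with normal shares; limit -1 means unlimited.
    return vim.ResourceAllocationInfo(
        reservation=reservation_mb, expandableReservation=True, limit=-1,
        shares=vim.SharesInfo(level="normal", shares=0))

spec = vim.ResourceConfigSpec(cpuAllocation=allocation(0),
                              memoryAllocation=allocation(64 * 1024))
cluster.resourcePool.CreateResourcePool(name="sfo-w01-edge-rp", spec=spec)
Disconnect(si)
```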

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-CFG-010 | Add the NSX Edge appliances to the virtual machine group for the first availability zone. | Ensures that, by default, the NSX Edge appliances are powered on within the primary availability zone host group. | None. |
NSX Edge Network Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-NET-001 | Connect the management interface of each NSX Edge node to the management VLAN. | Provides connection to the NSX Manager cluster. | None. |
| VCF-WLD-NSX-EDGE-NET-002 | | Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX Edge node itself for easy post-deployment configuration. | None. |
| VCF-WLD-NSX-EDGE-NET-003 | Use a single N-VDS in the NSX Edge nodes. | | None. |
| VCF-WLD-NSX-EDGE-NET-004 | Use a dedicated VLAN for the edge overlay network that is segmented from the host overlay VLAN. | The edge overlay network must be isolated from the host overlay network to protect the host overlay traffic from edge-generated overlay traffic. | |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-NET-005 | Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN. | | You must allocate another VLAN in the data center infrastructure for edge RTEP overlay. |
NSX Edge Uplink Policy Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-EDGE-NET-006 | Create one uplink profile for the edge node with three teaming policies: | | None. See the uplink profile sketch after this table. |
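
Uplink profiles, including named teaming policies, can be created through the NSX API. Because the three teaming policies are not enumerated above, the layout shown here (a default load-balance policy plus two failover-order policies pinned to each uplink) is an illustrative assumption, as are the manager FQDN, profile name, and transport VLAN; confirm the field names against the API reference for your NSX-T version.

```python
# Sketch: create an edge uplink profile with a default teaming policy and
# two named teaming policies via the NSX API. All names are hypothetical.
import requests

NSX = "https://sfo-w01-nsx01.example.com"
AUTH = ("admin", "***")

def teaming(policy, uplinks):
    return {"policy": policy,
            "active_list": [{"uplink_name": u, "uplink_type": "PNIC"}
                            for u in uplinks]}

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "sfo-w01-edge-uplink-profile",
    "transport_vlan": 1614,                       # assumed edge overlay VLAN
    "teaming": teaming("LOADBALANCE_SRCID", ["uplink-1", "uplink-2"]),
    "named_teamings": [
        dict(teaming("FAILOVER_ORDER", ["uplink-1"]), name="uplink-1-only"),
        dict(teaming("FAILOVER_ORDER", ["uplink-2"]), name="uplink-2-only"),
    ],
}
r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False)  # lab use only
r.raise_for_status()
print(r.json()["id"])
```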
Life Cycle Management Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-LCM-001 | Use SDDC Manager to perform the life cycle management of NSX Manager and related components in the workload domain. | Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the workload domain as a single process. | The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-LCM-FED-001 | Use the upgrade coordinator in NSX-T Data Center to perform life cycle management on the NSX Global Manager appliances. | The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager. | You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation, and you must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure first. An upgrade of the VI workload domain from SDDC Manager might include an upgrade of the NSX Local Manager cluster, which might require an upgrade of the NSX Global Manager cluster. An upgrade of NSX Global Manager might then require that you upgrade all other VI workload domains connected to it before you can proceed with upgrading the NSX Global Manager instance. |
| VCF-WLD-NSX-LCM-FED-002 | Establish an operations practice to ensure that prior to the upgrade of any VI workload domain, the impact of any version upgrades is evaluated against the need to upgrade NSX Global Manager. | The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented. | The administrator must establish and follow an operational practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation. |
| VCF-WLD-NSX-LCM-FED-003 | Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and VI workload domains. | The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because the version of SDDC Manager in this design does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented. | The administrator must establish and follow an operational practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation. |
Routing Design for a Single VMware Cloud Foundation Instance

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-001 | Deploy an active-active Tier-0 gateway. | Supports ECMP north-south routing on all edge nodes in the NSX Edge cluster. | Active-active Tier-0 gateways cannot provide stateful services such as NAT. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-002 | To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node in the cluster has an interface on each VLAN. | Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network. | Additional VLANs are required. |
| VCF-WLD-NSX-SDN-003 | Assign a named teaming policy to the VLAN segments that connect to the Layer 3 device pair. | Pins the VLAN traffic on each segment to its target edge node interface. From there, the traffic is directed to the host physical NIC that is connected to the target top of rack switch. | None. |
| VCF-WLD-NSX-SDN-004 | Create a VLAN transport zone for edge uplink traffic. | Enables the configuration of VLAN segments on the N-VDS in the edge nodes. | Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-005 | Use BGP as the dynamic routing protocol. | | In environments where BGP cannot be used, you must configure and manage static routes. |
| VCF-WLD-NSX-SDN-006 | Configure the BGP Keep Alive Timer to 4 seconds and the Hold Down Timer to 12 seconds between the top of rack switches and the Tier-0 gateway. These timers must be aligned with the data center fabric design of your organization. | Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic. See the API sketch after this table. | By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. |
| VCF-WLD-NSX-SDN-007 | Do not enable Graceful Restart between BGP neighbors. | Avoids loss of traffic. On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed. | None. |
| VCF-WLD-NSX-SDN-008 | Enable helper mode for Graceful Restart between BGP neighbors. | Avoids loss of traffic. During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, which continues to forward packets to the restarting neighbor even after the BGP timers have expired, preventing loss of traffic. | None. |
| VCF-WLD-NSX-SDN-009 | Enable Inter-SR iBGP routing. | If all northbound eBGP sessions of an edge node are down, north-south traffic continues to flow by routing the traffic to a different edge node. | None. |
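
Decisions VCF-WLD-NSX-SDN-006 through -008 map to a few properties on the Tier-0 gateway in the NSX-T Policy API. A hedged sketch follows; the manager FQDN, Tier-0 and locale-services IDs, and neighbor values are placeholders, and field names such as `keep_alive_time` and `graceful_restart_config` should be confirmed against the API reference for your NSX-T version.

```python
# Apply the BGP timer and graceful restart decisions to a Tier-0 gateway
# via the NSX-T Policy API. Manager, IDs, and neighbor values are hypothetical.
import requests

NSX = "https://sfo-w01-nsx01.example.com"
AUTH = ("admin", "***")
BGP = f"{NSX}/policy/api/v1/infra/tier-0s/sfo-w01-t0/locale-services/default/bgp"

# SDN-007/008: disable full graceful restart but keep helper mode.
requests.patch(BGP, json={"graceful_restart_config": {"mode": "HELPER_ONLY"}},
               auth=AUTH, verify=False).raise_for_status()

# SDN-006: keepalive 4 s and hold-down 12 s, set per BGP neighbor.
neighbor = {"neighbor_address": "172.16.17.1", "remote_as_num": "65001",
            "keep_alive_time": 4, "hold_down_time": 12}
requests.patch(f"{BGP}/neighbors/az1-tor-a", json=neighbor,
               auth=AUTH, verify=False).raise_for_status()
```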

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-010 | Deploy a Tier-1 gateway and connect it to the Tier-0 gateway. | Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services. | A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways. |
| VCF-WLD-NSX-SDN-011 | Deploy the Tier-1 gateway to the NSX Edge cluster. | Enables stateful services, such as load balancers and NAT, for customer workloads. Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services. See the Policy API sketch after this table. | None. |
| VCF-WLD-NSX-SDN-012 | Deploy the Tier-1 gateway in non-preemptive failover mode. | Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services, which would cause a short service outage. | None. |
| VCF-WLD-NSX-SDN-013 | Enable standby relocation of the Tier-1 gateway. | Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node. | None. |
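
Decisions VCF-WLD-NSX-SDN-010 through -013 translate to a handful of Tier-1 gateway properties in the NSX-T Policy API. A minimal sketch, with hypothetical gateway IDs and an assumed edge cluster path; confirm the fields against your version's API reference.

```python
# Create a Tier-1 gateway connected to the Tier-0, in non-preemptive
# failover mode with standby relocation, and place it on the edge cluster.
# Manager, gateway IDs, and edge cluster path are hypothetical.
import requests

NSX = "https://sfo-w01-nsx01.example.com"
AUTH = ("admin", "***")

tier1 = {
    "tier0_path": "/infra/tier-0s/sfo-w01-t0",        # SDN-010
    "failover_mode": "NON_PREEMPTIVE",                # SDN-012
    "enable_standby_relocation": True,                # SDN-013
    "route_advertisement_types": ["TIER1_CONNECTED"],
}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/sfo-w01-t1",
               json=tier1, auth=AUTH, verify=False).raise_for_status()

# SDN-011: placing the gateway on an edge cluster enables stateful services.
locale = {"edge_cluster_path":
          "/infra/sites/default/enforcement-points/default/edge-clusters/<uuid>"}
requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/sfo-w01-t1/locale-services/default",
               json=locale, auth=AUTH, verify=False).raise_for_status()
```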

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-014 | Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones. | Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in. | You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure. |
| VCF-WLD-NSX-SDN-015 | Provide this SVI configuration on the top of rack switches or upstream Layer 3 devices: | Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs. | You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure. |
| VCF-WLD-NSX-SDN-016 | Provide this VLAN configuration: | Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network. | Extra VLANs are required. Requires stretching the uplink VLANs between availability zones. |
| VCF-WLD-NSX-SDN-017 | Create an IP prefix list that permits access to route advertisement by | Used in a route map to prepend one or more autonomous system numbers to the AS path (AS-path prepend) for BGP neighbors in Availability Zone 2. | You must manually create an IP prefix list that is identical to the default one. |
| VCF-WLD-NSX-SDN-018 | Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice. See the Policy API sketch after this table. | | You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or of the availability zone occurs. |
| VCF-WLD-NSX-SDN-019 | Create an IP prefix list that permits access to route advertisement by network | Used in a route map to configure local preference on the learned default route for BGP neighbors in the second availability zone. | You must manually create an IP prefix list that is identical to the default one. |
| VCF-WLD-NSX-SDN-020 | Apply a route map-in that contains the IP prefix list for the default route | | You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or of the availability zone occurs. |
| VCF-WLD-NSX-SDN-021 | Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively. | Makes the path in and out of the second availability zone less preferred because the AS path is longer. As a result, all traffic passes through the first zone. | The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or of the availability zone occurs. |
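
For illustration, the prefix-list and route-map objects from VCF-WLD-NSX-SDN-018 through -020 can be created on the Tier-0 gateway through the Policy API. The sketch below uses a hypothetical manager, object IDs, and AS number, and because the exact prefix-list contents are elided in the decisions above, the `0.0.0.0/0` entry is only an example; confirm the object shapes against your version's API reference.

```python
# Sketch: create a default-route prefix list and a route map that prepends
# the Tier-0 local AS twice, via the NSX-T Policy API. All IDs and the
# AS number 65100 are illustrative placeholders.
import requests

NSX = "https://sfo-w01-nsx01.example.com"
AUTH = ("admin", "***")
T0 = f"{NSX}/policy/api/v1/infra/tier-0s/sfo-w01-t0"

# Prefix list matching the default route (example entry only).
prefix_list = {"prefixes": [{"network": "0.0.0.0/0", "action": "PERMIT"}]}
requests.patch(f"{T0}/prefix-lists/default-route", json=prefix_list,
               auth=AUTH, verify=False).raise_for_status()

# Route map that prepends the local AS twice for the matched prefixes.
route_map = {"entries": [{
    "prefix_list_matches": ["/infra/tier-0s/sfo-w01-t0/prefix-lists/default-route"],
    "action": "PERMIT",
    "set": {"as_path_prepend": "65100 65100"},     # Tier-0 local AS, twice
}]}
requests.patch(f"{T0}/route-maps/az2-as-prepend", json=route_map,
               auth=AUTH, verify=False).raise_for_status()
```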
Routing Design for Multiple VMware Cloud Foundation Instances

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-FED-001 | Extend the VI workload domain active-active Tier-0 gateway to the second VMware Cloud Foundation instance. | | Active-active Tier-0 gateways cannot provide stateful services such as NAT. |
| VCF-WLD-NSX-SDN-FED-002 | Set the Tier-0 gateway as primary for all VMware Cloud Foundation instances. | | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-FED-003 | From the global Tier-0 gateway, establish BGP neighbor peering to the ToR switches connected to the second VMware Cloud Foundation instance. | | None. |
Overlay Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-022 | Enable all ESXi hosts in the VI workload domain as transport nodes in NSX-T Data Center. | Enables distributed routing, logical segments, and distributed firewall. | None. |
| VCF-WLD-NSX-SDN-023 | Configure each ESXi host as a transport node without using transport node profiles. | | You must configure each transport node with an uplink profile individually. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-024 | Use DHCP to assign IP addresses to the host TEP interfaces. | Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones or VI workload domain clusters that span Layer 3 domains. | A DHCP server is required for the host overlay VLANs. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-025 | Use a vSphere Distributed Switch for the shared edge and workload cluster that is enabled for NSX-T Data Center. | To use features such as distributed routing, customer workloads must be connected to NSX segments. | Management occurs jointly from the vSphere Client and NSX Manager. However, you must perform all network monitoring in the NSX Manager user interface or another solution. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-026 | To provide virtualized network capabilities to customer workloads, use overlay networks with NSX Edge nodes and distributed routing. | | Requires configuring transport networks with an MTU size of at least 1,700 bytes. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-027 | Create a single overlay transport zone for all overlay traffic across the VI workload domain and NSX Edge nodes. | | None. |
| VCF-WLD-NSX-SDN-028 | Create a single VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes. | Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes. See the transport zone sketch after this table. | If VLAN segments are needed on hosts, you must create another VLAN transport zone for the host transport nodes only. |
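
Both transport zones can be created through the NSX API. A minimal sketch, assuming hypothetical display names, manager FQDN, and N-VDS name, using the manager API of the NSX-T versions this design targets; confirm against your API reference.

```python
# Create the overlay and VLAN transport zones via the NSX API.
# Manager FQDN, display names, and host switch name are hypothetical.
import requests

NSX = "https://sfo-w01-nsx01.example.com"
AUTH = ("admin", "***")

for name, tz_type in [("sfo-w01-tz-overlay", "OVERLAY"),
                      ("sfo-w01-tz-vlan", "VLAN")]:
    body = {"display_name": name,
            "transport_type": tz_type,
            "host_switch_name": "sfo-w01-nvds01"}   # assumed N-VDS name
    r = requests.post(f"{NSX}/api/v1/transport-zones",
                      json=body, auth=AUTH, verify=False)  # lab use only
    r.raise_for_status()
    print(name, "->", r.json()["id"])
```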

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-029 | Create an uplink profile with the load balance source teaming policy with two active uplinks for ESXi hosts. | For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes. | None. |

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SDN-030 | Use hierarchical two-tier replication on all NSX-T overlay segments. | Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic. | None. |
Information Security and Access Control Design

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SEC-001 | Replace the default self-signed certificate of the NSX Manager instance for the VI workload domain with a certificate that is signed by a third-party certificate authority. | Ensures that the communication between NSX-T Data Center administrators and the NSX Manager instance is encrypted by using a trusted certificate. | Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests. |
| VCF-WLD-NSX-SEC-002 | Use a SHA-2 algorithm or stronger when signing certificates. | The SHA-1 algorithm is considered less secure and has been deprecated. See the verification sketch after this table. | Not all certificate authorities support SHA-2. |
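
To confirm that a deployed certificate is signed with a SHA-2 family algorithm, you can inspect the signature hash of the certificate the manager presents. A small sketch using the `cryptography` package against a hypothetical NSX Manager FQDN:

```python
# Check the signature hash algorithm of the certificate presented by an
# NSX Manager node. Hypothetical FQDN; requires the 'cryptography' package.
import ssl
from cryptography import x509

HOST = "sfo-w01-nsx01.example.com"

pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())
algo = cert.signature_hash_algorithm.name        # for example, 'sha256'
print(f"{HOST}: signed with {algo}")
assert algo in ("sha256", "sha384", "sha512"), "certificate is not SHA-2 signed"
```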

| Decision ID | Design Decision | Design Justification | Design Implication |
| --- | --- | --- | --- |
| VCF-WLD-NSX-SEC-FED-001 | Replace the default self-signed certificate of the NSX Global Manager instance for the VI workload domain with a certificate that is signed by a third-party certificate authority. | Ensures that the communication between NSX-T Data Center administrators and the NSX Global Manager instance is encrypted by using a trusted certificate. | Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests. |
| VCF-WLD-NSX-SEC-FED-002 | Establish an operations practice to capture and update on the NSX Global Manager the thumbprint of the NSX Local Manager certificate every time the certificate is updated by using SDDC Manager. | Ensures secured connectivity between the NSX Manager instances. Each certificate has its own unique thumbprint. The NSX Global Manager stores the unique thumbprint of the NSX Local Manager instances for enhanced security. If an authentication failure between the NSX Global Manager and NSX Local Manager occurs, objects that are created from the NSX Global Manager will not be propagated to the SDN. | The administrator must establish and follow an operational practice, by using a runbook or automated process, to ensure that the thumbprint is up to date. See the thumbprint sketch after this table. |
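
Capturing the current thumbprint of an NSX Local Manager certificate can be scripted so that the runbook step is repeatable. A minimal sketch against a hypothetical Local Manager FQDN; it computes the SHA-256 thumbprint of the certificate presented on port 443, which is the thumbprint format NSX Federation generally expects.

```python
# Compute the SHA-256 thumbprint of the certificate presented by an NSX
# Local Manager, for updating the NSX Global Manager. Hypothetical FQDN.
import hashlib
import ssl

HOST = "sfo-w01-nsx01.example.com"

pem = ssl.get_server_certificate((HOST, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
thumbprint = hashlib.sha256(der).hexdigest()
print(f"{HOST} SHA-256 thumbprint: {thumbprint}")
```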