With NSX-T Data Center, virtualization delivers for networking what it has already delivered for compute and storage.
Just as server virtualization programmatically creates, takes snapshots of, deletes, and restores software-based VMs, NSX network virtualization programmatically creates, takes snapshots of, deletes, and restores software-based virtual networks. As a result, you can follow a simplified operational model for the underlying physical network.
NSX-T Data Center is a non-disruptive solution: you can deploy it on any IP network, including both traditional networking models and next-generation fabric architectures, regardless of the vendor. This is possible because NSX decouples the virtual networks from their physical counterparts.
NSX Manager
NSX Manager is the centralized network management component of NSX-T Data Center. It implements the management and control plane for the NSX infrastructure.
NSX Manager provides the following:
- The Graphical User Interface (GUI) and the RESTful API for creating, configuring, and monitoring NSX-T components, such as segments and gateways.
- An aggregated system view.
- A method for monitoring and troubleshooting workloads attached to virtual networks.
- Configuration and orchestration of the following services:
  - Logical networking components, such as logical switching and routing
  - Networking and edge services
  - Security services and distributed firewall
- A RESTful API endpoint to automate consumption. Because of this architecture, you can automate all configuration and monitoring operations using any cloud management platform, security vendor platform, or automation framework.
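As an illustration of that automation surface, the sketch below builds the URL and JSON body for creating a segment through the NSX-T Policy API. The manager address, segment name, and transport zone path are placeholders, and the API path shown is an assumption to verify against your NSX-T version's API reference.

```python
import json

def build_segment_request(manager, segment_id, transport_zone_path):
    """Build the URL and JSON body for a PATCH that creates or updates
    a segment through the NSX-T Policy API (path assumed; verify it
    against your NSX-T version)."""
    url = f"https://{manager}/policy/api/v1/infra/segments/{segment_id}"
    body = {
        "display_name": segment_id,
        # Policy path of the transport zone the segment belongs to.
        "transport_zone_path": transport_zone_path,
    }
    return url, json.dumps(body)

# Placeholder values for illustration only.
url, body = build_segment_request(
    "nsx-mgr.example.local",
    "web-segment-01",
    "/infra/sites/default/enforcement-points/default/transport-zones/overlay-tz",
)
print(url)
# → https://nsx-mgr.example.local/policy/api/v1/infra/segments/web-segment-01
```

The request itself would then be sent with any authenticated HTTP client; the same payload shape can be driven from a cloud management platform or automation framework.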
Some of the components of the NSX Manager are as follows:
- NSX Management Plane Agent (MPA): Available on each ESXi host. The MPA persists the desired state of the system and communicates Non-Flow-Controlling (NFC) messages such as configuration, statistics, status, and real-time data between transport nodes and the management plane.
- NSX Controller: Controls the virtual networks and overlay transport tunnels. The controllers are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture.
- Central Control Plane (CCP): Logically separated from all data plane traffic. A failure in the control plane does not affect existing data plane operations. The CCP provides configuration to other NSX Controller components, such as segment, gateway, and edge VM configuration.
NSX Virtual Distributed Switch
The NSX-managed Virtual Distributed Switch (N-VDS) runs on ESXi hosts and forwards traffic between the components running on the host and the physical network. It provides the underlying forwarding service that each segment relies on. To implement network virtualization, a network controller must configure the ESXi host virtual switch with network flow tables, which form the logical broadcast domains that tenant administrators define when they create and configure segments.
NSX-T Data Center implements each logical broadcast domain by tunneling VM-to-VM traffic and VM-to-gateway traffic using the Geneve tunnel encapsulation mechanism. The network controller has a global view of the data center and ensures that the virtual switch flow tables in the ESXi host are updated as the VMs are created, moved, or removed.
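The encapsulation is why the underlay needs a larger MTU: every VM packet gains an inner Ethernet header, a Geneve header, and outer UDP and IP headers. A back-of-the-envelope sketch using the standard header sizes shows why the common guidance is an underlay MTU of at least 1600 bytes:

```python
# Geneve encapsulation overhead, using standard header sizes.
INNER_ETHERNET = 14   # encapsulated frame's Ethernet header
GENEVE_BASE = 8       # fixed Geneve header; options add 4-byte multiples
OUTER_UDP = 8         # outer UDP header (Geneve uses port 6081)
OUTER_IPV4 = 20       # outer IPv4 header

def required_underlay_mtu(vm_mtu=1500, geneve_options=0):
    """Minimum underlay IP MTU that carries a full-sized VM packet
    without fragmentation."""
    return (vm_mtu + INNER_ETHERNET + GENEVE_BASE
            + geneve_options + OUTER_UDP + OUTER_IPV4)

print(required_underlay_mtu())          # 1550 with no Geneve options
print(required_underlay_mtu(1500, 50))  # 1600 with 50 bytes of options
```

A standard 1500-byte VM MTU therefore needs at least 1550 bytes on the underlay, and the 1600-byte minimum leaves headroom for Geneve options.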
NSX-T Data Center implements N-VDS in Standard and Enhanced modes. Enhanced data path is a networking stack mode that provides superior network performance. The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. N-VDS(E) also supports traffic flowing through Edge VMs. You can configure the overlay traffic and VLAN traffic in the enhanced data path mode.
With N-VDS configured in the enhanced data path mode, if a single logical core is associated with a vNIC, the logical core processes bidirectional traffic of a vNIC. When multiple logical cores are configured, the host automatically determines which logical core must process a vNIC's traffic.
Transport Zones
Transport zones determine which hosts can use a particular network. A transport zone identifies the type of traffic, such as VLAN or overlay, and the N-VDS name. You can configure one or more transport zones. A transport zone does not represent a security boundary.
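Transport zones can also be created over the API. The sketch below uses the NSX-T management-plane endpoint; the manager address and names are placeholders, and the exact path and field names are assumptions to verify against your version's API reference.

```python
import json

def build_transport_zone_request(manager, name, transport_type, host_switch_name):
    """Build the URL and JSON body for a POST that creates a transport
    zone (endpoint assumed from the NSX-T management-plane API)."""
    if transport_type not in ("OVERLAY", "VLAN"):
        raise ValueError("transport_type must be OVERLAY or VLAN")
    url = f"https://{manager}/api/v1/transport-zones"
    body = {
        "display_name": name,
        "transport_type": transport_type,
        # Name of the N-VDS that hosts in this zone use.
        "host_switch_name": host_switch_name,
    }
    return url, json.dumps(body)

# Placeholder values: one overlay zone and one VLAN zone on the same N-VDS.
for tz_name, tz_type in (("overlay-tz", "OVERLAY"), ("vlan-tz", "VLAN")):
    url, body = build_transport_zone_request(
        "nsx-mgr.example.local", tz_name, tz_type, "nvds-01"
    )
```

Creating both a VLAN and an overlay zone against the same N-VDS name mirrors the design decision later in this section.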
Logical Switching
NSX segments provide logically abstracted layer 2 broadcast domains to which workloads can be connected. Each segment is mapped to a unique Geneve segment ID that is distributed across the ESXi hosts in a transport zone. Segments support switching in the ESXi host without the constraints of VLAN sprawl or spanning-tree issues.
Gateways
NSX Gateways provide North-South connectivity, so that workloads can access external networks, and East-West connectivity between different logical networks.
A gateway is a configured partition of a traditional hardware router. It replicates the hardware's functionality, creating multiple routing domains in a single router. Gateways perform a subset of the tasks handled by the physical router, and each gateway can contain multiple routing instances and routing tables. Using gateways is an effective way to maximize router utilization.
- Distributed Router: A distributed router (DR) spans the ESXi hosts whose VMs are connected to the gateway, as well as the edge nodes the gateway is bound to. Functionally, the DR is responsible for one-hop distributed routing between segments and other gateways connected to this gateway.
- Service Router: A service router (SR) delivers services that are not currently implemented in a distributed fashion, such as stateful Network Address Translation (NAT). A gateway always has a DR. A gateway has an SR when it is a Tier-0 Gateway, or when it is a Tier-1 Gateway with services configured such as load balancing, NAT, or Dynamic Host Configuration Protocol (DHCP).
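The two-tier model shows up directly in the API: a Tier-1 Gateway is linked to a Tier-0 Gateway by referencing the Tier-0's policy path. The sketch below is hypothetical; the identifiers are placeholders, and the path and field names are assumptions to verify against your version's Policy API reference.

```python
import json

def build_tier1_request(manager, t1_id, tier0_id):
    """Build the URL and JSON body for a PATCH that creates a Tier-1
    Gateway connected to an existing Tier-0 Gateway (Policy API path
    and field names assumed; verify per NSX-T version)."""
    url = f"https://{manager}/policy/api/v1/infra/tier-1s/{t1_id}"
    body = {
        "display_name": t1_id,
        # Linking to the Tier-0 provides the North-South path. The DR is
        # instantiated automatically; an SR appears only when services
        # such as NAT or load balancing are configured on the gateway.
        "tier0_path": f"/infra/tier-0s/{tier0_id}",
    }
    return url, json.dumps(body)

# Placeholder identifiers for illustration only.
url, body = build_tier1_request("nsx-mgr.example.local", "t1-web", "t0-main")
```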
| Design Decision | Design Justification | Design Implication |
|---|---|---|
| Deploy a three-node NSX Manager cluster using the large-sized appliance to configure and manage all NSX-based compute clusters. | The large-sized appliance supports more than 64 ESXi hosts. The small-sized appliance is suitable only for proof of concept, and the medium-sized appliance supports up to 64 ESXi hosts only. | The large-sized deployment requires more resources in the management cluster. |
| Create a VLAN and an overlay transport zone. | Ensures that all segments are available to all ESXi hosts and edge VMs configured as transport nodes. | None |
| Deploy an N-VDS in enhanced data path mode to each NSX compute cluster. | Provides a high-performance network stack for NFV workloads. | |
| Use large-sized NSX Edge VMs. | The large-sized appliance provides the performance characteristics that are required if a failure occurs. | Large-sized edge VMs consume more CPU and memory resources. |
| Deploy at least two large-sized NSX Edge VMs in the vSphere edge cluster. | Creates an NSX Edge cluster that meets availability requirements. | None |
| Create an uplink profile with the load balance source teaming policy and two active uplinks for ESXi hosts. | Supports the concurrent use of two physical NICs on the ESXi hosts by creating two TEPs, for increased resiliency and performance. | None |
| Create a second uplink profile with the load balance source teaming policy and two active uplinks for edge VMs. | Supports the concurrent use of two virtual NICs on the edge VMs by creating two TEPs, for increased resiliency and performance. | None |
| Create a transport node policy with the VLAN and overlay transport zones, the N-VDS(E) settings, and the physical NICs per site. | Allows the policy to be assigned directly to the vSphere cluster and ensures consistent configuration across all ESXi hosts in the cluster. | You must create all required transport zones before creating the transport node policy. |
| Create two VLANs to enable ECMP between the Tier-0 Gateway and the Layer 3 device (ToR or upstream device). The ToR switches or the upstream Layer 3 devices have an SVI on one of the two VLANs. Each edge VM has an interface on each VLAN. | Supports multiple equal-cost routes on the Tier-0 Gateway and provides more resiliency and better bandwidth use in the network. | Extra VLANs are required. |
| Deploy an Active-Active Tier-0 Gateway. | Supports ECMP North-South routing on all edge VMs in the NSX Edge cluster. | Active-Active Tier-0 Gateways cannot provide stateful services such as NAT. If you deploy a solution that requires stateful services on the Tier-0 Gateway, you must deploy the Tier-0 Gateway in Active-Standby mode. |
| Deploy a Tier-1 Gateway to the NSX Edge cluster and connect it to the Tier-0 Gateway. | Creates a two-tier routing architecture that supports load balancers and NAT. Because the Tier-1 Gateway is always Active-Standby, services such as load balancers and NAT can be created on it. | None |
| Deploy Tier-1 Gateways with the non-preemptive setting. | Ensures that when a failed edge transport node comes back online, it does not preempt the active node and move services back to itself, which would cause a brief service interruption. | None |
| Replace the certificate of the NSX Manager instances with a certificate that is signed by a third-party Public Key Infrastructure. | Ensures that the communication between NSX administrators and the NSX Manager instances is encrypted by using a trusted certificate. | Replacing and managing certificates is an operational overhead. |
| Replace the NSX Manager cluster certificate with a certificate that is signed by a third-party Public Key Infrastructure. | Ensures that the communication between the virtual IP address of the NSX Manager cluster and NSX administrators is encrypted by using a trusted certificate. | Replacing and managing certificates is an operational overhead. |