The following sections describe the components in the solution and how they are relevant to the network virtualization design.

NSX-T Manager

NSX-T Manager provides the graphical user interface (GUI) and the RESTful API for creating, configuring, and monitoring NSX-T components, such as logical switches.

NSX-T Manager implements the management plane for the NSX-T infrastructure. NSX-T Manager provides an aggregated system view and is the centralized network management component of NSX-T. It provides a method for monitoring and troubleshooting workloads attached to virtual networks. It provides configuration and orchestration of the following services:

  • Logical networking components, such as logical switching and routing

  • Networking and edge services

  • Security services and distributed firewall

NSX-T Manager also provides a RESTful API endpoint to automate consumption. Because of this architecture, you can automate all configuration and monitoring operations using any cloud management platform, security vendor platform, or automation framework.
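
For example, the following minimal sketch lists logical switches by calling the NSX-T Manager RESTful API. The manager address, the credentials, and the /api/v1/logical-switches endpoint are assumptions for illustration; verify the endpoint and authentication method against the API documentation for your NSX-T version.

    # Minimal sketch: query the NSX-T Manager API for logical switches.
    # The manager FQDN and credentials are placeholders.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    response = requests.get(
        f"{NSX_MANAGER}/api/v1/logical-switches",   # assumed MP API endpoint
        auth=AUTH,
        verify=False,                               # lab only; use trusted certificates in production
    )
    response.raise_for_status()
    for switch in response.json().get("results", []):
        print(switch["id"], switch["display_name"])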

The NSX-T Management Plane Agent (MPA) is an NSX-T Manager component that is available on each ESXi host. The MPA is responsible for persisting the desired state of the system and for communicating non-flow-controlling (NFC) messages, such as configuration, statistics, status, and real-time data, between transport nodes and the management plane.

Table 1. NSX-T Manager Design Decisions

Decision ID: NSXT-VI-SDN-003
Design Decision: Deploy NSX-T Manager as a large size virtual appliance.
Design Justification: The large-size appliance supports more than 64 ESXi hosts. The small-size appliance is for proof of concept and the medium size only supports up to 64 ESXi hosts.
Design Implications: The large size requires more resources in the management cluster.

Decision ID: NSXT-VI-SDN-004
Design Decision:
  • Grant administrators access to both the NSX-T Manager UI and its RESTful API endpoint.
  • Restrict end-user access to the RESTful API endpoint configured for end-user provisioning, such as vRealize Automation or Pivotal Container Service (PKS).
Design Justification: Ensures that tenants or non-provider staff cannot modify infrastructure components. End users typically interact only indirectly with NSX-T from their provisioning portal. Administrators interact with NSX-T using its UI and API.
Design Implications: End users have access only to end-point components.

NSX-T Controller

An NSX-T Controller controls virtual networks and overlay transport tunnels.

For stability and reliability of data transport, the NSX-T Controller is deployed as a cluster of three highly available virtual appliances that are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture.

The Central Control Plane (CCP) is logically separated from all data plane traffic; that is, a failure in the control plane does not affect existing data plane operations. The controller provides configuration to other NSX-T Controller components, such as the logical switches, logical routers, and edge virtual machine configuration.
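
As an illustration only, the state of the management and control planes can also be checked over the API. The /api/v1/cluster/status endpoint and the response field names used below are assumptions to verify against the API documentation for your NSX-T version.

    # Minimal sketch: check the NSX-T cluster status, including the control cluster.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    status = requests.get(
        f"{NSX_MANAGER}/api/v1/cluster/status",     # assumed endpoint
        auth=AUTH,
        verify=False,                               # lab only
    ).json()
    # Field names are illustrative; inspect the actual payload on your system.
    print(status.get("control_cluster_status", {}).get("status"))
    print(status.get("mgmt_cluster_status", {}).get("status"))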

Table 2. NSX-T Controller Design Decision

Decision ID: NSXT-VI-SDN-005
Design Decision: Deploy the NSX-T Controller cluster in the management cluster with three members for high availability and scale.
Design Justification: The high availability of the NSX-T Controllers reduces the downtime period if a failure of one physical ESXi host occurs.
Design Implications: None.

NSX-T Virtual Distributed Switch

An NSX-T Virtual Distributed Switch (N-VDS) runs on ESXi hosts and provides physical traffic forwarding. It transparently provides the underlying forwarding service that each logical switch relies on. To achieve network virtualization, a network controller must configure the ESXi host virtual switch with network flow tables that form the logical broadcast domains the tenant administrators define when they create and configure logical switches.

NSX-T implements each logical broadcast domain by tunneling VM-to-VM traffic and VM-to-logical router traffic using the Geneve tunnel encapsulation mechanism. The network controller has a global view of the data center and ensures that the ESXi host virtual switch flow tables are updated as VMs are created, moved, or removed.
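
As a hedged sketch, an operator can confirm which ESXi hosts are configured as transport nodes, and therefore carry an N-VDS and tunnel endpoints, by querying the API as shown below. The /api/v1/transport-nodes endpoint and the field names are assumptions for illustration.

    # Minimal sketch: list the transport nodes that participate in the overlay.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    nodes = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes",    # assumed MP API endpoint
        auth=AUTH,
        verify=False,                               # lab only
    ).json().get("results", [])
    for node in nodes:
        print(node.get("id"), node.get("display_name"))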

Table 3. NSX-T N-VDS Design Decision

Decision ID: NSXT-VI-SDN-006
Design Decision: Deploy an N-VDS instance to each ESXi host in the shared edge and compute cluster.
Design Justification: ESXi hosts in the shared edge and compute cluster provide tunnel endpoints for Geneve overlay encapsulation.
Design Implications: None.

Logical Switching

NSX-T logical switches create logically abstracted segments to which you can connect tenant workloads. A single logical switch is mapped to a unique Geneve segment that is distributed across the ESXi hosts in a transport zone. The logical switch supports line-rate switching in the ESXi host without the constraints of VLAN sprawl or spanning tree issues.
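
For illustration, a logical switch mapped to an overlay transport zone might be created through the API as sketched below. The endpoint, the payload fields, and the transport zone ID are assumptions; check the API reference for your NSX-T release.

    # Minimal sketch: create an overlay-backed logical switch.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    payload = {
        "display_name": "tenant-web-ls",            # example name
        "transport_zone_id": "<overlay-tz-uuid>",   # placeholder, not a real UUID
        "admin_state": "UP",
        "replication_mode": "MTEP",                 # assumed replication mode value
    }
    response = requests.post(
        f"{NSX_MANAGER}/api/v1/logical-switches",   # assumed MP API endpoint
        json=payload,
        auth=AUTH,
        verify=False,                               # lab only
    )
    response.raise_for_status()
    print(response.json()["id"])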

Table 4. NSX-T Logical Switching Design Decision

Decision ID: NSXT-VI-SDN-007
Design Decision: Deploy all workloads on NSX-T logical switches.
Design Justification: To take advantage of features such as distributed routing, tenant workloads must be connected to NSX-T logical switches.
Design Implications: You must perform all network monitoring in the NSX-T Manager UI or vRealize Network Insight.

Logical Routers

NSX-T logical routers provide North-South connectivity so that workloads can access external networks, and East-West connectivity between different logical networks.

A logical router is a configured partition of a traditional network hardware router. It replicates the functionality of the hardware, creating multiple routing domains in a single router. Logical routers perform a subset of the tasks that are handled by the physical router, and each can contain multiple routing instances and routing tables. Using logical routers can be an effective way to maximize router use, because a set of logical routers within a single physical router can perform the operations previously performed by several pieces of equipment.

  • Distributed router (DR)

    A DR spans the ESXi hosts whose virtual machines are connected to the logical router, and the edge nodes to which the logical router is bound. Functionally, the DR is responsible for one-hop distributed routing between logical switches and logical routers connected to this logical router.

  • One or more optional service routers (SRs).

    An SR is responsible for delivering services that are not currently implemented in a distributed fashion, such as stateful NAT.

A logical router always has a DR. A logical router has SRs when it is a Tier-0 router, or when it is a Tier-1 router and has routing services configured such as NAT or DHCP.
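
As a hedged sketch, a Tier-1 logical router could be created through the API as shown below. Attaching it to a Tier-0 router and to logical switches requires additional port and routing calls that are omitted here, and the endpoint and field values are assumptions to verify against your NSX-T version.

    # Minimal sketch: create a Tier-1 logical router.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    payload = {
        "display_name": "tenant-tier1",
        "router_type": "TIER1",                     # assumed value; a Tier-0 router would use TIER0
        "high_availability_mode": "ACTIVE_STANDBY", # assumed; relevant once an SR is instantiated
        "edge_cluster_id": "<edge-cluster-uuid>",   # placeholder; needed for SR-backed services
    }
    response = requests.post(
        f"{NSX_MANAGER}/api/v1/logical-routers",    # assumed MP API endpoint
        json=payload,
        auth=AUTH,
        verify=False,                               # lab only
    )
    response.raise_for_status()
    print(response.json()["id"])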

Tunnel Endpoint

Tunnel endpoints enable ESXi hosts to participate in an NSX-T overlay. The NSX-T overlay deploys a Layer 2 network on top of an existing Layer 3 network fabric by encapsulating frames inside packets and transferring the packets over an underlying transport network. The underlying transport network can be another Layer 2 network or it can cross Layer 3 boundaries. The Tunnel Endpoint (TEP) is the connection point at which the encapsulation and decapsulation take place.
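
To illustrate the encapsulation concept only (NSX-T performs this in the hypervisor datapath, not in user scripts), the sketch below packs the 8-byte Geneve base header described in RFC 8926, in which a 24-bit Virtual Network Identifier (VNI) identifies the overlay segment.

    # Minimal sketch: pack a Geneve base header (RFC 8926) without options.
    import struct

    def geneve_base_header(vni, protocol=0x6558, opt_len_words=0, oam=False, critical=False):
        """Return the 8-byte Geneve base header; 0x6558 is the EtherType for a bridged Ethernet payload."""
        byte0 = (0 << 6) | (opt_len_words & 0x3F)       # version 0, option length in 4-byte words
        byte1 = (int(oam) << 7) | (int(critical) << 6)  # O and C flags; remaining bits reserved
        vni_field = (vni & 0xFFFFFF) << 8               # 24-bit VNI followed by a reserved byte
        return struct.pack("!BBHI", byte0, byte1, protocol, vni_field)

    header = geneve_base_header(vni=5001)
    assert len(header) == 8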

NSX-T Edges

NSX-T Edges provide routing services and connectivity to networks that are external to the NSX-T deployment. You use an NSX-T Edge to establish external connectivity from the NSX-T domain through a Tier-0 router that uses BGP or static routing. Additionally, you deploy an NSX-T Edge to support network address translation (NAT) services at either the Tier-0 or Tier-1 logical routers.

The NSX-T Edge connects isolated, stub networks to shared uplink networks by providing common gateway services such as NAT and dynamic routing.
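
As an illustration, a source NAT rule might be added to a logical router through the API as sketched below. The endpoint path, the rule fields, and the router ID are assumptions to confirm against the API reference for your NSX-T version.

    # Minimal sketch: add an SNAT rule to a logical router (Tier-0, or Tier-1 with an SR).
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials
    ROUTER_ID = "<logical-router-uuid>"             # placeholder

    payload = {
        "action": "SNAT",                           # assumed action value
        "match_source_network": "172.16.10.0/24",   # example internal network
        "translated_network": "10.10.10.1",         # example external address
        "enabled": True,
    }
    response = requests.post(
        f"{NSX_MANAGER}/api/v1/logical-routers/{ROUTER_ID}/nat/rules",  # assumed endpoint
        json=payload,
        auth=AUTH,
        verify=False,                               # lab only
    )
    response.raise_for_status()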

Logical Firewall

NSX-T handles traffic in and out of the network according to firewall rules.

A logical firewall offers multiple sets of configurable Layer 3 and Layer 2 rules. Layer 2 firewall rules are processed before Layer 3 rules. You can configure an exclusion list to exclude logical switches, logical ports, or groups from firewall enforcement.

The default rule, which is located at the bottom of the rule table, is a catch-all rule. The logical firewall enforces the default rule on packets that do not match any other rule. After the host preparation operation, the default rule is set to the allow action. Change this default rule to a block action to enforce access control through a positive control model, in which only traffic defined in a firewall rule can flow on the network.
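
A hedged sketch of changing the default Layer 3 rule from allow to block over the API follows. The section and rule endpoints, the way the default rule is identified, and the action values are assumptions; confirm them in the API documentation for your NSX-T version before applying anything to a live environment.

    # Minimal sketch: set the default distributed firewall rule action to DROP.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials
    SECTION_ID = "<default-layer3-section-uuid>"    # placeholder for the default Layer 3 section

    rules_url = f"{NSX_MANAGER}/api/v1/firewall/sections/{SECTION_ID}/rules"  # assumed endpoint
    rules = requests.get(rules_url, auth=AUTH, verify=False).json()["results"]
    default_rule = rules[-1]                        # assumption: the default rule sits at the bottom
    default_rule["action"] = "DROP"                 # assumed action value for block
    requests.put(
        f"{rules_url}/{default_rule['id']}",
        json=default_rule,
        auth=AUTH,
        verify=False,                               # lab only
    ).raise_for_status()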

Logical Load Balancer

The NSX-T logical load balancer offers high-availability service for applications and distributes the network traffic load among multiple servers.

The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and determines which pool server to use.

The logical load balancer is supported only on the Tier-1 logical router.
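
For illustration, a server pool for the load balancer might be defined through the API as sketched below; a virtual server, an application profile, and a load balancer service attached to the Tier-1 logical router are also required but are omitted for brevity. The endpoint and payload fields are assumptions to verify against your NSX-T version.

    # Minimal sketch: create a load balancer server pool with two static members.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

    payload = {
        "display_name": "web-pool",
        "algorithm": "ROUND_ROBIN",                 # assumed algorithm value
        "members": [                                # example static members
            {"ip_address": "172.16.10.11", "port": "443"},
            {"ip_address": "172.16.10.12", "port": "443"},
        ],
    }
    response = requests.post(
        f"{NSX_MANAGER}/api/v1/loadbalancer/pools", # assumed MP API endpoint
        json=payload,
        auth=AUTH,
        verify=False,                               # lab only
    )
    response.raise_for_status()
    print(response.json()["id"])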