This section describes the common NSX concepts that are used in the documentation and the user interface.
- Compute Manager
- A compute manager is an application that manages resources such as hosts and VMs. NSX supports VMware vCenter as a compute manager.
- Control Plane
- Computes the runtime state of the system based on configuration from the management plane. The control plane disseminates topology information reported by the data plane elements and pushes stateless configuration to the forwarding engines (transport nodes). The NSX control plane is split into two components: the Central Control Plane (CCP) and the Local Control Plane (LCP). The CCP is implemented on the NSX Manager cluster, while the LCP runs on all NSX transport nodes.
- Corfu services
- Services that run on each NSX Manager node to provide Corfu, the highly available distributed datastore.
- Data Plane
- Performs stateless forwarding or transformation of packets based on tables populated by the control plane. The data plane reports topology information to the control plane and maintains packet-level statistics. The data plane is implemented by NSX transport nodes.
- Data Processing Unit (DPU)
-
A DPU is a SmartNIC device: a high-performance network interface card with embedded CPU cores, memory, and a hypervisor that runs on the device independently of the ESXi hypervisor installed on the server.
Note: SmartNICs are referred to as DPUs throughout the user guides.
- External Network
- A physical network or VLAN not managed by NSX. You can link your logical network or overlay network to an external network through a Tier-0 Gateway, a Tier-1 Gateway, or an L2 bridge.
- External Interface
- Tier-0 Gateway interface connecting to the physical infrastructure or physical router. Static routing and BGP are supported on this interface. This interface was referred to as an uplink interface in previous releases.
- Logical Port Egress
- Outbound network traffic leaving the VM or logical network is called egress because the traffic leaves the virtual network and enters the data center.
- Logical Port Ingress
- Inbound network traffic entering the VM is called ingress traffic.
- Gateway
- NSX routing entity that provides connectivity between different L2 networks. Configuring a gateway through NSX Manager instantiates a gateway (Tier-0 or Tier-1) on transport nodes and provides optimized distributed routing on each hypervisor, as well as centralized routing and services such as NAT, load balancing, DHCP, and other supported services.
- Gateway Port
- Logical network port to which you can attach a logical switch port or an uplink port to a physical network.
- Segment Port
- Logical switch attachment point that establishes a connection to a virtual machine network interface, a container, a physical appliance, or a gateway interface. The segment port reports the applied switching profile, port state, and link status.
- Management Plane
- Provides a single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all of the management, control, and data plane nodes in the system.
- NSX Edge Cluster
- A collection of NSX Edge node appliances that have the same settings and provide high availability if one of the NSX Edge nodes fails.
- NSX Edge Node
- Edge nodes are service appliances (Bare Metal or VM form factor) with pools of capacity, dedicated to running network and security services that cannot be distributed to the hypervisors.
- NSX Managed Virtual Distributed Switch (N-VDS, host-switch)
-
The NSX managed virtual distributed switch forwards traffic between the logical and physical ports of the device. On ESXi hosts, the N-VDS implementation is derived from the VMware vSphere® Distributed Switch™ (VDS) and it shows up as an opaque network in vCenter. With any other kind of transport node (KVM hypervisors, Edges, Bare Metal servers, cloud VMs, and so on), the N-VDS implementation is derived from Open vSwitch (OVS).
Note: VMware has removed support for the NSX N-VDS virtual switch on ESXi hosts starting with release 4.0.0.1, because it is recommended to deploy NSX on top of the vCenter VDS. N-VDS remains the supported virtual switch on NSX Edge nodes, native public cloud NSX agents, and Bare Metal workloads.
- vSphere Distributed Switch (VDS)
-
Starting with NSX 3.0, NSX can run directly on top of a vSphere Distributed Switch version 7 or later. It is recommended that you use the VDS switch for deployment of NSX on ESXi hosts. Similar to the N-VDS, you can create overlay or VLAN-backed segments on VDS switches, and VDS switches can be configured in Standard or Enhanced Datapath mode.
- vSphere Distributed Services Engine
- vSphere 8.0 introduces VMware vSphere Distributed Services Engine, which leverages data processing units (DPUs) as a new hardware technology to overcome the limits of core CPU performance while delivering zero-trust security and simplified operations to vSphere environments. With NSX 4.0.1.1, vSphere Distributed Services Engine provides the ability to offload some of the network operations from your server CPU to a DPU.
- NSX Manager
- Node that hosts the API services, the management plane, the control plane, and the agent services. It is accessible through the CLI, the Web UI, or the API. NSX Manager is an appliance included in the NSX installation package. You can deploy the appliance in the role of NSX Manager or nsx-cloud-service-manager. Currently, the appliance supports only one role at a time.
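For example, the declarative Policy API exposed by the management plane can be reached directly on an NSX Manager node. The following is a minimal sketch, assuming Python with the requests library, a manager reachable at nsx-mgr.example.com, and basic-authentication credentials (all example values):
```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # example address
AUTH = ("admin", "VMware1!VMware1!")          # example credentials

# Read the root object of the declarative Policy API as a simple connectivity check.
# verify=False disables certificate validation and is suitable for lab use only.
resp = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
print(resp.json()["resource_type"])  # prints "Infra" on success
```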
- NSX Manager Cluster
-
A cluster of NSX Manager virtual machine appliances providing high availability of the user interface and the API.
- Open vSwitch (OVS)
- Open source software switch that acts as a virtual switch within XenServer, Xen, and other Linux-based hypervisors.
- Opaque Network
-
An opaque network is a network created and managed by a separate entity outside of vSphere. For example, logical networks that are created and managed by N-VDS switch running on NSX appear in vCenter Server as opaque networks of the type nsx.LogicalSwitch. You can choose an opaque network as the backing for a VM network adapter. To manage an opaque network, use the management tools associated with the opaque network, such as NSX Manager or the NSX API management tools.
- Overlay Logical Network
- Logical network implemented using the GENEVE encapsulation protocol, as defined in RFC 8926 (https://www.rfc-editor.org/rfc/rfc8926.txt). The topology seen by VMs is decoupled from that of the physical network.
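RFC 8926 defines a fixed 8-byte GENEVE header that carries a 24-bit Virtual Network Identifier (VNI) followed by optional metadata. The sketch below is an illustration of that header layout only; the VNI value and the use of Transparent Ethernet Bridging (0x6558) as the payload type are example choices, not NSX configuration:
```python
import struct

def geneve_base_header(vni: int, protocol_type: int = 0x6558) -> bytes:
    """Pack the fixed 8-byte GENEVE header from RFC 8926, with no options.

    Byte 0:   Ver (2 bits) = 0 and Opt Len (6 bits) = 0
    Byte 1:   O and C flags plus reserved bits, all 0 here
    Bytes 2-3: Protocol Type (0x6558 = Transparent Ethernet Bridging)
    Bytes 4-6: 24-bit Virtual Network Identifier (VNI)
    Byte 7:   reserved
    """
    vni_and_reserved = (vni & 0xFFFFFF) << 8
    return struct.pack("!BBHI", 0x00, 0x00, protocol_type, vni_and_reserved)

header = geneve_base_header(vni=67584)
print(len(header), header.hex())  # 8 bytes: "0000655801080000"
```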
- Physical Interface (pNIC)
- Network interface on a physical server that a hypervisor is installed on.
- Segment
-
Previously known as logical switch. It is an entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A segment gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing them to connect a set of VMs to a common broadcast domain. A segment is a logical entity independent of the physical hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of their physical location.
In a multi-tenant cloud, many segments might exist side-by-side on the same hypervisor hardware, with each Layer 2 segment isolated from the others. Segments can be connected using gateways, which can provide connectivity to the external physical network.
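As an illustration of how a segment is defined declaratively, the following sketch uses Python with the requests library against the NSX Policy API; the manager address, credentials, segment name, transport zone path, and subnet are all example values:
```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # example address
AUTH = ("admin", "VMware1!VMware1!")          # example credentials

segment = {
    "display_name": "web-segment",
    # Example transport zone path; use the overlay transport zone path from your environment.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/overlay-tz",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

# PATCH in the Policy API creates the segment if it does not exist, or updates it if it does.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=segment,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
```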
- Service Interface
- Tier-0 Gateway interface connecting VLAN segments to provide connectivity and services to VLAN-backed physical or virtual workloads. A service interface can also be connected to overlay segments for Tier-1 standalone load balancer use cases. Starting with NSX 3.0, the service interface supports static and dynamic routing.
- Tier-0 Gateway
-
A Tier-0 Gateway provides north-south connectivity and connects to the physical routers. It can be configured as an active-active or active-standby cluster. The Tier-0 Gateway runs BGP and peers with physical routers.
A Tier-0 Gateway consists of two components:
- A distributed routing (DR) component that runs on all transport nodes. The DR of the Tier-0 gateway is instantiated on the hypervisor and Edge transport nodes when the gateway is created.
-
A centralized services routing (SR) component that runs on Edge cluster nodes. The SR is instantiated on the Edge nodes when the gateway is associated with an Edge cluster and external interfaces are created.
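A minimal sketch of declaring a Tier-0 Gateway through the Policy API, assuming Python with the requests library; the gateway name, HA mode choice, manager address, and credentials are example values:
```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # example address
AUTH = ("admin", "VMware1!VMware1!")          # example credentials

# Declare a Tier-0 gateway in active-active HA mode; the SR component is created
# later, when the gateway is associated with an Edge cluster and external
# interfaces are configured.
tier0 = {
    "display_name": "t0-gw",
    "ha_mode": "ACTIVE_ACTIVE",
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/t0-gw",
    json=tier0,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
```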
- Tier-1 Gateway
- A Tier-1 Gateway connects to one Tier-0 Gateway for northbound connectivity of the subnetworks attached to it (multi-tier routing model). It connects to one or more overlay networks for southbound connectivity to its subnetworks. A Tier-1 Gateway can be configured as an active-standby cluster. As with a Tier-0 gateway, when a Tier-1 gateway is created, a distributed component (DR) of the Tier-1 gateway is instantiated on the hypervisor and Edge transport nodes, but the service component (SR) is created only when the gateway is associated with an Edge cluster and external interfaces are created.
- Transport Zone
- Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors. These hypervisors must be registered with the NSX management plane and have NSX modules installed. For a hypervisor host or NSX Edge to be part of the NSX overlay, it must be added to an NSX transport zone.
- Transport Node
- A fabric node is prepared as a transport node so that it can participate in NSX overlay or NSX VLAN networking. For an ESXi host, you must configure a VDS switch.
- Uplink Profile (host-switch-profile)
- Defines policies for the links from transport nodes to NSX segments or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles can include teaming policies, the transport VLAN ID, and the MTU setting. The transport VLAN set in the uplink profile tags overlay traffic only, and the VLAN ID is used by the TEP.
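The sketch below shows what an uplink profile definition might look like when created through the NSX Manager API with Python and the requests library; the uplink names, teaming policy, transport VLAN, MTU, manager address, and credentials are example values, and the payload structure should be checked against the API documentation for your release:
```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # example address
AUTH = ("admin", "VMware1!VMware1!")          # example credentials

# Example uplink profile: failover-order teaming over two uplinks, transport VLAN 120
# for TEP traffic, and a 1700-byte MTU to leave headroom for GENEVE encapsulation.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 120,
    "mtu": 1700,
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
```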
- VM Interface (vNIC)
- Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or NSX segment. The vNIC can be attached to a logical port. You can identify a vNIC based on its Unique ID (UUID).
- Tunnel Endpoint (TEP)
- Each transport node has a Tunnel Endpoint (TEP) responsible for encapsulating overlay VM traffic inside a GENEVE header and routing the packet to a destination TEP for further processing. TEPs are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and terminate the NSX encapsulation of overlay frames. Traffic can be routed to another TEP on a different host or to the NSX Edge gateway to access the physical network. TEPs create a GENEVE tunnel between the source and destination endpoints.
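Because the TEP wraps each inner Ethernet frame in GENEVE, UDP, and outer IP headers, the physical underlay must carry an MTU larger than the guest MTU. The arithmetic below is a rough sketch; the guest MTU and GENEVE option length are example values:
```python
# Approximate GENEVE encapsulation overhead added by a TEP (IPv4 outer headers).
INNER_ETHERNET = 14    # inner Ethernet header, carried inside the tunnel
GENEVE_BASE = 8        # fixed GENEVE header (RFC 8926)
GENEVE_OPTIONS = 8     # example option length; the actual length varies
OUTER_UDP = 8          # outer UDP header (GENEVE uses destination port 6081)
OUTER_IPV4 = 20        # outer IPv4 header

guest_mtu = 1500       # example guest VM MTU

# IP MTU the physical underlay must carry in this example; this overhead is why
# NSX requires an MTU of at least 1600 bytes on overlay transport networks.
required_underlay_mtu = (
    guest_mtu + INNER_ETHERNET + GENEVE_BASE + GENEVE_OPTIONS + OUTER_UDP + OUTER_IPV4
)
print(required_underlay_mtu)  # 1558
```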