This section describes common NSX-T Data Center concepts that are used in the documentation and user interface.

Compute Manager
A compute manager is an application that manages resources such as hosts and VMs. One example is vCenter Server.
Control Plane
Computes the runtime state based on configuration from the management plane. The control plane disseminates topology information reported by the data plane elements and pushes stateless configuration to forwarding engines.
Data Plane
Performs stateless forwarding or transformation of packets based on tables populated by the control plane. The data plane reports topology information to the control plane and maintains packet-level statistics.
External Network
A physical network or VLAN not managed by NSX-T Data Center. You can link your logical network or overlay network to an external network through a Tier-0 Gateway, a Tier-1 Gateway, or an L2 bridge.
Logical Port Egress
Outbound network traffic leaving the VM or logical network is called egress because the traffic leaves the virtual network and enters the data center.
Logical Port Ingress
Inbound network traffic entering the VM is called ingress traffic.
Gateway
NSX-T Data Center routing entity that provides connectivity between different L2 networks. Configuring a gateway through NSX Manager instantiates a gateway on each hypervisor.
Gateway Port
Logical network port to which you can attach a logical switch port or an uplink port to a physical network.
Segment Port
Logical switch attachment point that establishes a connection to a virtual machine network interface, a container, a physical appliance, or a gateway interface. The segment port reports the applied switching profile, port state, and link status.
Management Plane
Provides a single API entry point to the system, handles user queries, and performs operational tasks on all of the management, control, and data plane nodes in the system. The management plane is responsible for querying, modifying, and persisting the user configuration.
NSX Edge Cluster
A collection of NSX Edge node appliances that have the same settings and provide high availability if one of the NSX Edge nodes fails.
NSX Edge Node
Edge nodes are service appliances with pools of capacity, dedicated to running network and security services that cannot be distributed to the hypervisors.
NSX Managed Virtual Distributed Switch or KVM Open vSwitch
The NSX managed virtual distributed switch (N-VDS, previously known as hostswitch) or OVS is used for shared NSX Edge and compute clusters. The N-VDS is required for overlay traffic configuration.

An N-VDS has two modes: standard and enhanced datapath. An enhanced datapath N-VDS has the performance capabilities to support NFV (Network Functions Virtualization) workloads.

vSphere Distributed Switch (VDS)
Starting in vSphere 7.0, NSX-T Data Center supports VDS switches. You can create segments on VDS switches.
NSX Manager
Node that hosts the API services, the management plane, the control plane and the agent services. NSX Manager is an appliance included in the NSX-T Data Center installation package. You can deploy the appliance in the role of NSX Manager or nsx-cloud-service-manager. Currently, the appliance only supports one role at a time.
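
As an illustration of the API entry point that NSX Manager hosts, the following Python sketch queries the policy API for the segments defined in the system. The manager hostname and credentials are placeholders, and error handling is kept minimal.

    # Minimal sketch: list segments through the NSX Manager policy API.
    # The hostname and credentials below are placeholders.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials

    resp = requests.get(f"{NSX_MANAGER}/policy/api/v1/infra/segments",
                        auth=AUTH, verify=False)  # verify=False: lab use only
    resp.raise_for_status()
    for segment in resp.json().get("results", []):
        print(segment["id"], segment.get("display_name"))
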
NSX Manager Cluster
A cluster of NSX Managers that can provide high availability.

Open vSwitch (OVS)
Open source software switch that acts as a virtual switch within XenServer, Xen, KVM, and other Linux-based hypervisors.
Opaque Network
An opaque network is a network created and managed by a separate entity outside of vSphere. For example, logical networks that are created and managed by an N-VDS running on NSX-T Data Center appear in vCenter Server as opaque networks of the type nsx.LogicalSwitch. You can choose an opaque network as the backing for a VM network adapter. To manage an opaque network, use the management tools associated with it, such as NSX Manager or the NSX-T Data Center API.
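
A hedged pyVmomi sketch of how such networks surface on the vSphere side: it lists the opaque networks visible in vCenter Server together with their type (nsx.LogicalSwitch for NSX-backed networks). The vCenter hostname and credentials are placeholders.

    # Sketch: enumerate opaque networks as they appear in vCenter Server.
    # Hostname and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.OpaqueNetwork], True)
    for net in view.view:
        # opaqueNetworkType is nsx.LogicalSwitch for NSX-backed networks
        print(net.name, net.summary.opaqueNetworkType)
    Disconnect(si)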

Overlay Logical Network
Logical network implemented using the GENEVE encapsulation protocol, as described in https://www.rfc-editor.org/rfc/rfc8926.txt. The topology seen by VMs is decoupled from that of the physical network.
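
A small worked example of why overlay transport networks are usually configured with a larger MTU: GENEVE encapsulation wraps the original frame in new outer headers. The byte counts below are the standard header sizes; GENEVE options, when present, add more.

    # Worked example: per-packet overhead added by GENEVE encapsulation.
    inner_ethernet = 14   # encapsulated inner Ethernet header
    geneve_base    = 8    # GENEVE base header (variable-length options add more)
    outer_udp      = 8    # outer UDP header
    outer_ipv4     = 20   # outer IPv4 header

    overhead = inner_ethernet + geneve_base + outer_udp + outer_ipv4
    print(f"Minimum GENEVE overhead: {overhead} bytes")          # 50 bytes

    # A 1500-byte guest MTU therefore needs an underlay MTU of at least
    # 1500 + 50 = 1550 bytes, which is why 1600 or more is commonly used.
    print(f"Required underlay MTU for a 1500-byte guest MTU: {1500 + overhead}")
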
Physical Interface (pNIC)
Network interface on a physical server that a hypervisor is installed on.
Segment
Previously known as logical switch. It is an entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A segment gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing them to connect a set of VMs to a common broadcast domain. A segment is a logical entity independent of the physical hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of their physical location.

In a multi-tenant cloud, many segments might exist side-by-side on the same hypervisor hardware, with each Layer 2 segment isolated from the others. Segments can be connected using gateways, which can provide connectivity to the external physical network.
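
As a sketch of how a segment might be defined declaratively, the following uses the policy API to create or update a segment. The segment ID, transport zone path, and gateway address are illustrative values, and the hostname and credentials are placeholders.

    # Sketch: create (or update) a segment through the policy API.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials

    segment = {
        "display_name": "web-tier",
        # Overlay transport zone that bounds where this segment can span
        # (the path shown is a placeholder).
        "transport_zone_path": ("/infra/sites/default/enforcement-points/"
                                "default/transport-zones/<overlay-tz-id>"),
        # Optional subnet whose gateway address is realized on the segment.
        "subnets": [{"gateway_address": "172.16.10.1/24"}],
    }

    resp = requests.patch(f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-tier",
                          json=segment, auth=AUTH, verify=False)  # lab use only
    resp.raise_for_status()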

Tier-0 Gateway
A Tier-0 Gateway provides north-south connectivity and connects to the physical routers. It can be configured as an active-active or active-standby cluster. The Tier-0 Gateway runs BGP and peers with physical routers.
Tier-1 Gateway
A Tier-1 Gateway connects to one Tier-0 Gateway for northbound connectivity and to one or more overlay networks for southbound connectivity to the subnetworks attached to it. A Tier-1 Gateway can be configured as an active-standby cluster.
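
A hedged sketch of linking the two tiers through the policy API: it creates a Tier-1 Gateway and points its northbound uplink at an existing Tier-0 Gateway. The gateway IDs, the route advertisement choices, and the credentials are illustrative.

    # Sketch: create a Tier-1 Gateway and link it to a Tier-0 Gateway.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials

    tier1 = {
        "display_name": "tenant-a-t1",
        # Northbound link to the Tier-0 Gateway (policy path is a placeholder).
        "tier0_path": "/infra/tier-0s/<tier0-id>",
        # Advertise connected segments and NAT addresses northbound.
        "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT"],
    }

    resp = requests.patch(f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/tenant-a-t1",
                          json=tier1, auth=AUTH, verify=False)  # lab use only
    resp.raise_for_status()
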
Transport Zone
Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors. These hypervisors are registered with the NSX-T Data Center management plane and have NSX-T Data Center modules installed. For a hypervisor host or NSX Edge to be part of the NSX-T Data Center overlay, it must be added to an NSX-T Data Center transport zone.
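
A minimal sketch, assuming the manager API, that lists the transport zones and shows whether each one carries overlay or VLAN traffic. Hostname and credentials are placeholders.

    # Sketch: list transport zones and their traffic type (OVERLAY or VLAN).
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials

    resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-zones",
                        auth=AUTH, verify=False)  # lab use only
    resp.raise_for_status()
    for tz in resp.json().get("results", []):
        print(tz["display_name"], tz["transport_type"])
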
Transport Node
A fabric node is prepared as a transport node so that it can participate in an NSX-T Data Center overlay or in NSX-T Data Center VLAN networking. For a KVM host, you can preconfigure the N-VDS, or you can have NSX Manager perform the configuration. For an ESXi host, you can configure the host with an N-VDS or a VDS switch.
Uplink Profile
Defines policies for the links from transport nodes to NSX-T Data Center segments or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles can include teaming policies, the transport VLAN ID, and the MTU setting. The transport VLAN set in the uplink profile tags overlay traffic only, and the VLAN ID is used by the TEP.
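
The sketch below illustrates how those settings might be combined into one uplink profile payload: a failover-order teaming policy, a transport VLAN for TEP traffic, and an MTU sized for overlay encapsulation. The uplink names, VLAN ID, and MTU value are illustrative, as are the hostname and credentials.

    # Sketch: create an uplink profile that combines teaming, transport VLAN,
    # and MTU settings. All values are illustrative.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials

    uplink_profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "uplink-profile-overlay",
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
        "transport_vlan": 150,   # VLAN ID used to tag overlay (TEP) traffic
        "mtu": 1600,             # sized for GENEVE encapsulation overhead
    }

    resp = requests.post(f"{NSX_MANAGER}/api/v1/host-switch-profiles",
                         json=uplink_profile, auth=AUTH, verify=False)  # lab only
    resp.raise_for_status()
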
VM Interface (vNIC)
Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or NSX-T segment. The vNIC can be attached to a logical port. You can identify a vNIC by its universally unique identifier (UUID).
Tunnel Endpoint
Each transport node has a Tunnel Endpoint (TEP) responsible for encapsulating overlay VM traffic inside a GENEVE header and routing the packet to a destination TEP for further processing. TEPs are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and terminate the NSX-T encapsulation of overlay frames. Traffic can be routed to another TEP on a different host or to the NSX Edge gateway to access the physical network. TEPs create a GENEVE tunnel between the source and destination endpoints.
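
A hedged sketch of checking which GENEVE tunnels a transport node's TEP has established; the /api/v1/transport-nodes/<node-id>/tunnels endpoint and the response fields shown are assumptions about the manager API, and the node ID, hostname, and credentials are placeholders.

    # Sketch: inspect the tunnels established by a transport node's TEP.
    # Endpoint path and response fields are assumptions; IDs are placeholders.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical manager FQDN
    AUTH = ("admin", "password")                  # placeholder credentials
    NODE_ID = "<transport-node-uuid>"             # placeholder transport node ID

    resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes/{NODE_ID}/tunnels",
                        auth=AUTH, verify=False)  # lab use only
    resp.raise_for_status()
    for tunnel in resp.json().get("tunnels", []):
        # Each entry describes one tunnel between a local and a remote TEP.
        print(tunnel.get("local_ip"), "->", tunnel.get("remote_ip"),
              tunnel.get("status"))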