A transport node profile is a template that defines the configuration NSX-T Data Center applies to all hosts in a cluster; it is not used to prepare standalone hosts. You prepare the hosts of a vCenter Server cluster as transport nodes by applying a transport node profile to the cluster. A transport node profile defines the transport zones, member hosts, and N-VDS switch configuration, including the uplink profile, IP assignment, mapping of physical NICs to uplink virtual interfaces, and so on.

Note: Transport node profiles apply only to hosts. They cannot be applied to NSX Edge transport nodes.

Transport node creation begins when a transport node profile is applied to a vCenter Server cluster. NSX Manager prepares the hosts in the cluster and installs the NSX-T Data Center components on all the hosts. Transport nodes for the hosts are created based on the configuration specified in the transport node profile.

On a cluster prepared with a transport node profile, the following behavior applies:

  • When you move an unprepared host into a cluster applied with a transport node profile, NSX-T Data Center automatically prepares the host as a transport node using the transport node profile.
  • When you move a transport node out of the cluster, either to an unprepared cluster or directly under the data center as a standalone host, the transport node configuration is first removed from the node and then the NSX-T Data Center VIBs are removed from the host. See Triggering Uninstallation from the vSphere Web Client.
To delete a transport node profile, you must first detach the profile from the associated cluster. Detaching does not affect the existing transport nodes, but new hosts added to the cluster are no longer automatically converted into transport nodes.
Points to note when you create a Transport Node Profile:
  • You can add a maximum of four N-VDS or VDS switches for each of the following configuration types: enhanced N-VDS or VDS created for a VLAN transport zone, standard N-VDS or VDS created for an overlay transport zone, and enhanced N-VDS or VDS created for an overlay transport zone.
  • There is no limit on the number of standard N-VDS switches created for a VLAN transport zone.
  • In a single-host cluster topology running multiple standard overlay N-VDS switches and an Edge VM on the same host, NSX-T Data Center isolates the traffic of each N-VDS from the others. To allow north-south connectivity with the external world, the physical NICs on each N-VDS must be mapped to the Edge VM on the host. Packets moving from a VM in the first transport zone to a VM in the second transport zone must be routed through an external router or an external VM.
  • Each N-VDS switch name must be unique. NSX-T Data Center does not allow use of duplicate switch names.
  • Each transport zone ID associated with an N-VDS or VDS switch in a transport node configuration or transport node profile configuration must be unique.
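Taken together, the uniqueness and switch-count rules above can be expressed as a small pre-flight check. This is an illustrative sketch only, not NSX-T code; the dict keys ("name", "mode", "traffic_type", "transport_zone_ids") are our own naming, not the NSX-T API schema.

```python
from collections import Counter

# Rule from the documentation: at most four switches per configuration
# type, except standard switches for a VLAN transport zone (unlimited).
MAX_SWITCHES_PER_CONFIG = 4

def validate_profile_switches(switches):
    """Return a list of rule violations for a planned set of host switches."""
    errors = []

    # Each N-VDS or VDS switch name must be unique.
    for name, count in Counter(s["name"] for s in switches).items():
        if count > 1:
            errors.append(f"duplicate switch name: {name}")

    # Each transport zone ID must be associated with only one switch.
    tz_ids = [tz for s in switches for tz in s["transport_zone_ids"]]
    for tz, count in Counter(tz_ids).items():
        if count > 1:
            errors.append(f"transport zone {tz} is attached to more than one switch")

    # Per-configuration switch count limit.
    configs = Counter((s["mode"], s["traffic_type"]) for s in switches)
    for (mode, traffic), count in configs.items():
        if (mode, traffic) != ("standard", "vlan") and count > MAX_SWITCHES_PER_CONFIG:
            errors.append(f"more than {MAX_SWITCHES_PER_CONFIG} {mode} switches for {traffic}")

    return errors
```

A profile that violates any of these rules is rejected by NSX Manager, so checking a planned configuration up front avoids a failed apply.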

Prerequisites

Procedure

  1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
  2. Select System > Fabric > Profiles > Transport Node Profiles > Add.
  3. Enter a name to identify the transport node profile.

    You can optionally add a description of the transport node profile.

  4. In the Add Transport Node Profile panel, expand New Node Switch.
  5. In the Type field, select between N-VDS and VDS as the host switch type to prepare the transport node.
  6. In the Mode field, depending upon the workload requirements, select the appropriate mode:
    • Standard is the default mode that applies to all supported hosts. It is used for regular workloads.
    • Enhanced Datapath is a networking stack mode that applies only to ESXi host transport nodes on version 6.7 and later. It is used for telecom workloads that require higher throughput and performance.
  7. Select the available transport zones and click the > button to include the transport zones in the transport node profile.
    Note: You can add multiple transport zones.
  8. If you selected N-VDS as the host switch type, enter the switch details. If you selected VDS, skip to the next step.
    Option Description
    Name

    Enter a name for the N-VDS switch.

    Transport Zones

    Shows the transport zones that are realized by the associated host switches. You cannot add a transport zone if it is not realized by any N-VDS in the transport node profile.

    NIOC Profile

    Select the NIOC profile from the drop-down menu. The bandwidth allocations specified in the profile for the traffic resources are enforced.

    Uplink Profile

    Select an existing uplink profile from the drop-down menu or create a custom uplink profile. You can also use the default uplink profile.

    LLDP Profile

    By default, NSX-T only receives LLDP packets from an LLDP neighbor. However, NSX-T can be set to both send LLDP packets to and receive LLDP packets from an LLDP neighbor.

    IP Assignment

    Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to the virtual tunnel endpoints (VTEPs) of the transport node.

    If you select Use Static IP List, you must specify a list of comma-separated IP addresses, a gateway, and a subnet mask. All the VTEPs of the transport node must be in the same subnet; otherwise, the Bidirectional Forwarding Detection (BFD) session is not established.

    IP Pool

    If you selected Use IP Pool for IP assignment, specify the IP pool name.
    Physical NICs

    Add physical NICs to the transport node. You can use the default uplink or assign an existing uplink from the drop-down menu.

    Click Add PNIC to configure additional physical NICs to the transport node.

    Note: Migration of the physical NICs that you add in this field depends on how you configure PNIC only Migration, Network Mappings for Install, and Network Mappings for Uninstall.
    • To migrate a used physical NIC (for example, by a vSphere Standard Switch or a vSphere Distributed Switch) without an associated VMkernel mapping, ensure that PNIC only Migration is enabled. Otherwise, the transport node state remains in partial success, and the fabric node LCP connectivity fails to establish.
    • To migrate a used physical NIC with an associated VMkernel network mapping, disable PNIC only Migration and configure the VMkernel network mapping.

    • To migrate a free physical NIC, enable PNIC only Migration.

    PNIC only Migration

    Before setting this field, consider the following points:

    • Know whether the physical NIC defined is a used NIC or a free NIC.
    • Determine whether VMkernel interfaces of a host need to be migrated along with physical NICs.

    Set the field:

    • Enable PNIC only Migration if you only want to migrate physical NICs from a VSS or DVS switch to an N-VDS switch.

    • Disable PNIC only Migration if you want to migrate a used physical NIC and its associated VMkernel interface mapping. A free or available physical NIC is attached to the N-VDS switch when a VMkernel interface migration mapping is specified.

    On a host with multiple host switches:
    • If all host switches are to migrate only PNICs, then you can migrate the PNICs in a single operation.
    • If some host switches are to migrate VMkernel interfaces and the remaining host switches are to migrate only PNICs:
      1. In the first operation, migrate only PNICs.
      2. In the second operation, migrate VMkernel interfaces. Ensure that PNIC only Migration is disabled.

    Both PNIC only migration and VMkernel interface migration are not supported at the same time across multiple hosts.

    Note: To migrate a management network NIC, configure its associated VMkernel network mapping and keep PNIC only Migration disabled. If you only migrate the management NIC, the host loses connectivity.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Install

    To migrate VMkernels to an N-VDS switch during installation, map the VMkernels to an existing logical switch. NSX Manager migrates each VMkernel to the mapped logical switch on the N-VDS.

    Caution: Ensure that the management NIC and management VMkernel interface are migrated to a logical switch that is connected to the same VLAN that the management NIC was connected to before migration. If vmnic <n> and VMkernel <n> are migrated to a different VLAN, then connectivity to the host is lost.
    Caution: For pinned physical NICs, ensure that the host switch mapping of physical NIC to VMkernel interface matches the configuration specified in the transport node profile. As part of the validation procedure, NSX-T Data Center verifies the mapping; if the validation passes, migration of the VMkernel interfaces to the N-VDS switch succeeds. It is also mandatory to configure the network mapping for uninstallation, because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.

    Network Mappings for Uninstall

    To revert the migration of VMkernels attached to an N-VDS switch during uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX Manager knows which port group the VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    To revert the migration of VMkernels attached to an NSX-T port group created on a vSphere Distributed Switch (VDS) during uninstallation, map the VMkernels to port groups on the VSS or DVS, so that NSX Manager knows which port group each VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure that the port group is of the type Ephemeral.

    Caution: For pinned physical NICs, ensure that the transport node profile mapping of physical NIC to VMkernel interface matches the configuration specified in the host switch. It is mandatory to configure the network mapping for uninstallation because NSX-T Data Center does not store the mapping configuration of the host switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping is not configured, connectivity to services, such as vSAN, can be lost after migrating back to the VSS or VDS switch.

    For more information, see VMkernel Migration to an N-VDS Switch.
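The static IP list rule described under IP Assignment above (all VTEPs of a transport node must share one subnet, otherwise the BFD session is not established) can be pre-checked before entering the values. This is an illustrative sketch; the function name and parameters are ours, not part of NSX-T.

```python
import ipaddress

def vteps_share_subnet(ip_list, gateway, netmask):
    """True if every comma-separated VTEP IP lies in the gateway's subnet.

    ip_list mirrors the comma-separated Static IP List field; gateway and
    netmask mirror the other two fields of the same form.
    """
    # strict=False lets us pass the gateway host address instead of the
    # network address when deriving the subnet.
    network = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    return all(
        ipaddress.ip_address(ip.strip()) in network
        for ip in ip_list.split(",")
    )
```

Running the check on the planned list before applying the profile avoids a transport node stuck without BFD sessions.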

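The PNIC only Migration rules above reduce to a simple decision based on whether the NIC is in use and whether a VMkernel mapping is configured. The sketch below is for illustration only; the function and return values are our own naming.

```python
def pnic_only_migration_setting(nic_in_use, has_vmkernel_mapping):
    """Return the PNIC only Migration setting implied by the rules above."""
    if nic_in_use and has_vmkernel_mapping:
        # Used NIC with a VMkernel mapping: migrate NIC and interface together.
        return "disabled"
    # Used NIC without a VMkernel mapping, or a free NIC: PNIC-only migration.
    return "enabled"
```

Note that for the management NIC the mapping case always applies: keep PNIC only Migration disabled and configure its VMkernel network mapping, or the host loses connectivity.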
  9. If you selected VDS as the host switch type, enter the switch details.
    Option Description
    Name

    (Hosts managed by a vSphere cluster) Select the vCenter Server that manages the host switch.

    Select the VDS that is created in vCenter Server.

    Transport Zones

    Shows the transport zones that are realized by the associated host switches. You cannot add a transport zone if it is not realized by any host switch.

    Uplink Profile

    Select an existing uplink profile from the drop-down menu or create a custom uplink profile. You can also use the default uplink profile.

    Note: Ensure that the MTU value entered in the NSX-T Data Center uplink profile and on the VDS switch is at least 1600. If the MTU value in vCenter Server for the VDS switch is lower than the MTU value entered in the uplink profile, NSX-T Data Center displays an error asking you to enter an appropriate MTU value in vCenter Server.

    Note: Link Aggregation Groups defined in an uplink profile cannot be mapped to VDS uplinks.
    IP Assignment

    Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to the virtual tunnel endpoints (VTEPs) of the transport node.

    If you select Use Static IP List, you must specify a list of comma-separated IP addresses, a gateway, and a subnet mask. All the VTEPs of the transport node must be in the same subnet; otherwise, the Bidirectional Forwarding Detection (BFD) session is not established.

    IP Pool

    If you selected Use IP Pool for IP assignment, specify the IP pool name.
    Teaming Policy Switch Mapping

    Map the uplinks defined in the NSX-T uplink profile to the VDS switch uplinks. Alternatively, NSX-T uplinks can also be mapped to LAGs configured on the VDS switch.

    To configure or view the VDS switch uplinks, go to the vSphere Distributed Switch in vCenter Server and click Actions > Settings > Edit Settings.

    Note: For a VDS switch, uplinks/LAGs, the NIOC profile, and the LLDP profile can be defined only on the vSphere ESXi host. These configurations are not available in NSX Manager. In addition, in NSX Manager you cannot configure network mappings for install and uninstall if the host switch is a VDS switch. To manage VMkernel adapters on a VDS switch, go to vCenter Server and attach the VMkernel adapters to distributed virtual port groups or NSX port groups.
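The MTU note above (uplink profile MTU versus the VDS MTU set in vCenter Server) can be sketched as a pre-check. This is illustrative only; the function name and the error strings are ours, not NSX-T output.

```python
# Minimum MTU required for overlay (GENEVE-encapsulated) traffic,
# per the note in this procedure.
MIN_OVERLAY_MTU = 1600

def check_mtu(uplink_profile_mtu, vds_mtu):
    """Return None if the MTU settings are consistent, else an error string."""
    if uplink_profile_mtu < MIN_OVERLAY_MTU or vds_mtu < MIN_OVERLAY_MTU:
        return f"MTU must be at least {MIN_OVERLAY_MTU}"
    if vds_mtu < uplink_profile_mtu:
        # Mirrors the NSX-T error case: the VDS MTU in vCenter Server is
        # lower than the uplink profile MTU.
        return "VDS MTU in vCenter Server is lower than the uplink profile MTU"
    return None
```

Raising the VDS MTU in vCenter Server (Actions > Settings > Edit Settings) clears the second error case.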
  10. If you have selected multiple transport zones, click ADD SWITCH again to configure the switch for the other transport zones.
  11. Click Finish to complete the configuration.

What to do next

Apply the transport node profile to an existing vSphere cluster. See Configure a Managed Host Transport Node.
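Transport node profiles can also be created programmatically through the NSX-T management API (a POST to /api/v1/transport-node-profiles in recent releases). The body below is a hedged sketch of a minimal DHCP-based profile; verify the endpoint path and exact schema against the NSX-T Data Center API guide for your release, and note that the transport zone ID is a placeholder.

```python
import json

def build_tnp_body(name, switch_name, transport_zone_ids):
    """Assemble a minimal transport node profile request body (sketch)."""
    return {
        "resource_type": "TransportNodeProfile",
        "display_name": name,
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_name": switch_name,
                "host_switch_type": "NVDS",
                # One endpoint per transport zone; IDs come from your deployment.
                "transport_zone_endpoints": [
                    {"transport_zone_id": tz} for tz in transport_zone_ids
                ],
                # Simplest IP assignment choice; IP pool and static list
                # variants use different spec resource types.
                "ip_assignment_spec": {"resource_type": "AssignedByDhcp"},
            }],
        },
    }

body = build_tnp_body("tnp-overlay", "nvds-overlay", ["tz-overlay-uuid"])
print(json.dumps(body, indent=2))
```

POSTing such a body creates the profile; attaching it to a cluster is a separate step, equivalent to the UI attach described in Configure a Managed Host Transport Node.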