You can configure NSX on individual ESXi hosts.

Prerequisites

  • Verify that the individual host you want to prepare is powered on.
  • Verify that the system requirements are met. See System Requirements.
  • The reverse proxy service on all nodes of the NSX Manager cluster must be up and running.

    To verify, run get service http. If the service is down, restart the service by running restart service http on each NSX Manager node. If the service is still down, contact VMware support.

  • If you deployed VMware vCenter on a custom (non-default) port, apply the following rules on NSX Manager manually before starting the host preparation (a worked example follows this list):
    • IPv4 rules:
      • iptables -A INPUT -p tcp -m tcp --dport <CUSTOM_PORT> --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
      • iptables -A OUTPUT -p tcp -m tcp --dport <CUSTOM_PORT> --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
    • IPv6 rules:
      • ip6tables -A OUTPUT -o eth0 -p tcp -m tcp --dport <CUSTOM_PORT> --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
      • ip6tables -A INPUT -p tcp -m tcp --dport <CUSTOM_PORT> --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
  • (Host in lockdown mode) If your exception list for vSphere lockdown mode includes expired user accounts such as lldpvim-user, NSX installation on vSphere fails. This user is created automatically on ESXi to query hostd for LLDP neighbor information and is then deleted. Ensure that you delete all expired user accounts before you begin installation. For more information on accounts with access privileges in lockdown mode, see Specifying Accounts with Access Privileges in Lockdown Mode in the vSphere Security Guide.
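    For example, assuming a hypothetical custom vCenter port of 8443 (substitute your actual port), the rules to run on each NSX Manager node before starting host preparation would be:

    # Hypothetical worked example: vCenter deployed on custom port 8443
    iptables -A INPUT -p tcp -m tcp --dport 8443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
    iptables -A OUTPUT -p tcp -m tcp --dport 8443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
    ip6tables -A OUTPUT -o eth0 -p tcp -m tcp --dport 8443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
    ip6tables -A INPUT -p tcp -m tcp --dport 8443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT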

Procedure

  1. Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the fabric.
    1. Gather the hypervisor thumbprint information.
      Use a Linux shell:
      # echo -n | openssl s_client -connect <esxi-ip-address>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256
      
      Alternatively, use the ESXi CLI on the host:
      [root@host:~] openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
      SHA256 Fingerprint=49:73:F9:A6:0B:EA:51:2A:15:57:90:DE:C0:89:CA:7F:46:8E:30:15:CA:4D:5C:95:28:0A:9E:A2:4E:3C:C4:F4
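      If you intend to add the host through the API rather than the UI, it can help to capture the thumbprint in a shell variable. A minimal sketch, assuming the same openssl invocation as above and a placeholder ESXI_IP value:

      # Sketch: store the SHA-256 thumbprint for later use (placeholder address)
      ESXI_IP=192.0.2.10
      THUMBPRINT=$(echo -n | openssl s_client -connect "${ESXI_IP}:443" 2>/dev/null \
        | openssl x509 -noout -fingerprint -sha256 | cut -d'=' -f2)
      echo "${THUMBPRINT}"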
  2. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address> or https://<nsx-manager-fqdn>.
  3. Select System > Fabric > Hosts.
  4. Select Other Nodes, and then select a host.
  5. On the Host Details page, enter details for the following fields.
    • Name: Enter the name to identify the standalone host.
    • IP Addresses: Enter the host IP address.
    • Description: Optionally, add a description of the operating system used for the host.
    • Tags: Enter a tag that you want to associate with the host. A tag can be used to group all hosts that have a certain OS version, ESXi version, and so on.
  6. Click Next.
  7. On the Prepare Host tab, click Add Host Switch.
  8. From the Select VDS drop-down menu, select a VDS switch.
  9. Configure the following fields:
    • Name: (Hosts managed by a vSphere cluster) Select the VMware vCenter that manages the host switch, and then select the VDS that is created in VMware vCenter and attached to your ESXi hosts.

    • Transport Zones: In the Show section, select Overlay, VLAN, or All to view and select the type of transport zones you want for the host switch. These transport zones are realized by the associated host switches.
      Supported transport zone configurations:
      • You can add multiple VLAN transport zones per host switch.
      • You can add only one overlay transport zone per host switch; the NSX Manager UI does not allow adding multiple overlay transport zones.
    • Uplink Profile: Select an existing uplink profile from the drop-down menu or create a custom uplink profile. You can also use the default uplink profile.
      If you leave the MTU value empty, NSX uses the global default MTU value of 1700. If you enter an MTU value in the NSX uplink profile, that value overrides the global default.
      Note: Link Aggregation Groups defined in an uplink profile cannot be mapped to VDS uplinks.
    • IP Address Type (TEP): Select IPv4 or IPv6 to specify the IP version for the tunnel endpoints (TEPs) of the transport node.
    • IPv4 Assignment: Choose how IPv4 addresses are assigned to the TEPs. The options are:
      • Use DHCP: IPv4 addresses are assigned from a DHCP server.
      • Use IPv4 Pool: IPv4 addresses are assigned from an IP pool. Specify the IPv4 pool to be used for TEPs. (If you still need to create a pool, see the API sketch after this field list.)
    • IPv6 Assignment: Choose how IPv6 addresses are assigned to the TEPs. The options are:
      • Use DHCPv6: IPv6 addresses are assigned from a DHCP server.
      • Use IPv6 Pool: IPv6 addresses are assigned from an IP pool. Specify the IPv6 pool to be used for TEPs.
      • Use AutoConf: IPv6 addresses are assigned from Router Advertisement (RA).

    • Advanced Configuration
      • Mode: Choose between the following mode options:
        • Standard: This mode applies to all transport nodes. The data plane in the transport node automatically selects the host switch mode according to the uplink capabilities.
        • Enhanced Datapath - Standard: This mode is a variant of the Enhanced Data Path mode. It is available only on ESXi 7.0 and later versions. Consult your account representative for applicability.
        • Enhanced Datapath - Performance: This is the Enhanced Data Path switch mode for ESXi host transport nodes. It provides accelerated networking performance and requires nodes to use VMXNET3 vNIC-enabled network cards. This mode is supported only on ESXi 6.7 and later versions (6.7 U2 recommended). It is not supported on NSX Edge nodes and Public Cloud Gateways. Note that not all NSX features are available in this mode.
        • Legacy: This mode was formerly called Standard and is not displayed in the NSX Manager UI. You can select it only through the API, because the Legacy field is read-only in the NSX Manager UI. When you select this mode through the API, the NSX Manager UI displays the Legacy field set to 'Yes' along with the Mode field set to Standard. This mode applies to all transport nodes. When the host switch mode is set to Legacy, the packet handler stack is enabled.
        Note that the Enhanced Datapath modes (Standard and Performance) require a higher tier of NSX license:
        • Enhanced Datapath modes with a classic NIC: NSX Advanced or higher.
        • Enhanced Datapath - Performance mode with a SmartNIC: NSX Enterprise Plus or higher.
      You can run the following Host Transport Node or Transport Node Profile policy API to set the host switch mode to Legacy:
      • Create or update Host Transport Node:
        PUT https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
      • Create or update policy Host Transport Node Profile:
        PUT https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/host-transport-node-profiles/<transport-node-profile-id>

    Unlike earlier versions of NSX, where you had to remove a host node and then add it back with the new mode, you can now change the mode of a host node without removing it from the system.

    The mode changes that NSX supports are:

    • Standard -> Enhanced Datapath - Standard
    • Enhanced Datapath - Standard -> Standard
    • Enhanced Datapath - Performance -> Enhanced Datapath - Standard

    The mode changes that NSX does not support are:

    • Enhanced Datapath - Performance -> Standard
    • Enhanced Datapath - Standard -> Enhanced Datapath - Performance
    • Standard -> Enhanced Datapath - Performance

    You can use the following API to change the mode of the host switch (a curl sketch follows):

    POST https://{{nsxmanager-ip}}/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/<host-transport-node-id>
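    As an illustration only, the mode change might look like the following curl sketch. Base the request body on a prior GET of the transport node; the host_switch_mode attribute and the LEGACY value mentioned in the comments are assumptions to verify against the NSX API guide for your version.

    # Sketch: fetch the current transport node configuration (placeholder IDs and credentials)
    curl -k -u admin "https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/<host-transport-node-id>" -o transport-node.json

    # Edit transport-node.json so that the host switch entry's host_switch_mode
    # (assumed field name) carries the desired mode, for example "LEGACY", then:
    curl -k -u admin -X POST -H "Content-Type: application/json" -d @transport-node.json "https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/<host-transport-node-id>"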

      • CPU Config: You can configure the CPU Config field only when the Mode is set to Enhanced Datapath.
        1. Click Set.
        2. In the CPU Config window, click Add.
        3. Enter values for the NUMA Node Index and LCores per NUMA Node fields.
        4. To save the values, click Add and Save.
    • Teaming Policy Uplink Mapping: Before you map uplinks in NSX with uplinks in VDS, ensure that uplinks are configured on the VDS switch. To configure or view the VDS switch uplinks, go to the vSphere Distributed Switch in VMware vCenter and click Actions → Settings → Edit Settings.
      Map the uplinks defined in the selected NSX uplink profile with the VDS uplinks. The number of NSX uplinks presented for mapping depends on the uplink profile configuration.
      For example, in the uplink-1 (active) row, go to the Physical NICs column, click the edit icon, and enter the name of the VDS uplink to map it with uplink-1 (active). Likewise, complete the mapping for the other uplinks.
      Note: Uplinks/LAGs, the NIOC profile, and the LLDP profile are defined in VMware vCenter; these configurations are not available in NSX Manager. To manage VMkernel adapters on a VDS switch, go to VMware vCenter to attach VMkernel adapters to distributed virtual port groups or NSX port groups.
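    If you chose Use IPv4 Pool or Use IPv6 Pool for TEP assignment and no pool exists yet, one can be created through the policy API. A minimal sketch with placeholder names, addresses, and credentials; verify the payload against the NSX API guide for your version:

    # Sketch: create an IP pool for TEPs (placeholders throughout)
    curl -k -u admin -X PUT -H "Content-Type: application/json" -d '{"display_name": "tep-pool"}' "https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/ip-pools/tep-pool"

    # Add a static subnet with an allocation range for the TEP addresses
    curl -k -u admin -X PUT -H "Content-Type: application/json" -d '{"resource_type": "IpAddressPoolStaticSubnet", "cidr": "192.0.2.0/24", "allocation_ranges": [{"start": "192.0.2.10", "end": "192.0.2.50"}], "gateway_ip": "192.0.2.1"}' "https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/ip-pools/tep-pool/ip-subnets/tep-subnet"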
  10. If you selected multiple transport zones, you can add them to the same switch. To configure a switch for the other transport zones, click Add Switch again.
    An NSX switch can attach to a single overlay transport zone and multiple VLAN transport zones at the same time.

  11. Click Add to complete the configuration.
  12. (Optional) View the ESXi connection status.
    # esxcli network ip connection list | grep 1234
    tcp   0   0  192.168.210.53:20514  192.168.110.34:1234   ESTABLISHED  1000144459  newreno  nsx-proxy
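    You can also confirm on the host that the NSX kernel modules (VIBs) were installed; the exact VIB names vary by NSX version:
    [root@host:~] esxcli software vib list | grep -i nsx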
    
  13. On the Other Nodes tab, verify that the NSX Manager connectivity status of the host is Up and the NSX configuration state is Success. During the configuration process, each transport node displays the percentage of the installation process completed. If installation fails at any stage, you can restart the process by clicking the Resolve link next to the failed stage.
    You can also see that the transport zone is applied to the host.
    Note: If you reconfigure a host that is part of a cluster already prepared by a transport node profile, the configuration state of the node shows Configuration Mismatch.
    Note: The Other Nodes tab displays the TEP address of the host in addition to its IP address. The TEP address is assigned to the VMkernel NIC of the host, whereas the IP address is the management IP address.
  14. (Optional) Remove the NSX VIBs from the host.
    1. Select one or more hosts and click Actions > Remove NSX.
    The uninstallation takes up to three minutes. Uninstalling NSX removes the transport node configuration on the host, and the host is detached from the transport zone(s) and the switch. As with the installation process, you can follow the percentage of the uninstallation completed on each transport node. If uninstallation fails at any stage, you can restart the process by clicking the Resolve link next to the failed stage.
  15. (Optional) Remove a transport node from the transport zone.
    1. Select a single transport node and click Actions > Remove from Transport Zone.

What to do next

When the hosts are transport nodes, you can create transport zones, logical switches, logical routers, and other network components through the NSX Manager UI or API at any time. When NSX Edge nodes and hosts join the management plane, the NSX logical entities and configuration state are pushed to them automatically, and these entities are realized on the hosts.

Create a logical switch and assign logical ports. See the Advanced Switching section in the NSX Administration Guide.
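
As a starting point, a segment (logical switch) can be created through the policy API. A minimal sketch with placeholder names, credentials, and transport zone ID; verify the payload against the NSX API guide for your version:

# Sketch: create a segment attached to a transport zone (placeholders throughout)
curl -k -u admin -X PUT -H "Content-Type: application/json" -d '{"display_name": "ls-demo", "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<transport-zone-id>"}' "https://<NSX-Manager-IP-ADDRESS>/policy/api/v1/infra/segments/ls-demo"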