To prepare hosts to participate in NSX-T Data Center, you can manually install NSX-T Data Center kernel modules on RHEL or CentOS Linux.

This allows you to build the NSX-T Data Center control-plane and management-plane fabric. NSX-T Data Center kernel modules packaged in RPM files run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and bridging capabilities.

You can download the NSX-T Data Center RPMs manually and make them part of the host image. Be aware that download paths can change for each release of NSX-T Data Center. Always check the NSX-T Data Center downloads page to get the appropriate RPMs.


Prerequisites

Ability to reach a RHEL or CentOS Linux repository.

Procedure

  1. Log in to the host as an administrator.
  2. Download and copy the nsx-lcp file into the /tmp directory.
  3. Untar the package.
    tar -zxvf nsx-lcp-<release>-rhel7.4_x86_64.tar.gz
  4. Navigate to the package directory.
    cd nsx-lcp-rhel74_x86_64/
  5. Install the packages.
    sudo yum install *.rpm

    When you run the yum install command, any NSX-T Data Center dependencies are resolved, assuming the RHEL or CentOS Linux host can reach its package repositories.

  6. Reload the OVS kernel module.
    /usr/share/openvswitch/scripts/ovs-systemd-reload force-reload-kmod

    If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is configured: manually stop the old dhclient process on that interface, then start a new dhclient process on it.
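    The dhclient restart described above can be sketched as a small shell function. This is an illustrative sketch, not an official NSX-T utility; the interface name eth1 in the usage example is an assumption, so substitute the interface on which DHCP is configured.

```shell
# Sketch: restart dhclient on one interface after the OVS kernel
# module reload, so the interface re-acquires its DHCP lease.
restart_dhcp() {
  iface="$1"
  # Stop any old dhclient process bound to this interface.
  # pkill -f matches against the full dhclient command line.
  pkill -f "dhclient.*${iface}" || true
  # Start a new dhclient process on the interface.
  dhclient "${iface}"
}

# Usage (as root on the hypervisor); eth1 is a hypothetical name:
#   restart_dhcp eth1
```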

  7. To verify, run the rpm -qa | egrep 'nsx|openvswitch' command.
    The installed packages in the output must match the packages in the nsx-rhel74, nsx-centos74, or nsx directory.
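Taken together, steps 3 through 7 can be sketched as one shell function. This is a hedged sketch, not an official installer: the archive name and directory are taken from the examples in steps 3 and 4, and the release string is a placeholder that must match your actual download.

```shell
# Sketch of the manual NSX-T LCP install flow on a RHEL/CentOS 7.4
# host. Run as root; the release argument is a placeholder for the
# version string of the nsx-lcp archive you downloaded.
install_nsx_lcp() {
  release="$1"
  cd /tmp || return 1
  tar -zxvf "nsx-lcp-${release}-rhel7.4_x86_64.tar.gz"   # step 3: untar
  cd nsx-lcp-rhel74_x86_64/ || return 1                  # step 4: enter dir
  yum install -y ./*.rpm                                 # step 5: install
  # step 6: reload the OVS kernel module
  /usr/share/openvswitch/scripts/ovs-systemd-reload force-reload-kmod
  # (If DHCP runs on OVS interfaces, restart dhclient here; see step 6.)
  rpm -qa | egrep 'nsx|openvswitch'                      # step 7: verify
}

# Usage (on the hypervisor, substituting your release string):
#   install_nsx_lcp "<release>"
```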

What to do next

Add the host to the NSX-T Data Center management plane. See Form an NSX Manager Cluster Using the CLI.