NSX Manager is installed as a virtual appliance on any ESXi host in your vCenter environment.

NSX Manager provides the graphical user interface (GUI) and the REST APIs for creating, configuring, and monitoring NSX components, such as controllers, logical switches, and edge services gateways. NSX Manager provides an aggregated system view and is the centralized network management component of NSX Data Center for vSphere. The NSX Manager virtual machine is packaged as an OVA file, which allows you to use the vSphere Web Client to import the NSX Manager into the datastore and virtual machine inventory.
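
For example, after the appliance is deployed and powered on, you can confirm that the REST API is reachable with a single authenticated request. The following is a minimal sketch: the host name and credentials are placeholders, and the exact appliance-management path can vary between NSX versions, so confirm it in the NSX API Guide for your release.

# Query the NSX Manager system summary over the REST API (placeholder host and credentials).
# The -k flag skips certificate validation and is appropriate only in a lab.
curl -k -u 'admin:VMware1!' \
  -H 'Accept: application/xml' \
  https://nsxmgr-01a.corp.local/api/1.0/appliance-management/summary/system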

For high availability, deploy NSX Manager in a cluster configured with HA and DRS. Optionally, you can install NSX Manager in a different vCenter Server environment than the one it interoperates with. A single NSX Manager serves a single vCenter Server environment.

In cross-vCenter NSX installations, make sure that each NSX Manager has a unique UUID. NSX Manager instances deployed from OVA files have unique UUIDs. An NSX Manager deployed from a template (for example, when you convert a virtual machine to a template) has the same UUID as the original NSX Manager used to create the template. Therefore, install each NSX Manager as a new appliance, as outlined in this procedure.

The NSX Manager virtual machine installation includes VMware Tools. Do not attempt to upgrade or install VMware Tools on the NSX Manager.

During the installation, you can choose to join the Customer Experience Improvement Program (CEIP) for NSX Data Center for vSphere. See Customer Experience Improvement Program in the NSX Administration Guide for more information about the program, including how to join or leave the program.

Prerequisites

  • Download the appropriate version of the OVA file from https://www.vmware.com/go/download-nsx.
  • Before installing NSX Manager, make sure that the required ports are open. See Ports and Protocols Required by NSX Data Center for vSphere.
  • Make sure that a datastore is configured and accessible on the target ESXi host. Shared storage is recommended. HA requires shared storage, so that the NSX Manager appliance can be restarted on another host if the original host fails.
  • Make sure that you know the IP address and gateway, DNS server IP addresses, domain search list, and the NTP server IP address that the NSX Manager will use.
  • Decide whether NSX Manager will have IPv4 addressing only, IPv6 addressing only, or a dual-stack network configuration. The host name of the NSX Manager is used by other entities, so the host name must be mapped to the correct IP address in the DNS servers used on that network (a DNS and NTP verification sketch follows this list).
  • Prepare a management traffic distributed port group on which NSX Manager will communicate. See Example: Working with a vSphere Distributed Switch. The NSX Manager management interface, vCenter Server, and ESXi host management interfaces must be reachable by Guest Introspection instances.
  • The Client Integration Plug-in must be installed. The Deploy OVF Template wizard works best in the Firefox web browser. In the Chrome web browser, an error message about installing the Client Integration Plug-in is sometimes displayed even though the plug-in is already installed. To install the Client Integration Plug-in:
    1. Open a Web browser and enter the URL for the vSphere Web Client.

    2. At the bottom of the vSphere Web Client login page, click Download Client Integration Plug-in.

      If the Client Integration Plug-in is already installed on your system, the link to download the plug-in is not displayed. If you uninstall the Client Integration Plug-in, the download link is displayed again on the vSphere Web Client login page.
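
Because other components reach NSX Manager by its host name, it is worth verifying DNS and NTP before you deploy. The following is a minimal sketch, run from a machine on the management network (Linux or macOS syntax); the host name, IP address, and NTP server are placeholders for the values that you gathered for your environment.

# Forward lookup: the planned NSX Manager host name must resolve to the management IP address.
nslookup nsxmgr-01a.corp.local
# Reverse lookup: the management IP address must resolve back to the same host name.
nslookup 192.168.110.42
# Confirm that the planned NTP server is reachable from the management network.
ping -c 3 ntp.corp.local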

Procedure

  1. Locate the NSX Manager Open Virtualization Appliance (OVA) file.

    Either copy the download URL or download the OVA file onto your computer.

  2. Log in to the vSphere Web Client, and navigate to VMs and Templates.
  3. Right-click any inventory object that is a valid parent object of a virtual machine, such as a data center, folder, cluster, resource pool, or host, and select Deploy OVF Template.
    The Deploy OVF Template wizard opens.
  4. On the Select template page, either paste the download URL or click Browse to select the OVA or OVF template on your computer. If you select an OVF template, ensure that you select all of its associated files, such as the .ovf and .vmdk files. If you do not select all the required files, a warning message is displayed.
    Note: If the installation fails with an Operation timed out error, check whether the storage and network devices have connectivity issues. This error typically indicates a problem with the physical infrastructure, such as loss of connectivity to the storage device or a connectivity issue with a physical NIC or switch.
  5. On the Select a name and folder page, enter a unique name for the NSX Manager virtual appliance, and select a deployment location.
    The default name for the virtual appliance is the same as the name of the selected OVF or OVA template. If you change the default name, choose a name that is unique within each vCenter Server virtual machine folder. The default deployment location for the virtual appliance is the inventory object where you started the wizard.
  6. On the Select a resource page, select a host or a cluster where you want to deploy the NSX Manager virtual appliance.
    For example, you can deploy the NSX Manager virtual appliance in the Management cluster where you usually install all the management and Edge components.
  7. On the Review details page, verify the OVF or OVA template details.
  8. Accept the VMware license agreements.
  9. On the Select storage page, define where and how to store the files for the deployed OVF or OVA template.
    1. Select the disk format for the virtual machine virtual disks.
      Thick Provision Lazy Zeroed: Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out later, on demand, on first write from the virtual machine.

      Thick Provision Eager Zeroed: A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the lazy zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created. It might take much longer to create disks in this format than to create other types of disks.

      Thin Provision: Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk requires, based on the value that you enter for the disk size. However, the thin disk starts small and, at first, uses only as much datastore space as it needs for its initial operations.

      For detailed information about thick provisioning and thin provisioning storage models in vSphere, see the vSphere Storage documentation.
      Tip: For optimal performance, you should reserve memory for the NSX Manager virtual appliance. A memory reservation is a guaranteed lower bound on the amount of physical memory that the host reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures NSX Manager has sufficient memory to run efficiently.
    2. Select a VM Storage Policy.
      This option is available only when storage policies are enabled on the destination resource.
    3. (Optional) Select the Show datastores from Storage DRS clusters check box to choose individual datastores from Storage DRS clusters for the initial placement of this virtual appliance.
    4. Select a datastore to store the deployed OVF or OVA template.
      The configuration file and virtual disk files are stored on the datastore. Select a datastore that is large enough to accommodate this virtual appliance and all associated virtual disk files.
  10. On the Select networks page, select a source network and map it to a destination network.

    For example, you can map the source network to the distributed port group that you prepared for management traffic.

    The Source Network column lists all networks that are defined in the OVF or OVA template.

  11. On the Customize template page, customize the deployment properties of the NSX Manager virtual appliance. Specify the CLI admin user password and the CLI privilege mode password, and enter the network settings, such as the host name, IP address, gateway, DNS servers, and NTP server, that you gathered as part of the prerequisites. (These same properties can also be supplied on the command line; see the ovftool sketch after this procedure.)
  12. On the Ready to complete page, review the page and click Finish.
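
If you prefer an unattended, command-line installation, ovftool can deploy the same OVA with the same customization properties that the wizard collects. The following is an illustrative sketch only: the OVA file name, vCenter Server, data center, cluster, datastore, port group, addresses, and passwords are placeholders, and the OVF property names (vsm_*) can differ between NSX releases, so verify them first by running ovftool against the OVA with no target, which prints the template's properties.

# Deploy the NSX Manager OVA unattended with ovftool (all values are placeholders).
ovftool --acceptAllEulas --allowExtraConfig \
  --name="nsxmgr-01a" \
  --datastore="mgmt-datastore-01" \
  --diskMode=thin \
  --network="Mgmt-DVS-PortGroup" \
  --prop:vsm_hostname="nsxmgr-01a.corp.local" \
  --prop:vsm_ip_0="192.168.110.42" \
  --prop:vsm_netmask_0="255.255.255.0" \
  --prop:vsm_gateway_0="192.168.110.1" \
  --prop:vsm_dns1_0="192.168.110.10" \
  --prop:vsm_ntp_0="ntp.corp.local" \
  --prop:vsm_cli_passwd_0="CLI-password" \
  --prop:vsm_cli_en_passwd_0="Enable-password" \
  --powerOn \
  VMware-NSX-Manager.ova \
  "vi://vcenter-01a.corp.local/Datacenter/host/Mgmt-Cluster"

The vi:// locator identifies the vCenter Server, data center, and cluster where the appliance is placed; ovftool prompts for the vCenter Server credentials when the command runs.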

Results

Open the console of the NSX Manager to track the boot process.

After the NSX Manager is booted, log in to the CLI and run the show interface command to verify that the IP address was applied as expected.

nsxmgr1> show interface
Interface mgmt is up, line protocol is up
  index 3 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>
  HWaddr: 00:50:56:8e:c7:fa
  inet 192.168.110.42/24 broadcast 192.168.110.255
  inet6 fe80::250:56ff:fe8e:c7fa/64
  Full-duplex, 0Mb/s
    input packets 1370858, bytes 389455808, dropped 50, multicast packets 0
    input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0
    output packets 1309779, bytes 2205704550, dropped 0
    output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0
    collisions 0

Make sure that the NSX Manager can ping its default gateway, its NTP server, the vCenter Server, and the IP address of the management interface on all hypervisor hosts that it will manage.
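
For example, from the NSX Manager CLI you can test each of these targets with the built-in ping command. The addresses and names below are placeholders; in order, they represent the default gateway, the NTP server, the vCenter Server, and one ESXi host management interface.

nsxmgr1> ping 192.168.110.1
nsxmgr1> ping ntp.corp.local
nsxmgr1> ping vcenter-01a.corp.local
nsxmgr1> ping 192.168.110.51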

Connect to the NSX Manager appliance GUI by opening a web browser and navigating to the NSX Manager IP address or hostname.

After logging in as admin with the password that you set during installation, click View Summary on the Home page and make sure that the following services are running (a REST API alternative is sketched after this list):
  • vPostgres
  • RabbitMQ
  • NSX Management Services
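
If you prefer to verify these services without the GUI, the appliance-management REST API reports component status. The request below is a sketch: the host name and credentials are placeholders, and the exact endpoint can vary between NSX versions, so confirm it in the NSX API Guide for your release.

# Query component status from the appliance-management API (placeholder host and credentials).
curl -k -u 'admin:VMware1!' \
  -H 'Accept: application/xml' \
  https://nsxmgr-01a.corp.local/api/1.0/appliance-management/summary/components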

What to do next

Register the vCenter Server with the NSX Manager.