NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX components such as logical switches, logical routers, and firewalls.

NSX Manager provides a system view and is the management component of NSX.

For high availability, NSX supports a management cluster of three NSX Managers, and deploying a management cluster is recommended for a production environment. Starting with NSX 3.1, a deployment with a single NSX Manager cluster node is also supported.

In a vSphere environment, NSX Manager supports the following vCenter Server functions:
  • vMotion, to live migrate an NSX Manager VM across hosts and clusters.
  • Storage vMotion, to live migrate the file system of an NSX Manager VM across datastores.
  • Distributed Resource Scheduler (DRS), to rebalance NSX Manager VMs across hosts and clusters.
  • Anti-affinity rules, to keep NSX Manager VMs on separate hosts.

NSX Manager Deployment, Platform, and Installation Requirements

The following table details the NSX Manager deployment, platform, and installation requirements.

Supported deployment methods
  • OVA/OVF
Supported platforms

See NSX Manager VM and Host Transport Node System Requirements.

On ESXi, it is recommended that the NSX Manager appliance be installed on shared storage.

IP address

An NSX Manager must have a static IP address. You can change the IP address after installation. Both IPv4 and IPv6 are supported: you can choose IPv4 only or dual stack (both IPv4 and IPv6).

Note: If you choose IPv4 only, then the services that NSX Manager uses (for example, SNMP, NTP, and vIDM) must also have IPv4 addresses.
NSX appliance password
  • At least 12 characters
  • At least one lower-case letter
  • At least one upper-case letter
  • At least one digit
  • At least one special character
  • At least five different characters
  • Default password complexity rules are enforced by the following Linux PAM module arguments:
    • retry=3: The maximum number of times a new password can be entered (here, at most 3 times) before the module returns an error.
    • minlen=12: The minimum acceptable size for the new password. In addition to the number of characters, credit (of +1 in length) is given for each kind of character used (other, upper, lower, and digit).
    • difok=0: The minimum number of characters in the new password that must differ from the old password. With difok set to 0, no difference is required, so an exact match with the old password is allowed.
    • lcredit=1: The maximum credit for lower-case letters in the new password. If the password has this many (1) or fewer lower-case letters, each one counts +1 toward meeting the minlen value.
    • ucredit=1: The maximum credit for upper-case letters in the new password. If the password has this many (1) or fewer upper-case letters, each one counts +1 toward meeting the minlen value.
    • dcredit=1: The maximum credit for digits in the new password. If the password has this many (1) or fewer digits, each one counts +1 toward meeting the minlen value.
    • ocredit=1: The maximum credit for other (special) characters in the new password. If the password has this many (1) or fewer other characters, each one counts +1 toward meeting the minlen value.
    • enforce_for_root: The complexity rules are also enforced when the password is set for the root user.
    Note: For details on how the Linux PAM module checks the password against dictionary words, refer to its man page.

    For example, avoid simple and systematic passwords such as VMware123!123 or VMware12345. Passwords that meet the complexity standards are not simple or systematic but combine upper- and lower-case letters, digits, and special characters, such as VMware123!45, VMware 1!2345, or VMware@1az23x.
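The documented rules can be approximated with a quick pre-check before setting a password. The sketch below mirrors the listed requirements (length, character classes, distinct characters); it does not reproduce the exact PAM credit arithmetic:

```shell
# Rough pre-check of a candidate password against the documented rules:
# >=12 characters, at least one lower-case letter, one upper-case letter,
# one digit, one special character, and at least 5 different characters.
check_password() {
  pw="$1"
  [ "${#pw}" -ge 12 ] || { echo "too short"; return 1; }
  case "$pw" in *[a-z]*) ;; *) echo "needs a lower-case letter"; return 1 ;; esac
  case "$pw" in *[A-Z]*) ;; *) echo "needs an upper-case letter"; return 1 ;; esac
  case "$pw" in *[0-9]*) ;; *) echo "needs a digit"; return 1 ;; esac
  case "$pw" in *[!a-zA-Z0-9]*) ;; *) echo "needs a special character"; return 1 ;; esac
  distinct=$(printf '%s' "$pw" | fold -w1 | sort -u | wc -l)
  [ "$distinct" -ge 5 ] || { echo "needs at least 5 different characters"; return 1; }
  echo "ok"
}

check_password 'VMware123!45'   # prints "ok"
check_password 'vmware123456'   # prints "needs an upper-case letter"
```

A password that fails this pre-check would also be rejected by the appliance, so checking up front avoids a failed first-login password change.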

Hostname

When installing NSX Manager, specify a hostname that does not contain invalid characters such as an underscore, or special characters such as a dot (.). If the hostname contains any invalid or special character, the hostname is set to nsx-manager after deployment.

For more information about hostname restrictions, see https://tools.ietf.org/html/rfc952 and https://tools.ietf.org/html/rfc1123.
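A hostname can be sanity-checked against the RFC 1123 label rules (letters, digits, and hyphens only; no leading or trailing hyphen; labels of 1 to 63 characters) before deployment. A minimal sketch using a shell regular expression:

```shell
# Validate each dot-separated label: only letters, digits, and hyphens,
# no leading/trailing hyphen, 1-63 characters per label (RFC 1123).
valid_hostname() {
  printf '%s' "$1" | grep -Eq \
    '^([A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$' \
    && echo "valid" || echo "invalid"
}

valid_hostname 'nsx-manager.corp.com'   # prints "valid"
valid_hostname 'nsx_manager'            # prints "invalid" (underscore not allowed)
```

Running such a check before deployment avoids the silent fallback to the nsx-manager default hostname.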

VMware Tools

The NSX Manager VM running on ESXi has VMware Tools installed. Do not remove or upgrade VMware Tools.
System
  • Verify that the system requirements are met. See System Requirements.
  • Verify that the required ports are open. See Ports and Protocols.
  • Verify that a datastore is configured and accessible on the ESXi host.
  • Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP Server IP or FQDN for the NSX Manager to use.
  • Create a management VDS and target VM port group in vCenter. Place the NSX appliances onto this management VDS port group network. See Prepare a vSphere Distributed Switch for NSX.
    Multiple management networks can be used as long as the NSX Manager nodes have consistent connectivity and the latency between them stays within the recommended limit.
    Note: If you plan to use a Cluster VIP, all NSX Manager appliances must belong to the same subnet.
  • Plan your NSX Manager IP and NSX Manager Cluster VIP addressing scheme.
    Note: Verify that you have the hostname for NSX Manager to use. The hostname must be a fully qualified domain name (for example, hostname.domain.com). An FQDN is required if the NSX installation is dual stack (IPv4 and IPv6) and/or if you plan to configure CA-signed certificates.
OVF Privileges

Verify that you have adequate privileges to deploy an OVF template on the ESXi host.

A management tool that can deploy OVF templates, such as VMware vCenter or the vSphere Client, is required. The OVF deployment tool must support configuration options to allow manual configuration.

The OVF Tool version must be 4.0 or later.
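For an unattended command-line deployment, OVF Tool can pass the appliance settings as OVF properties. The sketch below only echoes the command (dry run); the OVA path, target locator, and property names such as nsx_ip_0 are illustrative and should be confirmed against your NSX version's OVA before running:

```shell
# Dry-run sketch of an ovftool deployment of the NSX Manager OVA.
# NSX_OVA and TARGET are placeholders; the --prop names are illustrative
# and must be checked against the properties exposed by your OVA.
NSX_OVA="nsx-unified-appliance.ova"
TARGET="vi://administrator@vsphere.local@vcenter.corp.local/DC/host/Cluster"

CMD="ovftool --name=nsx-manager-01 --deploymentOption=medium \
--datastore=shared-ds --network=mgmt-pg --acceptAllEulas \
--prop:nsx_ip_0=192.168.60.5 --prop:nsx_netmask_0=255.255.255.0 \
--prop:nsx_gateway_0=192.168.60.1 --prop:nsx_hostname=nsxmgr.corp.com \
$NSX_OVA $TARGET"

echo "$CMD"   # remove the echo (run $CMD directly) to perform the deployment
```

The static IP property must be supplied, since a static IP address is mandatory for NSX Manager.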

Client Plug-in

The Client Integration Plug-in must be installed.

Certificates

If you plan to configure internal VIP on a NSX Manager cluster, you can apply a different certificate to each NSX Manager node of the cluster. See Configure a Virtual IP Address for a Cluster.

If you plan to configure an external load balancer, ensure only a single certificate is applied to all NSX Manager cluster nodes. See Configuring an External Load Balancer.
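One way to confirm which certificate each node presents is to compare SHA-256 fingerprints. The sketch below generates a throwaway self-signed certificate locally just to demonstrate the fingerprint command; against a live cluster you would feed each node's certificate (for example, retrieved with openssl s_client -connect <node>:443, where the node address is a placeholder) to the same x509 command and compare the output across nodes:

```shell
# Demonstrate certificate fingerprinting with a throwaway self-signed cert.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=nsxmgr.corp.com" \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# Run the same fingerprint command against each node's certificate; for an
# external load balancer, every node must report an identical fingerprint.
FP=$(openssl x509 -in "$tmpdir/cert.pem" -noout -fingerprint -sha256)
echo "$FP"
rm -rf "$tmpdir"
```

Differing fingerprints behind an external load balancer indicate that the single-certificate requirement is not met.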

Note: On a fresh install of NSX Manager, after a reboot, or after an admin password change when prompted on first login, it might take several minutes for NSX Manager to start.

NSX Manager Installation Scenarios

Important: When you install NSX Manager from an OVA or OVF file on a standalone host, either from the vSphere Client or from the command line, OVA/OVF property values such as user names and passwords are not validated before the VM is powered on. However, the static IP address field is mandatory for installing NSX Manager. When you install NSX Manager as a managed host in VMware vCenter, OVA/OVF property values such as user names and passwords are validated before the VM is powered on.
  • If you specify a user name for any local user, the name must be unique. If you specify a duplicate name, it is ignored and the default names (for example, admin and audit) are used.
  • If the password for the root or admin user does not meet the complexity requirements, you must log in to NSX Manager through SSH or at the console as root with the password vmware, or as admin with the password default. You are then prompted to change the password.
  • If the password for another local user (for example, audit) does not meet the complexity requirements, the user account is disabled. To enable the account, log in to NSX Manager through SSH or at the console as the admin user and run the command set user local_user_name to set the local user's password (the current password is an empty string). You can also reset passwords in the UI under System > User Management > Local Users.
Caution: Changes made to NSX while logged in with the root user credentials might cause system failure and potentially impact your network. Make changes with the root user credentials only under the guidance of the VMware Support team.
Note: The core services on the appliance do not start until a password with sufficient complexity is set.

After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by powering off the VM and modifying the OVA settings from VMware vCenter.

Configuring NSX Manager for Access by the DNS Server

By default, transport nodes access NSX Managers by IP address. However, access can also be based on the DNS names (FQDNs) of the NSX Managers.

You enable FQDN usage by publishing the FQDNs of the NSX Managers.

Note: Enabling FQDN usage (DNS) on NSX Managers is required for multisite deployments. (It is optional for all other deployment types.) See Multisite Deployment of NSX in the NSX Administration Guide.

Publishing the FQDNs of the NSX Managers

  • On the DNS server, configure forward and reverse lookup entries for the NSX Manager nodes. Configure a short TTL (for example, 600 seconds) for the FQDN records.
  • Use the NSX Manager API to publish the FQDNs so that transport nodes can reach the NSX Manager nodes by DNS name.

Example request: PUT https://<nsx-mgr>/api/v1/configs/management

{
  "publish_fqdns": true,
  "_revision": 0
}

Example response:

{
  "publish_fqdns": true,
  "_revision": 1
}

See the NSX API Guide for details.
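The request above can be issued with curl. The sketch below only echoes the command (dry run): <nsx-mgr> and the admin credentials are placeholders, and the current _revision value should first be read with a GET on the same URL, since the PUT must carry the latest revision:

```shell
# Dry-run sketch of the publish_fqdns API call. <nsx-mgr> and the admin
# user are placeholders; fetch the current _revision with
# GET /api/v1/configs/management before issuing the PUT.
CURL_CMD='curl -k -u admin -X PUT -H "Content-Type: application/json" -d "{\"publish_fqdns\": true, \"_revision\": 0}" https://<nsx-mgr>/api/v1/configs/management'
echo "$CURL_CMD"
```

A successful response echoes publish_fqdns as true with an incremented _revision, as shown in the example response above.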

Note: After publishing the FQDNs, validate access by the transport nodes as described in the next section.

Validating Access via FQDN by Transport Nodes

After publishing the FQDNs of the NSX Managers, verify that the transport nodes are successfully accessing the NSX Managers.

Using SSH, log in to a transport node, such as a hypervisor or Edge node, and run the get controllers CLI command.

Example response:
Controller IP   Port  SSL      Status     Is Physical Master  Session State  Controller FQDN
192.168.60.5    1235  enabled  connected  true                up             nsxmgr.corp.com
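When scripting this validation, the Controller FQDN column can be extracted from the get controllers output and checked for each controller. A minimal sketch, run here against the sample output above:

```shell
# Extract the Controller FQDN column (the last field of each data row)
# from 'get controllers' output, skipping the header line.
parse_fqdns() {
  awk 'NR > 1 { print $NF }'
}

parse_fqdns <<'EOF'
Controller IP   Port  SSL      Status     Is Physical Master  Session State  Controller FQDN
192.168.60.5    1235  enabled  connected  true                up             nsxmgr.corp.com
EOF
```

For the sample row this prints nsxmgr.corp.com; a row whose last field is still an IP address indicates a controller that the transport node is not yet reaching by FQDN.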