NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls.

NSX Manager provides a system view and is the management component of NSX-T Data Center.

For high availability, NSX-T Data Center supports a management cluster of three NSX Managers; deploying a management cluster is recommended for production environments. Starting with NSX-T Data Center 3.1, a cluster consisting of a single NSX Manager node is also supported.

In a vSphere environment, NSX Manager supports the following vCenter Server functions:
  • vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and clusters.
  • vCenter Server can use the Storage vMotion function to live migrate the file system of an NSX Manager across datastores.
  • vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX Manager across hosts and clusters.
  • vCenter Server can apply anti-affinity rules to keep NSX Manager nodes on separate hosts within a cluster.

NSX Manager Deployment, Platform, and Installation Requirements

The following table details the NSX Manager deployment, platform, and installation requirements. Each requirement is followed by its description.

Supported deployment methods
  • OVA/OVF
  • QCOW2
Supported platforms

See NSX Manager VM and Host Transport Node System Requirements.

On ESXi, it is recommended that the NSX Manager appliance be installed on shared storage.

IP address
An NSX Manager must have a static IP address. You can change the IP address after installation. Only IPv4 addresses are supported.
NSX-T Data Center appliance password
  • At least 12 characters
  • At least one lower-case letter
  • At least one upper-case letter
  • At least one digit
  • At least one special character
  • At least five different characters
  • Default password complexity rules are enforced by the following Linux PAM module arguments:
    • retry=3: The maximum number of times (3) a new password can be entered before the module returns an error.
    • minlen=12: The minimum acceptable length for the new password. In addition to the number of characters in the new password, credit (of +1 in length) is given for each kind of character used (other, upper-case, lower-case, and digit).
    • difok=0: The minimum number of characters that must differ between the old and new password. With difok set to 0, there is no requirement for the new password to differ from the old one; an exact match is allowed.
    • lcredit=1: The maximum credit for lower-case letters in the new password. If the password contains 1 or fewer lower-case letters, each one counts +1 toward meeting the current minlen value.
    • ucredit=1: The maximum credit for upper-case letters in the new password. If the password contains 1 or fewer upper-case letters, each one counts +1 toward meeting the current minlen value.
    • dcredit=1: The maximum credit for digits in the new password. If the password contains 1 or fewer digits, each one counts +1 toward meeting the current minlen value.
    • ocredit=1: The maximum credit for other (special) characters in the new password. If the password contains 1 or fewer other characters, each one counts +1 toward meeting the current minlen value.
    • enforce_for_root: The complexity rules are also enforced when the password is set for the root user.
    Note: For more details on the Linux PAM module that checks the password against dictionary words, refer to its man page.

    For example, avoid simple and systematic passwords such as VMware123!123 or VMware12345. Passwords that meet the complexity standards are not simple or systematic; they combine letters, digits, and special characters, such as VMware123!45, VMware 1!2345, or VMware@1az23x.

Hostname
When installing NSX Manager, specify a hostname that does not contain invalid characters such as an underscore or special characters such as a dot ("."). If the hostname contains any invalid or special character, the hostname is set to nsx-manager after deployment. (A hostname format check sketch follows this table.)

For more information about hostname restrictions, see https://tools.ietf.org/html/rfc952 and https://tools.ietf.org/html/rfc1123.

VMware Tools
The NSX Manager VM running on ESXi has VMware Tools installed. Do not remove or upgrade VMware Tools.
System
  • Verify that the system requirements are met. See System Requirements.
  • Verify that the required ports are open. See Ports and Protocols.
  • Verify that a datastore is configured and accessible on the ESXi host.
  • Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP Server IP or FQDN list for the NSX Manager or Cloud Service Manager to use.
  • If you do not already have one, create the target VM port group network. Place the NSX-T Data Center appliances on a management VM network.

    If you have multiple management networks, you can add static routes to the other networks from the NSX-T Data Center appliance.

  • Plan your NSX Manager IPv4 IP addressing scheme.
OVF Privileges

Verify that you have adequate privileges to deploy an OVF template on the ESXi host.

You need a management tool that can deploy OVF templates, such as vCenter Server or the vSphere Client. The OVF deployment tool must support configuration options to allow for manual configuration.

The OVF Tool version must be 4.0 or later.

Client Plug-in

The Client Integration Plug-in must be installed.

Certificates

If you plan to configure an internal VIP on an NSX Manager cluster, you can apply a different certificate to each NSX Manager node of the cluster. See Configure a Virtual IP Address for a Cluster.

If you plan to configure an external load balancer, ensure only a single certificate is applied to all NSX Manager cluster nodes. See Configuring an External Load Balancer.

Note: After a fresh NSX Manager installation, a reboot, or an admin password change when prompted on first login, it might take several minutes for NSX Manager to start.
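The hostname restriction in the table above can be checked before deployment. The following is a minimal Python sketch (not VMware tooling) that applies the single-label rules from RFC 952 and RFC 1123 referenced in the table: letters, digits, and hyphens only, no leading or trailing hyphen, and no underscores or dots.

import re

# Minimal sketch (not VMware tooling): check a single hostname label against
# the RFC 952 / RFC 1123 rules referenced in the requirements table --
# letters, digits, and hyphens only, 1-63 characters, no leading or trailing
# hyphen, and no "_" or "." characters.
_LABEL = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_nsx_hostname(hostname: str) -> bool:
    return bool(_LABEL.match(hostname))

if __name__ == "__main__":
    for name in ("nsx-mgr-01", "nsx_mgr_01", "nsx.mgr.01"):
        print(name, is_valid_nsx_hostname(name))
    # Prints True for nsx-mgr-01 and False for the names containing "_" or "."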

NSX Manager Installation Scenarios

Important: When you install NSX Manager from an OVA or OVF file, either from the vSphere Client or from the command line as a standalone host, OVA/OVF property values such as user names and passwords are not validated before the VM is powered on. However, the static IP address field is mandatory for installing NSX Manager. When you install NSX Manager as a managed host in vCenter Server, OVA/OVF property values such as user names and passwords are validated before the VM is powered on.
  • If you specify a user name for any local user, the name must be unique. If you specify the same name, it is ignored and the default names (for example, admin and audit) are used.
  • If the password for the root or admin user does not meet the complexity requirements, you must log in to NSX Manager through SSH or at the console as root with password vmware and admin with password default. You are prompted to change the password.
  • If the password for other local users (for example, audit) does not meet the complexity requirements, the user account is disabled. To enable the account, log in to NSX Manager through SSH or at the console as the admin user and run the command set user local_user_name to set the local user's password (the current password is an empty string). You can also reset passwords in the UI using System > User Management > Local Users.
Caution: Changes made to NSX-T Data Center while logged in with the root user credentials might cause system failure and potentially impact your network. Make changes using the root user credentials only with the guidance of the VMware Support team.
Note: The core services on the appliance do not start until a password with sufficient complexity is set.
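Because a password that fails the complexity rules can disable an account or block the core services from starting, it can help to pre-check candidate passwords. The following Python sketch approximates the rules listed in the requirements table (12 or more characters; at least one lower-case letter, upper-case letter, digit, and special character; at least five different characters); it does not replicate the PAM credit arithmetic, which remains the final authority on the appliance.

import string

def meets_nsx_password_rules(password: str) -> bool:
    # Approximation of the documented complexity rules; the PAM configuration
    # on the appliance is the final authority.
    checks = [
        len(password) >= 12,
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
        len(set(password)) >= 5,
    ]
    return all(checks)

if __name__ == "__main__":
    for candidate in ("VMware123!45", "VMware12345"):
        print(candidate, meets_nsx_password_rules(candidate))
    # VMware123!45 passes; VMware12345 fails (too short, no special character)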

After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by powering off the VM and modifying the OVA settings from vCenter Server.
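For the command-line installation path described above, OVF Tool (version 4.0 or later, per the requirements table) can deploy the appliance and set its OVF properties in one step. The following Python sketch wraps an ovftool invocation; the target vi:// URL, datastore, network, deployment size, and nsx_* property names are illustrative assumptions, so list the properties exposed by your specific OVA (for example, by running ovftool against the OVA with no target) and adjust before use. Password properties are omitted here.

import subprocess

# Sketch of a command-line deployment with OVF Tool (4.0 or later). The
# vi:// target, datastore, network, deployment size, and nsx_* property
# names are assumptions for illustration; confirm them against your OVA.
# ovftool prompts for vCenter credentials when they are not embedded in
# the target URL.
cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--powerOn",
    "--name=nsx-manager-01",
    "--deploymentOption=medium",
    "--datastore=shared-datastore-01",
    "--network=management-vm-network",
    "--prop:nsx_hostname=nsx-manager-01",
    "--prop:nsx_ip_0=192.168.60.5",
    "--prop:nsx_netmask_0=255.255.255.0",
    "--prop:nsx_gateway_0=192.168.60.1",
    "--prop:nsx_dns1_0=192.168.60.10",
    "--prop:nsx_ntp_0=ntp.corp.local",
    "nsx-unified-appliance.ova",
    "vi://vcenter.corp.local/Datacenter/host/Cluster",
]
subprocess.run(cmd, check=True)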

Configuring NSX Manager for Access by the DNS Server

By default, transport nodes access NSX Managers by IP address. However, access can also be based on the DNS names (FQDNs) of the NSX Managers.

You enable FQDN usage by publishing the FQDNs of the NSX Managers.

Note: Enabling FQDN usage (DNS) on NSX Managers is required for multisite Lite deployments. (It is optional for all other deployment types.) See Multisite Deployment of NSX-T Data Center in the NSX-T Data Center Administration Guide.

Publishing the FQDNs of the NSX Managers

After installing the NSX-T Data Center core components, to enable access using FQDN, you must set up the forward and reverse lookup entries for the manager nodes on the DNS server.

Important: It is highly recommended that you configure both the forward and reverse lookup entries for the NSX Managers' FQDNs with a short TTL, for example, 600 seconds.
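Before publishing the FQDNs, you can confirm that the forward and reverse entries resolve consistently. The following Python sketch (an example check, not VMware tooling) uses the standard socket module; the FQDN and IP address are placeholders taken from the sample output later in this section.

import socket

# Example check (not VMware tooling): confirm that the forward and reverse
# DNS entries for an NSX Manager node agree. The FQDN and IP are placeholders.
fqdn = "nsxmgr.corp.com"
expected_ip = "192.168.60.5"

forward_ip = socket.gethostbyname(fqdn)              # forward (A record) lookup
reverse_fqdn = socket.gethostbyaddr(expected_ip)[0]  # reverse (PTR record) lookup

print(f"forward: {fqdn} -> {forward_ip}")
print(f"reverse: {expected_ip} -> {reverse_fqdn}")

if forward_ip != expected_ip or reverse_fqdn.rstrip(".").lower() != fqdn:
    raise SystemExit("Forward and reverse DNS entries do not match; fix them before publishing FQDNs.")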

In addition, you must enable publishing of the NSX Manager FQDNs using the NSX-T API.

Example request: PUT https://<nsx-mgr>/api/v1/configs/management

{
  "publish_fqdns": true,
  "_revision": 0
}

Example response:

{
  "publish_fqdns": true,
  "_revision": 1
}

See the NSX-T Data Center API Guide for details.
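The same change can be scripted against the API. The following Python sketch assumes the third-party requests library, admin credentials, and a reachable manager node; it reads /api/v1/configs/management to pick up the current _revision and then enables publish_fqdns, mirroring the request and response above. The host name and credentials are placeholders, and verify=False is acceptable only with lab certificates.

import requests

# Sketch using the third-party requests library (an assumption, not part of
# NSX-T): read the current management config to obtain _revision, then enable
# publish_fqdns, mirroring the PUT example above. Host and credentials are
# placeholders; verify=False is acceptable only with lab certificates.
NSX_MGR = "https://nsxmgr.corp.com"

session = requests.Session()
session.auth = ("admin", "password-placeholder")
session.verify = False

config = session.get(f"{NSX_MGR}/api/v1/configs/management").json()
config["publish_fqdns"] = True

response = session.put(f"{NSX_MGR}/api/v1/configs/management", json=config)
response.raise_for_status()
print(response.json())  # Expect "publish_fqdns": true and an incremented "_revision"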

Note: After publishing the FQDNs, validate access by the transport nodes as described in the next section.

Validating Access via FQDN by Transport Nodes

After publishing the FQDNs of the NSX Managers, verify that the transport nodes are successfully accessing the NSX Managers.

Using SSH, log in to a transport node such as a hypervisor or Edge node, and run the get controllers CLI command.

Example response:
Controller IP   Port  SSL      Status      Is Physical Master   Session State   Controller FQDN
192.168.60.5    1235  enabled  connected   true                 up              nsxmgr.corp.com
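When many transport nodes must be checked, the same verification can be scripted over SSH. The following Python sketch assumes the third-party paramiko library and an Edge transport node whose SSH session drops directly into the NSX CLI; on an ESXi host, the command would instead be run through nsxcli. The node name and credentials are placeholders.

import paramiko

# Sketch (assumes the third-party paramiko library): run "get controllers" on
# an Edge transport node over SSH and inspect the output for the expected
# Controller FQDN. Node name and credentials are placeholders.
node = "edge-node-01.corp.com"
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(node, username="admin", password="password-placeholder")

_, stdout, _ = client.exec_command("get controllers")
output = stdout.read().decode()
client.close()

print(output)
if "nsxmgr.corp.com" not in output:
    raise SystemExit(f"{node}: expected Controller FQDN not found in 'get controllers' output")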