Install NSX Manager on a KVM host running on a bare metal server. Do not install NSX Manager on a KVM host running as a virtual appliance on another host (nested environment).
The same QCOW2 file can be used to deploy three different types of appliances: NSX Manager, NSX Cloud Service Manager for NSX Cloud, and Global Manager for Federation.
- Verify that KVM is set up. See Set Up KVM.
- Verify that you have privileges to deploy a QCOW2 image on the KVM host.
- Verify that the password in the guestinfo adheres to the password complexity requirements so that you can log in after installation. See NSX Manager Installation.
- Familiarize yourself with the NSX Manager resource requirements. See NSX Manager VM and Host Transport Node System Requirements.
- If you plan to use Ubuntu as the host OS, it is recommended to install Ubuntu 18.04 before installing NSX Manager on the KVM host.
- If you are deploying an NSX Manager on an Ubuntu 18.04 KVM host for a production environment, ensure that the KVM host is not running as a virtual machine on an ESXi host. However, if you want to deploy an NSX Manager in a nested KVM environment for proof-of-concept purposes, deploy the NSX Manager in the QEMU user space by using --virt-type qemu.
- Do not deploy NSX Manager on a single disk. If you install NSX Manager on a single disk, some startup services might fail to come up.
- Download NSX Manager QCOW2 images (for primary and secondary disk) from My VMware: https://www.vmware.com/go/download-nsx-t.
Select the version to download and click Go to Downloads. Download the QCOW2 files.
- Make three copies of the images on the KVM machine that is going to run the NSX Manager, using SCP or rsync.
- (Ubuntu only) Add the currently logged in user as a libvirtd user:
adduser $USER libvirtd
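Group membership only takes effect in a new login session; as a quick sanity check (a sketch, not an NSX-specific command):

```shell
# Confirm that the current user is now a member of the libvirtd group.
# Group changes apply only to new login sessions, so log out and back in
# after running adduser before checking.
if id -nG | grep -qw libvirtd; then
  echo "in libvirtd group"
else
  echo "not yet in libvirtd group"
fi
```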
- In the same directory where you saved the QCOW2 images, create three files named guestinfo.xml (one per manager node) for the primary disk image and populate each with the NSX Manager VM's properties. You do not need to create any files for the secondary disk image.
Your passwords must comply with the password strength restrictions.
- At least 12 characters
- At least one lower-case letter
- At least one upper-case letter
- At least one digit
- At least one special character
- At least five different characters
- Default password complexity rules are enforced by the following Linux PAM module arguments:
Note: For more details on the Linux PAM module that checks the password against dictionary words, refer to its man page.
retry=3: The maximum number of times a new password can be entered before the module returns with an error; for this argument, at most 3 times.
minlen=12: The minimum acceptable size for the new password. In addition to the number of characters in the new password, credit (of +1 in length) is given for each different kind of character (other, upper, lower and digit).
difok=0: The minimum number of bytes that must be different in the new password, indicating similarity between the old and new password. With a value of 0 assigned to difok, there is no requirement for any byte of the old and new password to be different; an exact match is allowed.
lcredit=1: The maximum credit for having lower-case letters in the new password. If you have 1 or fewer lower-case letters, each letter counts +1 toward meeting the current minlen value.
ucredit=1: The maximum credit for having upper-case letters in the new password. If you have 1 or fewer upper-case letters, each letter counts +1 toward meeting the current minlen value.
dcredit=1: The maximum credit for having digits in the new password. If you have 1 or fewer digits, each digit counts +1 toward meeting the current minlen value.
ocredit=1: The maximum credit for having other (special) characters in the new password. If you have 1 or fewer other characters, each character counts +1 toward meeting the current minlen value.
enforce_for_root: The password complexity rules are also enforced when the password is set for the root user.
For example, avoid simple and systematic passwords such as VMware12345. Passwords that meet complexity standards are not simple and systematic but are a combination of upper-case and lower-case letters, digits, and special characters.
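As a sketch, the listed requirements can be pre-checked in the shell before the password is written into the guestinfo file. check_passwd is a hypothetical helper, not part of NSX, and it does not replicate the PAM credit arithmetic, only the listed rules:

```shell
# Minimal pre-check of a candidate password against the listed rules:
# length >= 12, at least one lower-case letter, upper-case letter, digit,
# and special character, and at least five different characters.
check_passwd() {
  p="$1"
  [ "${#p}" -ge 12 ] || { echo "too short"; return 1; }
  case "$p" in *[a-z]*) ;; *) echo "needs a lower-case letter"; return 1 ;; esac
  case "$p" in *[A-Z]*) ;; *) echo "needs an upper-case letter"; return 1 ;; esac
  case "$p" in *[0-9]*) ;; *) echo "needs a digit"; return 1 ;; esac
  case "$p" in *[!a-zA-Z0-9]*) ;; *) echo "needs a special character"; return 1 ;; esac
  distinct=$(printf '%s' "$p" | fold -w1 | sort -u | wc -l)
  [ "$distinct" -ge 5 ] || { echo "needs five different characters"; return 1; }
  echo "ok"
}

check_passwd 'My!Secure9Passwd'   # prints "ok"
```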
nsx_hostname: Enter the host name for the NSX Manager. The host name must be a valid domain name. Ensure that each part of the host name (domain/subdomain) that is separated by a dot starts with an alphabetic character.
nsx_role: Select the role for the appliance:
- To install an NSX Manager appliance, select the NSX Manager role.
- To install a Global Manager appliance for a Federation deployment, select the NSX Global Manager role.
See Getting Started with Federation for details.
- To install a Cloud Service Manager (CSM) appliance for an NSX Cloud deployment, select the nsx-cloud-service-manager role.
See Overview of Deploying NSX Cloud for details.
nsx_isSSHEnabled: You can enable or disable this property. If enabled, you can log in to the NSX Manager using SSH.
nsx_allowSSHRootLogin: You can enable or disable this property. If enabled, you can log in to the NSX Manager using SSH as the root user. To use this property, nsx_isSSHEnabled must be enabled.
Enter IP addresses for the default gateway, management network IPv4 address, management network netmask, DNS server, and NTP server. For example:
<?xml version="1.0" encoding="UTF-8"?>
<Environment
     xmlns="http://schemas.dmtf.org/ovf/environment/1"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
   <PropertySection>
     <Property oe:key="nsx_cli_passwd_0" oe:value="<password>"/>
     <Property oe:key="nsx_cli_audit_passwd_0" oe:value="<password>"/>
     <Property oe:key="nsx_passwd_0" oe:value="<password>"/>
     <Property oe:key="nsx_hostname" oe:value="nsx-manager1"/>
     <Property oe:key="nsx_role" oe:value="NSX Manager"/>
     <Property oe:key="nsx_isSSHEnabled" oe:value="True"/>
     <Property oe:key="nsx_allowSSHRootLogin" oe:value="True"/>
     <Property oe:key="nsx_dns1_0" oe:value="10.168.110.10"/>
     <Property oe:key="nsx_ntp_0" oe:value="10.168.110.10"/>
     <Property oe:key="nsx_domain_0" oe:value="corp.local"/>
     <Property oe:key="nsx_gateway_0" oe:value="10.168.110.83"/>
     <Property oe:key="nsx_netmask_0" oe:value="255.255.252.0"/>
     <Property oe:key="nsx_ip_0" oe:value="10.168.110.19"/>
   </PropertySection>
</Environment>
Note: In the example, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both enabled. When they are disabled, you cannot SSH or log in to the NSX Manager command line. If you enable nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you cannot log in as root.
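Before writing the file into the image in the next step, a quick sanity check that the required properties are present can save a redeploy. A sketch using a hypothetical helper; the property list matches the example above:

```shell
# Report any required guestinfo properties missing from the given file
# (check_guestinfo is a hypothetical helper, not an NSX command).
check_guestinfo() {
  file="$1"
  required="nsx_passwd_0 nsx_cli_passwd_0 nsx_hostname nsx_role nsx_ip_0 nsx_netmask_0 nsx_gateway_0"
  missing=0
  for key in $required; do
    if ! grep -q "oe:key=\"$key\"" "$file"; then
      echo "missing: $key"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all required properties present"
}

# Usage: check_guestinfo guestinfo.xml
```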
- Use guestfish to write the guestinfo.xml file into the QCOW2 image.
Note: After the guestinfo information is written into a QCOW2 image, the information cannot be overwritten.
sudo guestfish --rw -i -a nsx-unified-appliance-<BuildNumber>.qcow2 upload guestinfo.xml /config/guestinfo
- Deploy the QCOW2 image with the virt-install command.
The vCPU and RAM values are suitable for a large VM. For details on other appliance sizes, see NSX Manager VM and Host Transport Node System Requirements. The network name and portgroup name are specific to your environment. The model must be virtio.
(On RHEL hosts)
sudo virt-install \
--import \
--ram 48000 \
--vcpus 12 \
--name <manager-name> \
--disk path=<manager-qcow2-file-path>,bus=virtio,cache=none \
--disk path=<secondary-qcow2-file-path>,bus=virtio,cache=none \
--network [bridge=<bridge-name> or network=<network-name>],portgroup=<portgroup-name>,model=virtio \
--noautoconsole \
--cpu mode=host-passthrough

Starting install...
Domain installation still in progress. Waiting for installation to complete.
(On Ubuntu hosts)
sudo virt-install \
--import \
--ram 48000 \
--vcpus 12 \
--name <manager-name> \
--disk path=<manager-qcow2-file-path>,bus=virtio,cache=none \
--disk path=<secondary-qcow2-file-path>,bus=virtio,cache=none \
--network [bridge=<bridge-name> or network=<network-name>],portgroup=<portgroup-name>,model=virtio \
--noautoconsole \
--cpu mode=host-passthrough,cache.mode=passthrough

Starting install...
Domain installation still in progress. Waiting for installation to complete.
- Verify that the NSX Manager is deployed.
virsh list --all

 Id    Name            State
---------------------------------
 18    nsx-manager1    running
- Open the NSX Manager console and log in.
virsh console 18

Connected to domain nsx-manager1
Escape character is ^]

nsx-manager1 login: admin
Password:
- After the node boots, log in to the CLI as admin and run the get interface eth0 command to verify that the IP address was applied as expected.
- Enter the get services command to verify that all default services are running.
The following services are not required by default and do not start automatically.
migration-coordinator: This service is used only when running migration coordinator. See the NSX-T Data Center Migration Coordinator Guide before starting this service.
snmp: For information on starting SNMP, see Simple Network Management Protocol in the NSX-T Data Center Administration Guide.
nsx-message-bus: This service is not used in NSX-T Data Center 3.0.
- Verify that your NSX Manager or Global Manager node has the required connectivity.
Make sure that you can perform the following tasks.
- Ping your node from another machine.
- The node can ping its default gateway.
- The node can ping the hypervisor hosts that are in the same network using the management interface.
- The node can ping its DNS server and its NTP Server IP or FQDN list.
- If you enabled SSH, make sure that you can SSH to your node.
If connectivity is not established, make sure that the network adapter of the virtual appliance is in the proper network or VLAN.
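The name-resolution portion of these checks can be sketched with getent, which queries the node's configured resolver. The ntp.corp.local name below is a placeholder based on the corp.local domain from the example; substitute your environment's own server names:

```shell
# Verify that the node can resolve the names it was configured with.
# localhost is only a self-test; replace the other names with your own.
for name in localhost ntp.corp.local; do
  if getent hosts "$name" > /dev/null; then
    echo "$name resolves"
  else
    echo "$name does NOT resolve"
  fi
done
```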
- Exit the KVM console.
- From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.