Before you upgrade NSX-T Data Center, perform the pre-upgrade tasks to ensure that the upgrade is successful.
Procedure
- If you are upgrading from a version earlier than NSX-T Data Center 3.0, provision a secondary disk of exactly 100 GB capacity on all NSX Manager appliances.
- For an ESXi host, log in to vCenter Server, navigate to the NSX Manager VM, and add a disk of exactly 100 GB capacity. (A scripted alternative using govc is sketched after this procedure.)
- For a KVM host:
- Create a disk of 100 GB capacity:
qemu-img create -f qcow2 nsx-unified-appliance-secondary.qcow2 100G
- Create an XML file ( /<download folder>/<nsx_manager_vm_name_storage_file.xml>) for the additional storage:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/<diskPath>/nsx-unified-appliance-secondary.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
- Make the VM persistent using the following commands:
virsh dumpxml <NSX Manager VM> > /<download folder>/<nsx_manager_vm_name.xml>
virsh define /<download folder>/<nsx_manager_vm_name.xml>
virsh list --all
- Attach the secondary disk to the NSX Manager appliance:
virsh attach-device --config <NSX Manager VM> /<download folder>/<nsx_manager_vm_name_storage_file.xml>
- Shut down and start the NSX Manager appliance:
virsh shutdown <NSX Manager VM>
virsh start <NSX Manager VM>
- Repeat the process for the other NSX Manager appliances in the cluster.
Note: Reboot the appliance if the secondary disk is not detected by the Upgrade Coordinator. Run the pre-check to ensure that the secondary disk is detected and proceed with the other NSX Manager appliances.
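For the ESXi case, if you prefer to add the secondary disk from the command line rather than the vSphere Client, a tool such as govc can attach it. This is a minimal sketch; the vCenter address, credentials, VM name, and datastore are placeholders, and the official procedure remains the vSphere Client steps above.
# Point govc at vCenter (credentials and address are placeholders)
export GOVC_URL='https://<vcenter-fqdn>' GOVC_USERNAME='<administrator@vsphere.local>' GOVC_PASSWORD='<password>' GOVC_INSECURE=1
# Add a 100 GB secondary disk to the NSX Manager VM
govc vm.disk.create -vm '<NSX Manager VM>' -name '<NSX Manager VM>/secondary' -size 100G -ds '<datastore>'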
- If you are upgrading from a version earlier than NSX-T Data Center 3.0, disable inter-SR (service router) routing before you begin the upgrade. For more information on upgrade scenarios and workaround options, see the VMware knowledge base article at https://kb.vmware.com/s/article/85288.
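The knowledge base article describes the affected scenarios and supported workarounds in detail. As a hedged illustration only, disabling inter-SR iBGP on a Tier-0 gateway through the Policy API might look like the following; the manager address, Tier-0 ID, and locale-services ID are placeholders, and you should follow the procedure in the KB article that matches your deployment.
# Illustrative only: turn off inter-SR iBGP on one Tier-0 gateway (IDs are placeholders)
curl -k -u admin -X PATCH -H 'Content-Type: application/json' -d '{"inter_sr_ibgp": false}' 'https://<nsx-manager>/policy/api/v1/infra/tier-0s/<tier0-id>/locale-services/<locale-services-id>/bgp'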
- Ensure that your transport node profiles have the appropriate transport zones added to them. NSX Manager may not display the list of transport node profiles if any profile has no transport zones added.
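One way to spot-check the profiles from the command line, assuming the /api/v1/transport-node-profiles endpoint and the jq utility are available, is to list each profile with the transport zones attached to its host switches; the manager address is a placeholder, and a profile that prints an empty transport_zones array needs attention.
# List each transport node profile and its transport zone IDs (manager address is a placeholder)
curl -k -u admin -s 'https://<nsx-manager>/api/v1/transport-node-profiles' | jq '.results[] | {name: .display_name, transport_zones: [.host_switch_spec.host_switches[].transport_zone_endpoints[]?.transport_zone_id]}'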
- Ensure that you back up the NSX Manager before you start the upgrade process. See the NSX-T Data Center Administration Guide.
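If a backup file server is already configured, a one-off backup can also be triggered from the command line. This sketch assumes the cluster backup API action shown below; confirm the call against the API guide for your version.
# Trigger an on-demand backup to the configured remote file server (hedged example)
curl -k -u admin -X POST 'https://<nsx-manager>/api/v1/cluster?action=backup_to_remote'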
- Ensure that your host OS is supported for NSX Manager. See Supported Hosts for NSX Managers in the NSX-T Data Center Administration Guide.
- Disable automatic backups before you start the upgrade process. See the NSX-T Data Center Administration Guide for more information on configuring backups.
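A scripted way to disable the schedule, assuming the /api/v1/cluster/backups/config endpoint, the backup_enabled field, and the jq utility, is to read the current configuration, clear the flag, and write the configuration back; treat the endpoint and field names as assumptions to verify in the API guide for your version.
# Read the current backup configuration and set backup_enabled to false (hedged example)
curl -k -u admin -s 'https://<nsx-manager>/api/v1/cluster/backups/config' | jq '.backup_enabled = false' > /tmp/backup-config.json
# Write the modified configuration back to disable scheduled backups
curl -k -u admin -X PUT -H 'Content-Type: application/json' -d @/tmp/backup-config.json 'https://<nsx-manager>/api/v1/cluster/backups/config'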
- Terminate any active SSH sessions or local shell scripts that may be running on the NSX Manager or NSX Edge nodes before you begin the upgrade process.
- Ensure that the appropriate communication ports are open from the Transport and Edge nodes to the NSX Managers (a quick connectivity spot-check follows this list):
- TCP port 1234 to NSX Manager
- TCP port 1235 to NSX Controller
- TCP port 9040 between the NSX Manager nodes.
- Keep port 5671 open during the upgrade process.
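As a quick connectivity spot-check, a tool such as nc (netcat) can confirm that the ports are reachable; the addresses are placeholders, and the port 9040 check assumes shell access with nc on an NSX Manager node.
# From a transport or Edge node to an NSX Manager (addresses are placeholders)
nc -zv <nsx-manager-ip> 1234
nc -zv <nsx-manager-ip> 1235
nc -zv <nsx-manager-ip> 5671
# From one NSX Manager node to another
nc -zv <other-nsx-manager-ip> 9040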
NSX Cloud Note: Starting with NSX-T Data Center 2.5.1, NSX Cloud supports communication on port 80 between the Cloud Service Manager appliance installed on-prem and the NSX Public Cloud Gateway installed in your public cloud VPC/VNet. NSX-T Data Center versions 2.5.0 and earlier require port 7442 for this communication. During the upgrade from versions 2.5.0 and earlier to 2.5.1, keep port 7442 open. See https://ports.esp.vmware.com/home/NSX for more information.
- For NSX-T Data Center 3.0, you need a valid license to use licensed features such as Tier-0 and Tier-1 gateways, segments, and NSX Intelligence. Ensure that you have a valid license.
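To confirm which licenses are installed before you upgrade, the licenses API lists them; the manager address is a placeholder, and this is an optional spot-check rather than part of the documented procedure.
# List the licenses installed on the NSX Manager cluster
curl -k -u admin -s 'https://<nsx-manager>/api/v1/licenses'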
- Ensure that you perform an automated pre-check to verify that the NSX-T Data Center components are ready for upgrade. The pre-check process scans for component activity, version compatibility, component status of the hosts, NSX Edge, and management plane. Resolve any warning notifications to avoid problems during the upgrade.
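The pre-check is normally run from the Upgrade Coordinator UI. If you want to script it, recent releases expose an upgrade API action for it; the action name below is an assumption to verify against the NSX-T Upgrade API guide for your version.
# Hedged example: run the upgrade coordinator pre-checks via the API
curl -k -u admin -X POST 'https://<nsx-manager>/api/v1/upgrade?action=execute_pre_upgrade_checks'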
- Delete all expired user accounts before you begin upgrade. Upgrade for NSX-T Data Center on vSphere fails if your exception list for vSphere lockdown mode includes expired user accounts. For more information on accounts with access privileges in lockdown mode, see Specifying Accounts with Access Privileges in Lockdown Mode in the vSphere Security Guide.
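On each ESXi host, listing the local accounts can help you spot stale entries before you clean up the lockdown mode exception list; the removal command is shown for illustration only and should be run only for accounts you have confirmed are expired.
# List local ESXi accounts to identify expired ones
esxcli system account list
# Remove a confirmed-expired local account (account name is a placeholder)
esxcli system account remove -i <expired-user>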