This section describes the system requirements for the various cloud ecosystems supported by the NSX Advanced Load Balancer.

Note:

The NSX-V Cloud is no longer supported. It is recommended to migrate to an NSX-T cloud connector or switch to no-orchestrator mode with NSX-V.

VMware Ecosystems

The following are the virtual hardware versions used with each NSX Advanced Load Balancer Controller version:

NSX Advanced Load Balancer Controller Version | Virtual Hardware Version
17.2.x | 10.0
18.1.x | 10.0
20.1.1 to 20.1.5 | 10.0
20.1.6 | 11.0
21.1.x | 11.0
22.1.x | 11.0

Note:

When upgrading across releases that use different virtual hardware versions, existing SEs created with the previous virtual hardware version continue to work. However, new SEs are spawned with the updated virtual hardware version.

The NSX Advanced Load Balancer only works with the following switches in a vCenter Cloud:

  • Standard Virtual Switch (vSwitch)

  • Distributed Virtual Switch (vDS)

Also, deploying the Controller and SE OVA directly on an ESX host (for no-access or other cloud-connector types) is not supported. They must be deployed through the vCenter UI, as the deployment requires Open Virtualization Format (OVF) properties to be configured.

As NSX Advanced Load Balancer deployment is supported on ESX version 6.0 and above, it is recommended to use VMware Hardware version 11. For more details, see https://kb.vmware.com/s/article/2007240.

Because VMware Hardware version 11 is supported on every ESX version 6.0 and above, the NSX Advanced Load Balancer Service Engine virtual machine is created with VMware Hardware version 11. Upgrading the hardware version manually is not recommended.

The VMXNET3 network adapter is required for the SE to operate in DPDK mode in VMware.
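The adapter type can be verified before enabling DPDK. The following is a minimal sketch using pyvmomi that lists an SE virtual machine's network adapters and flags any that are not VMXNET3; the vCenter address, credentials, and SE VM name are placeholders, not values from this documentation.

    # Minimal sketch (pyvmomi): confirm an SE VM's NICs are VMXNET3 before enabling DPDK.
    # Placeholders: vCenter address, credentials, and the SE VM name.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    se_vm = next(vm for vm in view.view if vm.name == "Avi-SE-example")  # placeholder SE VM name

    for dev in se_vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            is_vmxnet3 = isinstance(dev, vim.vm.device.VirtualVmxnet3)
            print(dev.deviceInfo.label, "VMXNET3" if is_vmxnet3 else "not VMXNET3 (no DPDK)")

    Disconnect(si)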

VMware NSX-T Interoperability Matrix

For VMware NSX-T Interoperability details, see Product Interoperability Matrix.

Jumbo Frame Driver Support

Supported Drivers | Unsupported Drivers
i40e, mlx5, bnxt, vmxnet3 | ixgbe

  • Jumbo frames are supported in non-DPDK mode and on VLAN interfaces for the supported driver families. However, jumbo frames are not supported on CSP.

    Note:

    The vmxnet3 interface flaps during an MTU change.

  • The KNI MTU cannot exceed 1500, even when a larger NIC MTU is configured.

  • se_mtu versus global_mtu: global_mtu is an SE property used to configure the interface MTU. It can be used to accommodate any encapsulation overhead that can enlarge the packet beyond the 1500-byte MTU.

  • You can replace global_mtu with se_mtu, as se_mtu supports jumbo frames. The se_mtu configuration parameter, if configured, always overrides global_mtu. For instance, if you configure se_mtu to 9000, the system does not use the global_mtu value. See the sketch after this list for an example.

    • Note:

      The global_mtu parameter is retained only for backward compatibility. That is, if you configured global_mtu in an earlier release and then upgraded, global_mtu still takes effect unless you configure se_mtu later.
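The following is a minimal sketch of configuring se_mtu through the Avi Python SDK (avisdk). The Controller address, credentials, SE group name, and API version are placeholders, and the example assumes se_mtu is exposed on the serviceenginegroup object; confirm the exact object and field for your release in the API reference.

    # Minimal sketch: set se_mtu for jumbo frames via the Avi Python SDK (avisdk).
    # Placeholders/assumptions: Controller address, credentials, SE group name, and
    # the assumption that se_mtu is a field on the serviceenginegroup object.
    from avi.sdk.avi_api import ApiSession

    api = ApiSession.get_session("controller.example.com", "admin", "password",
                                 tenant="admin", api_version="22.1.1")

    # Fetch the SE group, set the jumbo-frame MTU, and write the object back.
    se_group = api.get_object_by_name("serviceenginegroup", "Default-Group")
    se_group["se_mtu"] = 9000  # overrides global_mtu when configured
    resp = api.put("serviceenginegroup/%s" % se_group["uuid"], data=se_group)
    print(resp.status_code)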

Supported OpenStack Version

For information on supported versions for OpenStack, see OpenStack Support Matrix.

Note:

The NSX Advanced Load Balancer does not support Neutron DVR mode. It supports Keystone v3 in NSX Advanced Load Balancer Heat resources.

No-Access Mode

No-Orchestrator: KVM (Red Hat/CentOS 7.6 and Ubuntu 16.04) with SR-IOV NICs only. For more information on Linux KVM with SR-IOV data NIC, see Linux KVM with SR-IOV data NIC.

Bare Metal (Linux Server Cloud)

Minimum NSX Advanced Load Balancer Controller Version | Bare Metal Hosts
18.1.2 | OEL 6.9, 7.2, 7.3, 7.4, 7.5; RHEL 7.2, 7.3, 7.4, 7.5; CentOS 7.2, 7.3, 7.4, 7.5
18.1.5 | OEL 7.6; RHEL 7.6; CentOS 7.6
18.2.6 | OEL 7.7; RHEL 7.7; CentOS 7.7
18.2.9, 20.1.1 | OEL 7.8; RHEL 7.8; CentOS 7.8
18.2.12 onwards (in 18.2.x), 20.1.3 onwards (in 20.1.x) | OEL 7.9; RHEL 7.9; CentOS 7.9
21.1.3 | RHEL 8.4
All versions | Ubuntu 16.04
21.1.3 | Ubuntu 18.04
21.1.3 | Ubuntu 20.04

Note:

  • For OpenShift/Kubernetes clouds, the host OS on OpenShift/Kubernetes nodes can be RHEL 7.9 from version 18.2.12 onwards.

  • The rollback operation is not supported once the host OS is upgraded to RHEL 8.

For more information on supported kernel versions on bare metal hosts, see Kernel Supported Versions in the Bare Metal section.

Bare-Metal NICs

  • For non-DPDK mode, the NSX Advanced Load Balancer supports any server NIC.

  • For DPDK mode (recommended), the NSX Advanced Load Balancer supports the following NICs:

Minimum NSX Advanced Load Balancer Version | NIC Family | NICs Supported
18.2.x and 20.1.x releases | Intel NICs | 82599, X520, X540, X550, X552, X710, XL710, XXV710
18.2.2 and all 20.1.x releases | Intel NICs | XXV710
18.2.x and 20.1.x releases | Mellanox NICs | ConnectX-4 25G and ConnectX-4 40G
18.2.x and 20.1.x releases | Mellanox NICs | MCX4121A-ACAT ConnectX-4 Lx EN 25G NICs
20.1.6 and 21.1.1 releases | Mellanox NICs | Mellanox Technologies MT27800 Family [ConnectX-5]
21.1.1 onwards | Mellanox NICs | MLNX_OFED version: MLNX_OFED_LINUX-5.1-2.5.8.0 (OFED-5.1-2.5.8)
18.2.8 and all 20.1.x releases | Broadcom NICs | BCM574XX NetXtreme-E family (firmware version 219.0.111.0/pkg 21.90.13.50)

IPAM/DNS

The following IPAM/DNS providers are supported:

  • NSX Advanced Load Balancer DNS

  • AWS Route 53

  • Infoblox

  • Microsoft Azure

Hardware Security Module (HSM)

The following HSMs are supported:

  • SafeNet Network HSM Client Software Release 5.4.1 for 64-bit Linux

Supported Browsers

The following is the list of browsers supporting the NSX Advanced Load Balancer UI:

Browser | Minimum Version Supported
Google Chrome | 86
Microsoft Edge | 87
Mozilla Firefox | 83
Safari | 14