VMware provides the High-Performance Plug-in (HPP) to improve the performance of storage devices on your ESXi host.

The HPP replaces the Native Multipathing Plug-in (NMP) for high-speed devices, such as NVMe. The HPP is the default plug-in that claims NVMe-oF targets. Within ESXi, the NVMe-oF targets are emulated and presented to users as SCSI targets. The HPP supports only active/active and implicit ALUA targets.

In vSphere 7.0 Update 1 and earlier, the NMP is the default plug-in for local NVMe devices, but you can replace it with the HPP. Starting with vSphere 7.0 Update 2, the HPP becomes the default plug-in for local NVMe and SCSI devices, but you can replace it with the NMP.
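
To confirm which plug-in currently claims a device, you can list the devices owned by each plug-in. The following commands are a quick check; the device identifiers in their output are specific to your host:

    # Devices claimed by the HPP
    esxcli storage hpp device list

    # Devices claimed by the NMP
    esxcli storage nmp device list

Moving a device from one plug-in to the other is done through claim rules (esxcli storage core claimrule), and the change typically takes effect only after the device is reclaimed or the host reboots.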

| HPP Support | vSphere 7.0 Update 1 | vSphere 7.0 Update 2 and Update 3 |
| --- | --- | --- |
| Storage devices | Local NVMe PCIe. Shared NVMe-oF (active/active and implicit ALUA targets only). | Local NVMe and SCSI. Shared NVMe-oF (active/active and implicit ALUA targets only). |
| Multipathing | Yes | Yes |
| Second-level plug-ins | No. Uses Path Selection Schemes (PSS) instead. | No. Uses Path Selection Schemes (PSS) instead. |
| SCSI-3 persistent reservations | No | No |
| 4Kn devices with software emulation | No | Yes |

Path Selection Schemes

To support multipathing, the HPP uses Path Selection Schemes (PSS) to select physical paths for I/O requests.

You can use the vSphere Client or the esxcli command to change the default path selection mechanism.

For information about configuring the path mechanisms in the vSphere Client, see Change the Path Selection Policy. To configure with the esxcli command, see ESXi esxcli HPP Commands.
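
For example, a minimal esxcli workflow looks like the following. The device identifier naa.1234 is a placeholder, and you can confirm the exact scheme names that your build accepts with esxcli storage hpp device set --help:

    # List devices claimed by the HPP, including their current path selection schemes
    esxcli storage hpp device list

    # Change the path selection scheme for a device
    esxcli storage hpp device set --device=naa.1234 --pss=LB-Latency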

ESXi supports the following path selection mechanisms.

FIXED
With this scheme, a designated preferred path is used for I/O requests. If no preferred path is assigned, the host selects the first working path discovered at boot time. If the preferred path becomes unavailable, the host selects an alternative available path. The host returns to the previously defined preferred path as soon as it becomes available again.

When you configure FIXED as a path selection mechanism, select the preferred path.
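
For example, the following command sets FIXED and designates a preferred path. The device identifier and the path name are placeholders for values from your own host:

    esxcli storage hpp device set --device=naa.1234 --pss=FIXED --path=vmhba2:C0:T1:L0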

LB-RR (Load Balance - Round Robin)
This is the default scheme for devices claimed by the HPP. After transferring a specified number of bytes or I/Os on the current path, the scheme switches to the next path selected by the round-robin algorithm.
To configure the LB-RR path selection mechanism, specify the following properties:
  • IOPS indicates the I/O count on the path to be used as the criterion for switching paths for the device.
  • Bytes indicates the byte count on the path to be used as the criterion for switching paths for the device.
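
A sketch of the corresponding command, assuming both criteria can be supplied in a single invocation; the device identifier and values are placeholders (10485760 bytes is 10 MB):

    esxcli storage hpp device set --device=naa.1234 --pss=LB-RR --iops=1000 --bytes=10485760
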
LB-IOPS (Load Balance - IOPS)
After transferring a specified number of I/Os on the current path (the default is 1,000), the system selects an optimal path, that is, the path with the least number of outstanding I/Os.

When configuring this mechanism, specify the IOPS parameter, which indicates the I/O count on the path to be used as the criterion for switching paths for the device.
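
A sketch of the corresponding command, assuming the scheme name LB-IOPS is accepted as shown; the device identifier is a placeholder:

    esxcli storage hpp device set --device=naa.1234 --pss=LB-IOPS --iops=1000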

LB-BYTES (Load Balance - Bytes)
After transferring a specified number of bytes on the current path (the default is 10 MB), the system selects an optimal path, that is, the path with the least number of outstanding bytes.

To configure this mechanism, use the Bytes parameter, which indicates the byte count on the path to be used as the criterion for switching paths for the device.
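
A sketch of the corresponding command, assuming the scheme name LB-BYTES is accepted as shown; the device identifier is a placeholder (10485760 bytes is 10 MB):

    esxcli storage hpp device set --device=naa.1234 --pss=LB-BYTES --bytes=10485760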

LB-Latency (Load Balance - Latency)
To achieve better load-balancing results, the mechanism dynamically selects an optimal path by considering the following path characteristics:
  • The latency evaluation time parameter indicates the interval, in milliseconds, at which the latency of the paths must be evaluated.
  • The sampling I/Os per path parameter controls how many sample I/Os must be issued on each path to calculate the latency of the path.
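
A sketch of the corresponding command; the device identifier and both parameter values below are illustrative placeholders, not documented defaults:

    esxcli storage hpp device set --device=naa.1234 --pss=LB-Latency --latency-eval-time=30000 --sampling-ios-per-path=32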

HPP Best Practices

To achieve the fastest throughput from a high-speed storage device, follow these recommendations.

  • Use a vSphere version that supports the HPP.
  • Use the HPP for local NVMe and SCSI devices, as well as for NVMe-oF devices.
  • If you use NVMe over Fibre Channel devices, follow general recommendations for Fibre Channel storage. See Using ESXi with Fibre Channel SAN.
  • If you use NVMe-oF, do not mix transport types to access the same namespace.
  • When using NVMe-oF namespaces, make sure that active paths are presented to the host. The namespaces cannot be registered until the active path is discovered.
  • When you configure your VMs, you can use VMware Paravirtual controllers or add NVMe controllers. Both types have their advantages and disadvantages. To check which works best for your environment, see SCSI, SATA, and NVMe Storage Controller Conditions, Limitations, and Compatibility in the vSphere Virtual Machine Administration documentation.
  • Set the latency sensitive threshold for the device, so that I/Os can bypass the I/O scheduler. See the example after this list.
  • If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.

    Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for processing I/Os on a particular virtual storage controller.
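
As referenced in the best practices above, you can set the latency sensitive threshold with esxcli. A minimal sketch; the device identifier and the threshold value, in milliseconds, are placeholders:

    # Show the current latency sensitive thresholds
    esxcli storage core device latencythreshold list

    # Set the threshold for a device
    esxcli storage core device latencythreshold set -d naa.1234 -t 10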

For information about device identifiers for NVMe devices that support only NGUID ID format, see NVMe Devices with NGUID Device Identifiers.