VMware provides the High-Performance Plug-in (HPP) to improve the performance of NVMe devices on your ESXi host.
The HPP replaces the NMP for high-speed devices, such as NVMe. The HPP is the default plug-in that claims NVMe-oF targets. Within ESXi, the NVMe-oF targets are emulated and presented to users as SCSI targets. The HPP supports only active/active and implicit ALUA targets.
For local NVMe devices, NMP remains the default plug-in, but you can replace it with HPP.
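To see which plug-in currently claims a device, you can query the HPP namespace of the esxcli command on the ESXi host. This is a sketch assuming the `esxcli storage hpp` namespace available in ESXi 7.0:

```shell
# Show devices currently claimed by the HPP
esxcli storage hpp device list

# Show the paths claimed by the HPP
esxcli storage hpp path list
```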
| HPP Support | vSphere 7.0 |
|---|---|
| Storage devices | Local NVMe PCIe<br>Shared NVMe-oF (active/active and implicit ALUA targets only) |
| Multipathing | Path Selection Schemes (PSS) |
| SCSI-3 persistent reservations | No |
| 4Kn devices with software emulation | No |
Path Selection Schemes
To support multipathing, the HPP uses Path Selection Schemes (PSS) when selecting physical paths for I/O requests.
You can use the vSphere Client or the esxcli command to change the default path selection mechanism.
ESXi supports the following path selection mechanisms.
- FIXED
With this scheme, a designated preferred path is used for I/O requests. If the preferred path is not assigned, the host selects the first working path discovered at boot time. If the preferred path becomes unavailable, the host selects an alternative available path, and returns to the preferred path when it becomes available again.
When you configure FIXED as a path selection mechanism, select the preferred path.
- LB-RR (Load Balance - Round Robin)
This is the default scheme for the devices claimed by HPP. After transferring a specified number of bytes or I/Os on a current path, the scheme selects the path using the round robin algorithm.
To configure the LB-RR path selection mechanism, specify the following properties:
- IOPS indicates the I/O count on the path to be used as criteria to switch a path for the device.
- Bytes indicates the byte count on the path to be used as criteria to switch a path for the device.
- LB-IOPS (Load Balance - IOPS)
After transferring a specified number of I/Os on the current path (the default is 1000), the system selects an optimal path, which is the path with the least number of outstanding I/Os.
When configuring this mechanism, specify the IOPS parameter to indicate the I/O count on the path to be used as criteria to switch a path for the device.
- LB-BYTES (Load Balance - Bytes)
After transferring a specified number of bytes on the current path (the default is 10 MB), the system selects an optimal path, which is the path with the least number of outstanding bytes.
To configure this mechanism, use the Bytes parameter to indicate the byte count on the path to be used as criteria to switch a path for the device.
- LB-Latency (Load Balance - Latency)
To achieve better load balancing results, the mechanism dynamically selects an optimal path by considering the following path characteristics:
- The Latency evaluation time parameter indicates the interval, in milliseconds, at which the latency of paths is evaluated.
- The Sampling I/Os per path parameter controls how many sample I/Os are issued on each path to calculate the latency of the path.
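The schemes above can be assigned per device with the `esxcli storage hpp device set` command. The following is a sketch assuming the option names available in ESXi 7.0; the device identifier and path name are placeholders:

```shell
# FIXED with a designated preferred path
esxcli storage hpp device set -d eui.0000000000000001 --pss=FIXED --path=vmhba2:C0:T1:L0

# LB-RR, switching paths after 1000 I/Os or 10 MB, whichever comes first
esxcli storage hpp device set -d eui.0000000000000001 --pss=LB-RR --iops=1000 --bytes=10485760

# LB-Latency, evaluating path latency every 30 seconds with 16 sample I/Os per path
esxcli storage hpp device set -d eui.0000000000000001 --pss=LB-Latency \
    --latency-eval-time=30000 --sampling-ios-per-path=16
```

Run `esxcli storage hpp device list` afterward to confirm that the device shows the expected path selection scheme.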
HPP Best Practices
To achieve the fastest throughput from a high-speed storage device, follow these recommendations.
- Use the vSphere version that supports the HPP.
- Use the HPP for NVMe local or networked devices.
- Do not activate the HPP for HDDs or slower flash devices. The HPP is not expected to provide any performance benefits with devices incapable of at least 200,000 IOPS.
- If you use NVMe over Fibre Channel devices, follow general recommendations for Fibre Channel storage. See Using ESXi with Fibre Channel SAN.
- If you use NVMe-oF, do not mix transport types to access the same namespace.
- When using NVMe-oF namespaces, make sure that active paths are presented to the host. The namespaces cannot be registered until the active path is discovered.
- Configure your VMs to use VMware Paravirtual controllers. See the vSphere Virtual Machine Administration documentation.
- Set the latency sensitive threshold for the device, so that I/O can bypass the I/O scheduler.
- If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.
Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for processing I/Os on a particular virtual storage controller.
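The latency sensitive threshold mentioned above can be managed with the `esxcli storage core device latencythreshold` command. A sketch, assuming this namespace as it exists in ESXi 7.0, with a placeholder device identifier:

```shell
# Show the current latency sensitive thresholds for devices on the host
esxcli storage core device latencythreshold list

# Set the threshold for a device, in milliseconds; I/O with expected latency
# below this value can bypass the I/O scheduler
esxcli storage core device latencythreshold set -d eui.0000000000000001 -t 10
```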
For information about device identifiers for NVMe devices that support only NGUID ID format, see NVMe Devices with NGUID Device Identifiers.