VMware provides the High-Performance Plug-in (HPP) to improve the performance of storage devices on your ESXi host.
The HPP replaces the NMP for high-speed devices, such as NVMe. The HPP is the default plug-in that claims NVMe-oF targets. ESXi supports an end-to-end NVMe stack without emulation, as well as SCSI-to-NVMe emulation. The HPP supports only active/active and implicit ALUA targets.
HPP Support | vSphere 8.0 Update 3 |
---|---|
Storage devices | Local NVMe and SCSI devices; shared NVMe-oF devices (active/active and implicit ALUA targets only) |
Multipathing | Yes |
Second-level plug-ins | No |
SCSI-3 persistent reservations | No |
4Kn devices with software emulation | Yes |
Path Selection Schemes
To support multipathing, the HPP uses Path Selection Schemes (PSS) to select physical paths for I/O requests.
You can use the vSphere Client or the esxcli command to change the default path selection mechanism.
For information about configuring the path mechanisms in the vSphere Client, see Change the Path Selection Policy. To configure with the esxcli command, see ESXi esxcli HPP Commands.
ESXi supports the following path selection mechanisms.
- FIXED
  With this scheme, a designated preferred path is used for I/O requests. If no preferred path is assigned, the host selects the first working path discovered at boot time. If the preferred path becomes unavailable, the host selects an alternative available path. The host returns to the previously defined preferred path when it becomes available again.
  When you configure FIXED as the path selection mechanism, select the preferred path.
- LB-RR (Load Balance - Round Robin)
  This is the default scheme for devices claimed by the HPP. After transferring a specified number of bytes or I/Os on the current path, the scheme selects the next path using the round-robin algorithm.
  To configure the LB-RR path selection mechanism, specify the following properties:
  - IOPS indicates the I/O count on the path to use as the criterion for switching the path for the device.
  - Bytes indicates the byte count on the path to use as the criterion for switching the path for the device.
- LB-IOPS (Load Balance - IOPS)
  After transferring a specified number of I/Os on the current path (the default is 1000), the system selects an optimal path that has the least number of outstanding I/Os.
  When configuring this mechanism, specify the IOPS parameter to indicate the I/O count on the path to use as the criterion for switching the path for the device.
- LB-BYTES (Load Balance - Bytes)
  After transferring a specified number of bytes on the current path (the default is 10 MB), the system selects an optimal path that has the least number of outstanding bytes.
  To configure this mechanism, use the Bytes parameter to indicate the byte count on the path to use as the criterion for switching the path for the device.
- LB-Latency (Load Balance - Latency)
  To achieve better load balancing results, this mechanism dynamically selects an optimal path by considering the following path characteristics:
  - The Latency evaluation time parameter indicates the interval, in milliseconds, at which the latency of the paths is evaluated.
  - The Sampling I/Os per path parameter controls how many sample I/Os are issued on each path to calculate the latency of the path.
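For example, a minimal sketch of assigning a scheme from the command line, using the esxcli storage hpp device set command described later in ESXi esxcli HPP Commands, might look like the following. The device identifier is hypothetical, the -d option is assumed to identify the device as it does for the other HPP commands, and exact option names can vary by ESXi release.
```
# List devices claimed by the HPP to find the device identifier.
esxcli storage hpp device list

# Assign the LB-Latency scheme to a hypothetical device.
esxcli storage hpp device set -d eui.0000000000000001 -P LB-Latency

# Alternatively, use LB-BYTES and switch paths after about 10 MB transferred on a path.
esxcli storage hpp device set -d eui.0000000000000001 -P LB-BYTES -B 10485760
```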
HPP Best Practices
To achieve the fastest throughput from a high-speed storage device, follow these recommendations.
- Use the vSphere version that supports the HPP.
- Use the HPP for local NVMe and SCSI devices, and NVMe-oF devices.
- If you use NVMe over Fibre Channel devices, follow general recommendations for Fibre Channel storage. See Using ESXi with Fibre Channel SAN.
- If you use NVMe-oF, do not mix transport types to access the same namespace.
- When using NVMe-oF namespaces, make sure that active paths are presented to the host. The namespaces cannot be registered until an active path is discovered.
- When you configure your VMs, you can use VMware Paravirtual controllers or add NVMe controllers. Both types have their advantages and disadvantages. To check which works best for your environment, see SCSI, SATA, and NVMe Storage Controller Conditions, Limitations, and Compatibility in the vSphere Virtual Machine Administration documentation.
- Set the latency sensitive threshold. See Set Latency Sensitive Threshold.
- If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.
Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for processing I/Os on a particular virtual storage controller.
For information about device identifiers for NVMe devices that support only NGUID ID format, see NVMe Devices with NGUID Device Identifiers.
Enable the High-Performance Plug-In and the Path Selection Schemes
The high-performance plug-in (HPP) is the default plug-in that claims local NVMe and SCSI devices, and NVMe-oF targets. If necessary, you can replace it with the NMP. In vSphere 7.0 Update 1 and earlier, the NMP remains the default plug-in for local NVMe and SCSI devices, but you can replace it with the HPP.
Use the esxcli storage core claimrule add command to enable the HPP or NMP on your ESXi host.
To run the esxcli storage core claimrule add command, you can use the ESXi Shell or vSphere CLI. For more information, see Getting Started with ESXCLI and ESXCLI Reference.
Prerequisites
Set up your VMware NVMe storage environment. For more information, see About VMware NVMe Storage.
Procedure
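The claim rules you add depend on your devices and your ESXi release. As a minimal sketch, a rule that assigns the HPP to all devices on the PCIe transport might look like the following; the rule number 429 is a placeholder, and you can instead match on vendor, model, or other criteria supported by your version of ESXCLI.
```
# Add a claim rule that assigns the HPP to all devices that use the PCIe transport.
# The rule number (429) is a placeholder; choose an unused user rule number.
esxcli storage core claimrule add --rule 429 --type transport --transport pcie --plugin HPP

# Load the new rule into the runtime and verify it.
esxcli storage core claimrule load
esxcli storage core claimrule list

# Devices already claimed by another plug-in typically require a host reboot to be reclaimed.
```
To assign the NMP instead, you would add an equivalent rule with --plugin NMP.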
Set Latency Sensitive Threshold
When you use the HPP for your storage devices, set the latency sensitive threshold for the device, so that I/O can avoid the I/O scheduler.
By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might create internal queuing, which is not efficient with high-speed storage devices.
You can configure the latency sensitive threshold and enable the direct submission mechanism that helps I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from PSA through the HPP to the device driver.
For the direct submission to work properly, the observed average I/O latency must be lower than the latency threshold you specify. If the I/O latency exceeds the latency threshold, the system stops the direct submission and temporarily reverts to using the I/O scheduler. The direct submission is resumed when the average I/O latency drops below the latency threshold again.
You can set the latency threshold for a family of devices claimed by HPP. Set the latency threshold using the vendor and model pair, the controller model, or PCIe vendor ID and sub vendor ID pair.
Procedure
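The exact options can vary by ESXi release; as a minimal sketch, assuming a hypothetical device identifier and the esxcli storage core device latencythreshold namespace, setting and verifying the threshold might look like the following. Check the ESXCLI Reference for the options supported in your version.
```
# Set the latency sensitive threshold, in milliseconds, for a hypothetical device.
# Direct submission stays active while the observed average I/O latency remains below this value.
esxcli storage core device latencythreshold set -d eui.0000000000000001 -t 10

# Verify the configured thresholds.
esxcli storage core device latencythreshold list
```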
ESXi esxcli HPP Commands
You can use the ESXi Shell or vSphere CLI commands to configure and monitor the high-performance plug-in.
See Getting Started with ESXCLI for an introduction, and ESXCLI Reference for details about esxcli command usage.
Command | Description | Options |
---|---|---|
esxcli storage hpp path list | List the paths currently claimed by the high-performance plug-in. | -d or --device=device: Display information for a specific device. |
esxcli storage hpp device list | List the devices currently controlled by the high-performance plug-in. | -d or --device=device: Show a specific device. |
esxcli storage hpp device set | Configure settings for an HPP device. | -B or --bytes=long: Maximum bytes on the path, after which the path is switched. -P or --pss=pss_name: The path selection scheme to assign to the device (FIXED, LB-RR, LB-IOPS, LB-BYTES, or LB-Latency). If you do not specify a value, the system selects the default. For descriptions of the path selection schemes, see VMware High Performance Plug-In and Path Selection Schemes. |
esxcli storage hpp device usermarkedssd list | List the devices that were marked or unmarked as SSD by the user. | -d or --device=device: Limit the output to a specific device. |
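As a brief usage sketch with a hypothetical device identifier, the commands above can be combined to inspect what the HPP has claimed:
```
# Show all paths that the HPP claims for a specific device.
esxcli storage hpp path list -d eui.0000000000000001

# Confirm the device settings, including the active path selection scheme.
esxcli storage hpp device list -d eui.0000000000000001

# List devices whose SSD status was marked or unmarked by the user.
esxcli storage hpp device usermarkedssd list
```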