VMware provides the High-Performance Plug-in (HPP) to improve the performance of storage devices on your ESXi host.

The HPP replaces the Native Multipathing Plug-in (NMP) for high-speed devices, such as NVMe. The HPP is the default plug-in that claims NVMe-oF targets. ESXi supports both end-to-end NVMe without emulation and SCSI to NVMe emulation. The HPP supports only active/active and implicit ALUA targets.
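To see which devices and paths the HPP currently claims on a host, you can use the esxcli commands covered later in this topic. This is a minimal check from the ESXi Shell; the exact output fields can vary by release.

# List devices currently controlled by the HPP
esxcli storage hpp device list

# List the paths that the HPP claims for those devices
esxcli storage hpp path list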

HPP Support                              vSphere 8.0 Update 3
-----------------------------------      -------------------------------------------------------------
Storage devices                          Local NVMe and SCSI
                                         Shared NVMe-oF (active/active and implicit ALUA targets only)
Multipathing                             Yes
Second-level plug-ins                    No
SCSI-3 persistent reservations           No
4Kn devices with software emulation      Yes

Path Selection Schemes

To support multipathing, the HPP uses the Path Selection Schemes (PSS) when selecting physical paths for I/O requests.

You can use the vSphere Client or the esxcli command to change the default path selection mechanism.

For information about configuring the path mechanisms in the vSphere Client, see Change the Path Selection Policy. To configure with the esxcli command, see ESXi esxcli HPP Commands.

ESXi supports the following path selection mechanisms.

FIXED
With this scheme, a designated preferred path is used for I/O requests. If the preferred path is not assigned, the host selects the first working path discovered at boot time. If the preferred path becomes unavailable, the host selects an alternative available path. The host returns to the previously defined preferred path when it becomes available again.

When you configure FIXED as a path selection mechanism, select the preferred path.
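For example, the following sketch assigns the FIXED scheme to a single device and designates its preferred path with the esxcli storage hpp device set command described later in this topic. The device UID and path name are placeholders; substitute values reported by your host.

# Assign FIXED and a preferred path to one HPP-claimed device (identifiers are placeholders)
esxcli storage hpp device set --device=eui.0000000000000001 --pss=FIXED --path=vmhba2:C0:T0:L0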

LB-RR (Load Balance - Round Robin)
This is the default scheme for devices claimed by the HPP. After transferring a specified number of bytes or I/Os on the current path, the scheme selects the next path using the round robin algorithm.
To configure the LB-RR path selection mechanism, specify the following properties, as shown in the example after this list:
  • IOPS indicates the I/O count on the path to be used as criteria to switch a path for the device.
  • Bytes indicates the byte count on the path to be used as criteria to switch a path for the device.
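As a hedged sketch, the following command keeps the LB-RR scheme but sets explicit switching criteria through the --iops and --bytes suboptions listed in ESXi esxcli HPP Commands. The device UID and values are placeholders.

# Switch the path after 1000 I/Os or 10485760 bytes on the current path (device UID is a placeholder)
esxcli storage hpp device set --device=eui.0000000000000001 --pss=LB-RR --iops=1000 --bytes=10485760
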
LB-IOPS (Load Balance - IOPS)
After transferring a specified number of I/Os on the current path (the default is 1000), the system selects an optimal path that has the least number of outstanding I/Os.

When configuring this mechanism, specify the IOPS parameter to indicate the I/O count on the path to be used as criteria to switch a path for the device.

LB-BYTES (Load Balance - Bytes)
After transferring a specified number of bytes on the current path (the default is 10 MB), the system selects an optimal path that has the least number of outstanding bytes.

To configure this mechanism, use the Bytes parameter to indicate the byte count on the path to be used as criteria to switch a path for the device.
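For example, the following sketches set each of these schemes on a device and pass the corresponding switching criterion, using the scheme names and suboptions from the command reference later in this topic. The device UID is a placeholder and the values are illustrative.

# LB-IOPs: reevaluate the path after every 1000 I/Os on the current path
esxcli storage hpp device set --device=eui.0000000000000001 --pss=LB-IOPs --iops=1000

# LB-Bytes: reevaluate the path after every 10485760 bytes (10 MB) on the current path
esxcli storage hpp device set --device=eui.0000000000000001 --pss=LB-Bytes --bytes=10485760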

LB-Latency (Load Balance - Latency)
To achieve better load balancing results, the mechanism dynamically selects an optimal path by considering the following path characteristics (an example follows the list):
  • The latency evaluation time parameter indicates the interval, in milliseconds, at which the latency of paths is evaluated.
  • The sampling I/Os per path parameter controls how many sample I/Os are issued on each path to calculate the latency of the path.
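As a hedged sketch, the following command assigns LB-Latency to a device and sets both parameters through the suboptions listed in ESXi esxcli HPP Commands. The device UID and the values are illustrative.

# Evaluate path latency every 30000 ms using 16 sample I/Os per path (identifiers and values are illustrative)
esxcli storage hpp device set --device=eui.0000000000000001 --pss=LB-Latency --latency-eval-time=30000 --sampling-ios-per-path=16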

HPP Best Practices

To achieve the fastest throughput from a high-speed storage device, follow these recommendations.

  • Use the vSphere version that supports the HPP.
  • Use the HPP for local NVMe and SCSI devices, and NVMe-oF devices.
  • If you use NVMe over Fibre Channel devices, follow general recommendations for Fibre Channel storage. See Using ESXi with Fibre Channel SAN.
  • If you use NVMe-oF, do not mix transport types to access the same namespace.
  • When using NVMe-oF namespaces, make sure that active paths are presented to the host. The namespaces cannot be registered until the active path is discovered.
  • When you configure your VMs, you can use VMware Paravirtual controllers or add NVMe controllers. Both types have their advantages and disadvantages. To check which works best for your environment, see SCSI, SATA, and NVMe Storage Controller Conditions, Limitations, and Compatibility in the vSphere Virtual Machine Administration documentation.
  • Set the latency sensitive threshold. See Set Latency Sensitive Threshold.
  • If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.

    Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for processing I/Os on a particular virtual storage controller.

For information about device identifiers for NVMe devices that support only NGUID ID format, see NVMe Devices with NGUID Device Identifiers.

Enable the High-Performance Plug-In and the Path Selection Schemes

The high-performance plug-in (HPP) is the default plug-in that claims local NVMe and SCSI devices, and NVMe-oF targets. If necessary, you can replace it with NMP. In vSphere version 7.0 Update 1 and earlier, NMP remains the default plug-in for local NVMe and SCSI devices, but you can replace it with HPP.

Use the esxcli storage core claimrule add command to enable the HPP or NMP on your ESXi host.

To run the esxcli storage core claimrule add command, you can use the ESXi Shell or vSphere CLI. For more information, see Getting Started with ESXCLI and ESXCLI Reference.

Examples in this topic demonstrate how to enable HPP and set up the path selection schemes (PSS).
Note: Enabling the HPP is not supported on PXE booted ESXi hosts.

Prerequisites

Set up your VMware NVMe storage environment. For more information, see About VMware NVMe Storage.

Procedure

  1. Create an HPP claim rule by running the esxcli storage core claimrule add command.
    Use one of the following methods to add the claim rule.
    Method: Based on the NVMe controller model
    Description: esxcli storage core claimrule add --type vendor --nvme-controller-model
    For example, esxcli storage core claimrule add --rule 429 --type vendor --nvme-controller-model "ABCD*" --plugin HPP

    Method: Based on the PCI vendor ID and subvendor ID
    Description: esxcli storage core claimrule add --type vendor --pci-vendor-id --pci-sub-vendor-id
    For example, esxcli storage core claimrule add --rule 429 --type vendor --pci-vendor-id 8086 --pci-sub-vendor-id 8086 --plugin HPP
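
    After you add the rule, you can optionally verify that it exists. This is a minimal check; a rule added in this step typically appears with the class file until the host is rebooted in step 3.

    # List claim rules and confirm that the new HPP rule is present
    esxcli storage core claimrule list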

  2. Configure the PSS.
    Use one of the following methods.
    Method: Set the PSS based on the device ID
    Description: esxcli storage hpp device set
    For example, esxcli storage hpp device set --device=device --pss=FIXED --path=preferred path

    Method: Set the PSS based on the vendor/model
    Description: Use the --config-string option with the esxcli storage core claimrule add command.
    For example, esxcli storage core claimrule add -r 914 -t vendor -V vendor -M model -P HPP --config-string "pss=LB-Latency,latency-eval-time=40000"
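
    To confirm that a per-device setting took effect, you can display the device and check the path selection scheme that the HPP reports, for example:

    # Show the HPP settings for a specific device, including its path selection scheme
    esxcli storage hpp device list --device=device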

  3. Reboot your host for your changes to take effect.

Set Latency Sensitive Threshold

When you use the HPP for your storage devices, set the latency sensitive threshold for the device, so that I/O can avoid the I/O scheduler.

By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might create internal queuing, which is not efficient with high-speed storage devices.

You can configure the latency sensitive threshold and enable the direct submission mechanism that helps I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from PSA through the HPP to the device driver.

For the direct submission to work properly, the observed average I/O latency must be lower than the latency threshold you specify. If the I/O latency exceeds the latency threshold, the system stops the direct submission and temporarily reverts to using the I/O scheduler. The direct submission is resumed when the average I/O latency drops below the latency threshold again.

You can set the latency threshold for a family of devices claimed by the HPP. Set the latency threshold by using the vendor and model pair, the controller model, or the PCIe vendor ID and subvendor ID pair.

Procedure

  1. Set the latency sensitive threshold for the device by running the following command:
    esxcli storage core device latencythreshold set -t value in milliseconds

    Use one of the following options.

    Option: Vendor/model
    Example: Set the latency sensitive threshold parameter for all devices with the indicated vendor and model:
    esxcli storage core device latencythreshold set -v 'vendor1' -m 'model1' -t 10

    Option: NVMe controller model
    Example: Set the latency sensitive threshold for all NVMe devices with the indicated controller model:
    esxcli storage core device latencythreshold set -c 'controller_model1' -t 10

    Option: PCIe vendor/subvendor ID
    Example: Set the latency sensitive threshold for devices with 0x8086 as the PCIe vendor ID and 0x8086 as the PCIe subvendor ID:
    esxcli storage core device latencythreshold set -p '8086' -s '8086' -t 10
  2. Verify that the latency threshold is set:
    esxcli storage core device latencythreshold list
    Device                Latency Sensitive Threshold
    --------------------  ---------------------------
    naa.55cd2e404c1728aa               0 milliseconds
    naa.500056b34036cdfd               0 milliseconds
    naa.55cd2e404c172bd6              50 milliseconds
    
  3. Monitor the status of the latency sensitive threshold. Check VMkernel logs for the following entries (see the example after this list):
    • Latency Sensitive Gatekeeper turned on for device device. Threshold of XX msec is larger than max completion time of YYY msec
    • Latency Sensitive Gatekeeper turned off for device device. Threshold of XX msec is exceeded by command completed in YYY msec
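
    As one way to check for these entries, you can search the VMkernel log from the ESXi Shell. This is a minimal sketch that assumes the default log location /var/log/vmkernel.log.

    # Search the VMkernel log for latency sensitive threshold messages
    grep -i "Latency Sensitive Gatekeeper" /var/log/vmkernel.log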

ESXi esxcli HPP Commands

You can use the ESXi Shell or vSphere CLI commands to configure and monitor the high-performance plug-in.

See Getting Started with ESXCLI for an introduction, and ESXCLI Reference for details about esxcli command use.

Command: esxcli storage hpp path list
Description: List the paths currently claimed by the high-performance plug-in.
Options:
  -d|--device=device Display information for a specific device.
  -p|--path=path Limit the output to a specific path.

Command: esxcli storage hpp device list
Description: List the devices currently controlled by the high-performance plug-in.
Options:
  -d|--device=device Show a specific device.

Command: esxcli storage hpp device set
Description: Configure settings for an HPP device.
Options:
  -B|--bytes=long Maximum bytes on the path, after which the path is switched.
  --cfg-file Update the configuration file and runtime with the new setting. If the device is claimed by another PSS, ignore any errors when applying to runtime configuration.
  -d|--device=device The HPP device upon which to operate. Use any of the UIDs that the device reports. Required.
  -I|--iops=long Maximum IOPS on the path, after which the path is switched.
  -T|--latency-eval-time=long Control at what interval, in ms, the latency of paths must be evaluated.
  -L|--mark-device-local=bool Set HPP to treat the device as local or not.
  -M|--mark-device-ssd=bool Specify whether or not the HPP treats the device as an SSD.
  -p|--path=str The path to set as the preferred path for the device.
  -P|--pss=pss_name The path selection scheme to assign to the device. If you do not specify the value, the system selects the default. For the description of path selection schemes, see VMware High Performance Plug-In and Path Selection Schemes. Options include:
    • FIXED
      Use the -p|--path=str suboption to set the preferred path.
    • LB-Bytes
      Use the -B|--bytes=long suboption to specify the input.
    • LB-IOPs
      Use the -I|--iops=long suboption to specify the input.
    • LB-Latency
      Suboptions include -T|--latency-eval-time=long and -S|--sampling-ios-per-path=long.
    • LB-RR (default)
      Suboptions include -B|--bytes=long and -I|--iops=long.
  -S|--sampling-ios-per-path=long Control how many sample I/Os must be issued on each path to calculate latency of the path.
  -U|--use-ano=bool Set the option to true to include non-optimized paths in the set of active paths used to issue I/Os on this device. Otherwise, set the option to false.

Command: esxcli storage hpp device usermarkedssd list
Description: List the devices that were marked or unmarked as SSD by user.
Options:
  -d|--device=device Limit the output to a specific device.
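
The following sketch strings several of these commands together as a quick review of HPP state on a host. The device UID is a placeholder; substitute a value reported by your system.

# Devices and paths currently claimed by the HPP
esxcli storage hpp device list
esxcli storage hpp path list

# Assign LB-Latency to one device and include non-optimized (ANO) paths in the active set
esxcli storage hpp device set -d eui.0000000000000001 -P LB-Latency -T 40000 -S 16 -U true

# Devices explicitly marked or unmarked as SSD by the user
esxcli storage hpp device usermarkedssd list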