VMware provides the High-Performance Plug-in (HPP) to improve the performance of ultra-fast local flash devices on your ESXi host.

The HPP replaces the NMP (Native Multipathing Plug-in) for high-speed devices, such as NVMe PCIe flash, installed on your host.

The HPP uses a direct I/O submission model, also called fast path, and does not require SATPs (Storage Array Type Plug-ins) or PSPs (Path Selection Plug-ins). The plug-in submits I/Os directly to the local device over a single path. Only single-pathed devices are supported.

The HPP is included with vSphere. The direct submission APIs can also be included in the multipathing plug-ins (MPPs) that third parties provide.
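To see which multipathing plug-ins are registered on a host, you can list the MP-class plug-ins with esxcli. This is a quick check rather than a required step; output formatting can vary between ESXi releases.

  # List the registered multipathing (MP class) plug-ins.
  # HPP and NMP appear here, along with any third-party MPPs installed on the host.
  esxcli storage core plugin list --plugin-class=MP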

Any standalone ESXi hosts that require faster storage performance can benefit from the HPP.

HPP Requirements

The HPP requires the following infrastructure. You can verify these requirements on a host with esxcli, as sketched after this list.
  • You use vSphere 6.7 or later.
  • Your ESXi host uses high-speed local flash devices for storage.
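To confirm that a host meets these requirements, you can check the ESXi version and inspect the local devices. The commands below are a minimal sketch; field names in the output can differ slightly between releases.

  # Confirm the ESXi version (6.7 or later is required for the HPP).
  esxcli system version get

  # List storage devices. HPP candidates are local flash devices, for example
  # entries that report "Is Local: true" and "Is SSD: true".
  esxcli storage core device list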

HPP Limitations

The HPP does not support the following items that the NMP typically supports.
  • Multipathing. The HPP claims the first path to a device and rejects the rest of the paths.
  • Second-level plug-ins, such as PSP and SATP.
  • SCSI-3 persistent reservations or any shared devices.
  • 4Kn devices with software emulation. You cannot use the HPP to claim these devices.

vSAN does not support the HPP.

HPP Best Practices

To achieve the fastest throughput from a high-speed storage device, follow these recommendations.

  • Use a vSphere version that supports the HPP, vSphere 6.7 or later.
  • Use the HPP for high-speed local flash devices.
  • Do not activate the HPP for HDDs, slower flash devices, or remote storage. The HPP is not expected to provide any performance benefit with devices that cannot deliver at least 200,000 IOPS.
  • Because ESXi does not provide built-in claim rules for the HPP, enable the HPP by adding a claim rule with the esxcli command (see the claim rule sketch after this list).
  • Configure your VMs to use VMware Paravirtual controllers. See the vSphere Virtual Machine Administration documentation.
  • Set the latency sensitive threshold for the device so that I/Os can bypass the I/O scheduler (see the threshold sketch after this list).
  • If a single VM drives a significant share of the device's I/O workload, consider spreading the I/O across multiple virtual disks. Attach the disks to separate virtual controllers in the VM.

    Otherwise, I/O throughput might be limited due to saturation of the CPU core responsible for processing I/Os on a particular virtual storage controller.
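The following is a minimal sketch of enabling the HPP with a claim rule. The rule number, vendor string, and model wildcard are placeholder values for illustration; check esxcli storage core claimrule add --help on your build for the exact options. A reboot might be required before the rule applies to devices that are already claimed by another plug-in.

  # Add a claim rule that assigns matching devices to the HPP.
  # Rule number 200, vendor NVMe, and model "*" are illustrative values only.
  esxcli storage core claimrule add --rule 200 --type vendor --vendor NVMe --model "*" --plugin HPP

  # Load the new rule into the runtime configuration and verify it.
  esxcli storage core claimrule load
  esxcli storage core claimrule list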
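For the latency sensitive threshold, the commands below are one possible sketch, assuming the latencythreshold namespace is available in your esxcli build. The device identifier and the threshold value are placeholders.

  # Set the latency sensitive threshold for a device; the value is in milliseconds.
  # The device identifier (naa.xxx) and the value (10) are placeholders.
  esxcli storage core device latencythreshold set --device=naa.xxx --latency-sensitive-threshold=10

  # Verify the configured threshold.
  esxcli storage core device latencythreshold list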

For information about device identifiers for NVMe devices that support only NGUID ID format, see NVMe Devices with NGUID Device Identifiers.