N-Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel HBA port can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-attached N-port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre Channel fabric. You can configure your vSphere virtual machines to use Fibre Channel NPIV.
How NPIV-Based LUN Access Works
NPIV enables a single FC HBA port to register several unique World Wide Name (WWN) identifiers with the fabric, each of which can be assigned to an individual virtual machine. When using NPIV, a SAN administrator can monitor and route storage access on a per-virtual-machine basis.
Only virtual machines with RDMs can have WWN assignments, and they use these assignments for all RDM traffic.
When a virtual machine has a WWN assigned to it, the virtual machine's configuration file (.vmx) is updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN) and a World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel creates a virtual port (VPORT) on the physical HBA, and the VPORT is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA. As its unique identifier, the VPORT uses the WWN pair that was assigned to the virtual machine.
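For illustration, the NPIV entries in a .vmx file look similar to the following sketch. The key names reflect common vSphere releases and can vary, and the WWN values shown are placeholders, so treat this as an approximation rather than a definitive reference:

```
# Hypothetical NPIV entries in a virtual machine's .vmx file.
# The WWN values below are placeholders, not real assignments.
wwn.node = "28:2a:00:0c:29:00:00:10"
wwn.port = "28:2a:00:0c:29:00:00:11"
wwn.type = "vc"
```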
Each VPORT is specific to the virtual machine. The VPORT is destroyed on the host and no longer appears to the FC fabric when the virtual machine is powered off. When a virtual machine is migrated from one host to another, the VPORT closes on the first host and opens on the destination host.
When virtual machines do not have WWN assignments, they access storage LUNs with the WWNs of their host’s physical HBAs.
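You can confirm whether a virtual machine carries a WWN assignment through the vSphere API. The following pyVmomi sketch assumes a vCenter Server at vcenter.example.com and a virtual machine named npiv-vm01, both placeholders, and prints any assigned WWNN and WWPN values:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your environment's.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Locate the virtual machine by name (npiv-vm01 is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'npiv-vm01')
view.Destroy()

# The API stores WWNs as 64-bit integers; render them as 16 hex digits.
nodes = vm.config.npivNodeWorldWideNames or []
ports = vm.config.npivPortWorldWideNames or []
print('WWNNs:', [format(w, '016x') for w in nodes])
print('WWPNs:', [format(w, '016x') for w in ports])

Disconnect(si)
```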
Requirements for Using NPIV
If you plan to enable NPIV on your virtual machines, you should be aware of certain requirements.
- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host’s physical HBAs.
- HBAs on your host must support NPIV.
For information, see the VMware Compatibility Guide and refer to your vendor documentation. A sketch for listing the host's FC HBAs appears after this list.
- Use HBAs of the same type. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
- If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
- Make sure that physical HBAs on the host can detect all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
- The switches in the fabric must be NPIV-aware.
- When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN number and target ID.
- Zone the NPIV WWPNs so that they connect to all storage systems that the cluster hosts can access, even if a virtual machine does not currently use the storage. If you add new storage systems to a cluster with one or more NPIV-enabled virtual machines, add the new zones so that the NPIV WWPNs can detect the new storage system target ports.
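As referenced in the HBA requirement above, the following pyVmomi sketch lists a host's FC HBAs so that you can check their models against the VMware Compatibility Guide and your vendor documentation. The host name esxi01.example.com and the connection details are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your environment's.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Locate the host by name (esxi01.example.com is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.com')
view.Destroy()

# Report only the Fibre Channel HBAs; NPIV support itself must be
# verified in the VMware Compatibility Guide, not from these fields.
for hba in host.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.FibreChannelHba):
        print(hba.device, hba.model,
              'WWNN:', format(hba.nodeWorldWideName, '016x'),
              'WWPN:', format(hba.portWorldWideName, '016x'))

Disconnect(si)
```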
NPIV Capabilities and Limitations
Learn about the specific capabilities and limitations of using NPIV with ESXi.
- NPIV supports vMotion. When you use vMotion to migrate a virtual machine, it retains the assigned WWN.
If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, the VMkernel reverts to using a physical HBA to route the I/O.
- If your FC SAN environment supports concurrent I/O on the disks from an active-active array, concurrent I/O to two different NPIV ports is also supported.
When you use ESXi with NPIV, the following limitations apply:
- Because the NPIV technology is an extension to the FC protocol, it requires an FC switch and does not work with direct-attached FC storage.
- When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the WWN.
- NPIV does not support Storage vMotion.
- Deactivating and then re-activating the NPIV capability on an FC switch while virtual machines are running can cause an FC link to fail and I/O to stop.
Configure or Modify WWN Assignments
Assign WWN settings to a virtual machine. You can later modify the WWN assignments.
You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical FC HBAs on the host.
Typically, you do not need to change existing WWN assignments on your virtual machine. In certain circumstances, for example, when manually assigned WWNs are causing conflicts on the SAN, you might need to change or remove WWNs.
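Besides the client procedure that follows, WWN assignments can be changed through the vSphere API. The sketch below assumes a pyVmomi vim.VirtualMachine object (vm) located as in the earlier example and powered off; the NPIV fields of VirtualMachineConfigSpec drive the operation:

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def generate_npiv_wwns(vm, node_wwns=1, port_wwns=2):
    """Ask vCenter Server to generate new NPIV WWNs for a powered-off VM.

    Two WWPNs support failover with NPIV; typically one WWNN is
    created for each virtual machine.
    """
    spec = vim.vm.ConfigSpec()
    spec.npivWorldWideNameOp = 'generate'  # other ops: 'set', 'remove', 'extend'
    spec.npivDesiredNodeWwns = node_wwns
    spec.npivDesiredPortWwns = port_wwns
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
```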
Prerequisites
- Before configuring WWNs, ensure that the access control list (ACL) for the storage LUN, configured on the array side, permits access by the ESXi host.
- If you want to edit the existing WWNs, power off the virtual machine.
Procedure
- Right-click the virtual machine in the inventory and select Edit Settings.
- Click VM Options and expand Fibre Channel NPIV.
- Create or edit the WWN assignments by selecting one of the following options:
| Option | Description |
| --- | --- |
| Temporarily disable NPIV for this virtual machine | Deactivate but do not remove the existing WWN assignments for the virtual machine. |
| Leave unchanged | Retain the existing WWN assignments. The read-only WWN assignments section displays the node and port values of any existing WWN assignments. |
| Generate new WWNs | Generate new WWNs, overwriting any existing WWNs. The WWNs of the HBA are not affected. Specify the number of WWNNs and WWPNs. A minimum of two WWPNs are required to support failover with NPIV. Typically only one WWNN is created for each virtual machine. |
| Remove WWN assignment | Remove the WWNs assigned to the virtual machine. The virtual machine uses the HBA WWNs to access the storage LUN. |
- Click OK to save your changes.
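For reference, the dialog options above correspond to fields on the same VirtualMachineConfigSpec used earlier. The following is a hedged sketch of the Remove WWN assignment and Temporarily disable NPIV cases, again assuming a vm object located as in the first example:

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def remove_npiv_wwns(vm):
    # Equivalent of "Remove WWN assignment": the virtual machine
    # falls back to the host HBA WWNs for storage access.
    spec = vim.vm.ConfigSpec()
    spec.npivWorldWideNameOp = 'remove'
    WaitForTask(vm.ReconfigVM_Task(spec=spec))

def disable_npiv_temporarily(vm, disabled=True):
    # Equivalent of "Temporarily disable NPIV for this virtual
    # machine": keeps, but stops using, the WWN assignments.
    spec = vim.vm.ConfigSpec()
    spec.npivTemporaryDisabled = disabled
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
```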