ESXi supports Fibre Channel (FC), a storage protocol that the SAN uses to transfer data traffic from hosts to shared storage. This section provides introductory information about how to use ESXi with a Fibre Channel SAN. For more information, check your vendor documentation.

Fibre Channel SAN Concepts

If you are a vSphere administrator planning to set up hosts to work with SANs, you must have a working knowledge of SAN concepts. You can find information about SANs in print and on the Internet. Because this industry changes constantly, check these resources frequently.
Storage Area Network (SAN)
A storage area network (SAN) is a specialized high-speed network that connects host servers to high-performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk arrays.
SAN Fabric
A SAN topology with at least one switch present on the network forms a SAN fabric.
Fibre Channel (FC) Protocol
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC) protocol that packages SCSI or NVMe commands into Fibre Channel frames.
Zoning
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.

Zoning has the following effects:

  • Reduces the number of targets and LUNs presented to a host.
  • Controls and isolates paths in a fabric.
  • Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
  • Can be used to separate different environments, for example, a test from a production environment.

With ESXi hosts, use single-initiator zoning or single-initiator-single-target zoning. The latter is the preferred zoning practice. Using the more restrictive zoning prevents problems and misconfigurations that can occur on the SAN.
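The visibility effect of zoning can be sketched as follows. This is an illustrative model only, not an ESXi or switch API; the zone names and WWPNs are hypothetical.

```python
# Single-initiator-single-target zones: each zone pairs one host HBA (initiator)
# with one storage processor port (target). All names and WWPNs are made up.
zones = {
    "z_esxi01_spA": {"10:00:00:00:c9:00:00:01", "50:06:01:60:00:00:00:0a"},
    "z_esxi02_spB": {"10:00:00:00:c9:00:00:02", "50:06:01:61:00:00:00:0b"},
}

def visible_ports(wwpn, zones):
    """Return the set of other ports this WWPN can see through the fabric.

    Ports outside every zone that contains the WWPN are invisible to it.
    """
    visible = set()
    for members in zones.values():
        if wwpn in members:
            visible |= members - {wwpn}
    return visible
```

With this zoning, the HBA in esxi01 sees only storage processor port A; the port used by esxi02 does not appear to it at all, which is the isolation effect described above.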

For detailed instructions and zoning best practices, contact your storage array or switch vendor.

LUN Masking
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.
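The difference from zoning is that masking is applied per LUN on the array side. A minimal sketch of the idea, with hypothetical host names and LUN IDs (not an array or ESXi API):

```python
# The array keeps a mask list per LUN; a host is only presented the LUNs
# whose mask list includes it. All identifiers are illustrative.
lun_masks = {
    "lun0": {"esxi01", "esxi02"},
    "lun1": {"esxi01"},
}

def presented_luns(host, lun_masks):
    """LUNs the array presents to this host after masking is applied."""
    return {lun for lun, hosts in lun_masks.items() if host in hosts}
```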
Multipathing
When transferring data between the host server and storage, the SAN uses a technique known as multipathing. Multipathing allows you to have more than one physical path from the ESXi host to a LUN on a storage system.
Path Failover
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover.
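The failover logic can be sketched in a few lines. This is an illustrative model only, not the ESXi Pluggable Storage Architecture; the path names follow the familiar vmhba naming style but are hypothetical.

```python
# Each path bundles an HBA, switch ports, cabling, and a storage controller
# port; here that is reduced to a single "healthy" flag for illustration.
paths = [
    {"name": "vmhba1:C0:T0:L0", "healthy": True},
    {"name": "vmhba2:C0:T1:L0", "healthy": True},
]

def select_path(paths, current):
    """Keep using the current path while it is healthy; otherwise fail over
    to the first remaining healthy path."""
    if current["healthy"]:
        return current
    for p in paths:
        if p["healthy"]:
            return p
    raise RuntimeError("all paths to the LUN are down")
```

If a component in the first path fails, the next call selects the surviving path, which is the path failover behavior described above.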

Ports in Fibre Channel SAN

In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component, has one or more ports that connect it to the SAN. Ports are identified in a number of ways.

WWPN (World Wide Port Name)
A globally unique identifier for a port that allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.
Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the port. This unique ID enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs in to the fabric. The port ID is valid only while the device is logged on.

When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the fabric by using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which appears as a unique entity. When ESXi hosts use a SAN, these multiple, unique identifiers allow the assignment of WWNs to individual virtual machines as part of their configuration.
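The two identifier concepts can be illustrated with a short sketch. The format check only enforces the common 8-byte colon-separated WWPN notation, and the way the virtual WWPNs are derived here is purely hypothetical; it is not how ESXi or an HBA generates NPIV identities.

```python
import re

# A WWPN is a 64-bit identifier, commonly written as eight colon-separated
# hex byte pairs, e.g. 10:00:00:00:c9:00:00:01.
WWPN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$")

def is_wwpn(s):
    """Check the common textual WWPN format."""
    return bool(WWPN_RE.match(s.lower()))

def npiv_wwpns(physical_wwpn, count):
    """Illustrative only: derive additional WWPNs that a single N-port could
    register with the fabric under NPIV, one per virtual identity."""
    base = int(physical_wwpn.replace(":", ""), 16)
    out = []
    for i in range(1, count + 1):
        v = f"{base + i:016x}"
        out.append(":".join(v[j:j + 2] for j in range(0, 16, 2)))
    return out
```

Each derived WWPN appears to the fabric as a distinct entity and receives its own port ID at fabric login, which is what allows a WWN to be assigned to an individual virtual machine.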

Fibre Channel Storage Array Types

ESXi supports different storage systems and arrays. They generally fall into these categories.

Active-active storage system
Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active, unless a path fails.
Active-passive storage system
A system in which one storage processor is actively providing access to a given LUN. The other processors act as a backup for that LUN, while they can be actively providing access to I/O for other LUNs. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
Asymmetrical storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. With ALUA, the host can determine the states of target ports and prioritize paths. The host uses some of the active paths as primary, and uses others as secondary.

How Virtual Machines Access Data on a Fibre Channel SAN

ESXi stores a virtual machine's disk files within a VMFS datastore that resides on a SAN storage device. When virtual machine guest operating systems send SCSI or NVMe commands to their virtual disks, the SCSI or NVMe virtualization layer translates these commands to VMFS file operations.

When a virtual machine interacts with its virtual disk stored on a SAN, the following process takes place:

  1. When the guest operating system in a virtual machine reads or writes to a SCSI or NVMe disk, it sends SCSI or NVMe commands to the virtual disk.
  2. Device drivers in the virtual machine’s operating system communicate with the virtual SCSI or NVMe controllers.
  3. The virtual SCSI or NVMe controller forwards the command to the VMkernel.
  4. The VMkernel performs the following tasks.
    1. Locates the appropriate virtual disk file in the VMFS volume.
    2. Maps the requests for the blocks on the virtual disk to blocks on the appropriate physical device.
    3. Sends the modified I/O request from the device driver in the VMkernel to the physical HBA.
  5. The physical HBA performs the following tasks.
    1. Packages the I/O request according to the rules of the FC protocol.
    2. Transmits the request to the SAN.
  6. Depending on the port the HBA uses to connect to the fabric, one of the SAN switches receives the request. The switch routes the request to the appropriate storage device.
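The layering in the steps above can be sketched as a chain of transformations. This is a toy model for illustration only, not VMkernel code; the function names, the block map, and the request shapes are all hypothetical.

```python
def guest_write(vdisk_block):
    """Step 1: the guest OS issues a command against its virtual disk."""
    return {"op": "write", "vblock": vdisk_block}

def vmkernel_map(req, vmfs_map):
    """Step 4: the VMkernel locates the disk file and maps the virtual-disk
    block to a block on the physical device backing the VMFS datastore."""
    return {**req, "pblock": vmfs_map[req["vblock"]]}

def hba_frame(req):
    """Step 5: the physical HBA packages the I/O request into an FC frame
    and transmits it to the SAN."""
    return {"fc_header": "frame", "payload": req}

# Hypothetical mapping of one virtual-disk block to a physical block.
vmfs_map = {7: 4096}
frame = hba_frame(vmkernel_map(guest_write(7), vmfs_map))
```

Each stage sees only the abstraction below it: the guest knows nothing about VMFS, and the VMkernel hands the HBA a request that is already bound to a physical block.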