When you design the physical NFS configuration, consider the disk type and size, the networking between the storage array and the ESXi hosts, and the volume layout for the data you plan to store.

This NFS design is not tied to a specific vendor or array model. Consult your storage vendor for the configuration settings appropriate for your storage array.

NFS Load Balancing

No load balancing is available for NFS/NAS on vSphere because it is based on single-session connections. You can aggregate bandwidth by creating multiple paths to the NAS array, and by accessing some datastores over one path and other datastores over another. You can configure NIC teaming so that if one interface fails, another takes its place. However, these techniques protect only against network failures; they cannot handle error conditions on the NFS array or on the NFS server. The storage vendor is often the source for correct configuration and configuration maximums.
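For example, you can distribute datastores across two target addresses on the NAS array so that their traffic takes different network paths. A sketch using ESXCLI; the addresses, export paths, and datastore names are examples only:

```
# Mount two NFS v3 datastores over different target IP addresses on the
# NAS array so that their traffic can use different network paths.
esxcli storage nfs add -H 172.16.20.10 -s /vol/mgmt_ds01 -v nfs-mgmt-ds01
esxcli storage nfs add -H 172.16.20.11 -s /vol/mgmt_ds02 -v nfs-mgmt-ds02

# Verify the mounted NFS datastores.
esxcli storage nfs list
```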

NFS Versions

vSphere supports both version 3 and version 4.1 of NFS. However, VMware Cloud Foundation supports only NFS version 3, because some vSphere features, such as Storage I/O Control (SIOC), are not available when connecting to storage arrays over NFS version 4.1.
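On ESXi, the NFS version is selected at mount time through separate ESXCLI namespaces. A sketch, with hypothetical addresses and share names:

```
# NFS version 3 mount (the version used in this design):
esxcli storage nfs add -H 172.16.20.10 -s /vol/mgmt_ds01 -v nfs-mgmt-ds01

# For comparison, NFS 4.1 uses its own namespace and can list multiple
# server addresses, but it is not supported in this design:
# esxcli storage nfs41 add -H 172.16.20.10,172.16.20.11 -s /vol/ds02 -v nfs-ds02
```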

Table 1. Design Decisions on the NFS Version

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | Use NFS version 3 for all NFS datastores. | You cannot use Storage I/O Control with NFS version 4.1 datastores. | NFS version 3 does not support Kerberos authentication. |

NFS Physical Requirements

To use NFS storage in the management domain, your environment must meet certain requirements for networking and bus technology.

  • All connections must use 10 Gbps Ethernet at minimum, with 25 Gbps Ethernet recommended.

  • Jumbo frames must be enabled on the entire network path between the ESXi hosts and the storage array.

  • The storage array must use 10K SAS drives or faster.

  • You can combine different disk speeds and disk types in an array to create different performance and capacity tiers. The management cluster uses 10K SAS drives in the RAID configuration recommended by the array vendor to achieve the required capacity and performance.
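Jumbo frames must be configured consistently on every hop. On the ESXi side, a sketch using ESXCLI, assuming a standard switch named vSwitch0 and a VMkernel adapter vmk1 for NFS traffic; the physical switches and the array ports must also be set to MTU 9000:

```
# Raise the MTU on the virtual switch and on the NFS VMkernel adapter.
# vSwitch0, vmk1, and the array address are examples only.
esxcli network vswitch standard set -v vSwitch0 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify that 9000-byte frames pass end to end without fragmentation
# (8972 bytes of payload plus ICMP and IP headers equals 9000 bytes).
vmkping -I vmk1 -d -s 8972 172.16.20.10
```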

Table 2. Design Decisions on NFS Hardware

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | Consider 10K SAS drives a baseline performance requirement; greater performance might be needed according to the scale and growth profile of the environment. Consider the number and performance of the disks backing supplemental storage NFS volumes. | 10K SAS drives provide a balance between performance and capacity. You can use faster drives. vStorage API for Data Protection-based backups require high-performance datastores to meet backup SLAs. | 10K SAS drives are more expensive than other alternatives. |

vSphere Storage APIs - Array Integration

The VMware vSphere Storage APIs for Array Integration (VAAI) include a set of ESXCLI commands for enabling communication between ESXi hosts and storage devices. Using these APIs has several advantages.

The APIs define a set of storage primitives that enable the ESXi host to offload certain storage operations to the array for hardware acceleration. Offloading the operations reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning, zeroing, and so on. The goal of hardware acceleration is to help storage vendors provide hardware assistance to speed up VMware I/O operations that are more efficiently accomplished in the storage hardware. These operations consume fewer CPU cycles and less bandwidth on the storage fabric.
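Because VAAI over NAS depends on a vendor plug-in, unlike the built-in block primitives, you can verify the plug-in from the ESXi command line. A sketch, assuming the vendor ships the plug-in as an offline bundle; the bundle path and name are examples only:

```
# Install the array vendor's VAAI-NAS plug-in (path is illustrative):
esxcli software vib install -d /vmfs/volumes/datastore1/VendorNasPlugin.zip

# After the required reboot, confirm that the plug-in VIB is present:
esxcli software vib list
```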

Table 3. Design Decision on the Integration of vStorage APIs for Array

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | Select an array that supports vStorage APIs for Array Integration (VAAI) over NAS (NFS). | VAAI offloads tasks to the array itself, enabling the ESXi hypervisor to use its resources for application workloads and not become a bottleneck in the storage subsystem. VAAI is required to support the target number of virtual machine life cycle operations in this design. | Not all arrays support VAAI over NFS. For the arrays that do, to enable VAAI over NFS, you must install a plug-in from the array vendor. |

NFS Volumes

Select a volume configuration for NFS storage in the management domain according to the requirements of the management applications that use the storage.

  • You can create multiple datastores on a single volume for applications that do not have a high I/O footprint.

  • For high I/O applications, such as backup applications, use a dedicated volume to avoid performance issues.

  • For other applications, set up Storage I/O Control (SIOC) to limit the storage use by high I/O applications so that other applications get the I/O capacity they require.

Table 4. Design Decisions on NFS Volume Assignment

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | Use a dedicated NFS volume to support image-level backup requirements. | The backup and restore process is I/O intensive. Using a dedicated NFS volume ensures that the process does not impact the performance of other management components. | Dedicated volumes add management overhead to storage administrators. Dedicated volumes might use more disks, according to the array and type of RAID. |
| | Use a shared volume for other management component datastores. | Non-backup related management applications can share a common volume because of the lower I/O profile of these applications. | Enough storage space for shared volumes and their associated application data must be available. |

NFS Exports

All NFS exports are shared directories that sit on top of a storage volume. These exports control access between the ESXi host endpoints and the underlying storage system. Multiple exports can exist on a single volume, each with different access controls.

Table 5. Design Decisions on NFS Exports

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | For each export, limit access to only the application virtual machines or ESXi hosts that must mount the storage. | Limiting access helps ensure the security of the underlying data. | Securing exports individually can introduce operational overhead. |
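On the storage system side, these per-export access limits translate into export rules that list only the hosts that must mount each volume. A sketch in /etc/exports syntax for a Linux-based NFS server; the volume paths and addresses are examples only:

```
# ESXi mounts NFS v3 exports as root, so no_root_squash is required.
# Shared volume: reachable by the whole ESXi management subnet.
/vol/mgmt_shared       172.16.20.0/24(rw,sync,no_root_squash)
# Dedicated backup volume: limited to two specific ESXi hosts.
/vol/backup_dedicated  172.16.20.11(rw,sync,no_root_squash) 172.16.20.12(rw,sync,no_root_squash)
```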