When you design the physical NFS configuration, consider disk type and size, networking between the storage and the ESXi hosts, and volume layout according to the data you plan to store.
NFS Load Balancing
No native load balancing is available for NFS/NAS on vSphere because NFS version 3 uses a single TCP session per datastore connection. You can aggregate bandwidth by creating multiple paths to the NAS array and accessing some datastores over one path and other datastores over another. You can also configure NIC teaming so that if one interface fails, another takes its place. However, these techniques provide resilience only against network failures; they might not handle error conditions on the NFS array or the NFS server. Consult the storage vendor for the correct configuration and configuration maximums.
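As a sketch of the multiple-path approach, you can mount different datastores against different array interfaces so that each datastore's traffic follows its own path. The IP addresses, export paths, and datastore names below are placeholders:

```shell
# Mount one datastore over the array's first interface (all values are examples)
esxcli storage nfs add -H 192.168.20.10 -s /vol/mgmt-nfs-01 -v mgmt-nfs-01

# Mount a second datastore over the array's second interface
esxcli storage nfs add -H 192.168.20.11 -s /vol/mgmt-nfs-02 -v mgmt-nfs-02
```

This aggregates bandwidth across array interfaces even though no per-datastore load balancing occurs.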
NFS Versions
vSphere supports both version 3 and version 4.1 of NFS. However, VMware Cloud Foundation supports only version 3, because some vSphere features, such as Storage I/O Control (SIOC), are not available when connecting to storage arrays that use NFS version 4.1.
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NFS-CFG-002 | Use NFS version 3 for all NFS datastores. | You cannot use Storage I/O Control with NFS version 4.1 datastores. | NFS version 3 does not support Kerberos authentication.
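Because the design mandates NFS version 3, mount datastores with the NFS 3 ESXCLI namespace (`esxcli storage nfs`) rather than the 4.1 namespace (`esxcli storage nfs41`). The server name, export path, and datastore name below are placeholders:

```shell
# Mount an NFS version 3 datastore (server, share, and name are examples)
esxcli storage nfs add -H nfs-array.example.com -s /vol/mgmt-nfs-01 -v mgmt-nfs-01

# List NFS v3 mounts to verify the datastore is connected
esxcli storage nfs list
```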
NFS Physical Requirements
To use NFS storage in the management domain, your environment must meet certain requirements for networking and bus technology.
- All connections must be on 10 Gbps Ethernet at minimum, with 25 Gbps Ethernet recommended.
- Jumbo frames are enabled.
- 10K SAS drives or faster are used in the storage array.
- You can combine different disk speeds and disk types in an array to create different performance and capacity tiers. The management cluster uses 10K SAS drives in the RAID configuration recommended by the array vendor to achieve the required capacity and performance.
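Jumbo frames must be enabled end to end: on the physical switches, the virtual switch, and the VMkernel interface that carries NFS traffic. A minimal ESXCLI sketch, assuming a standard vSwitch named `vSwitch0` and a VMkernel interface `vmk2` (both placeholders):

```shell
# Raise the MTU on the virtual switch (switch name is an example)
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Raise the MTU on the VMkernel interface used for NFS
esxcli network ip interface set -i vmk2 -m 9000

# Verify jumbo frames end to end: -d disables fragmentation,
# -s 8972 accounts for ICMP and IP header overhead within a 9000-byte MTU
vmkping -d -s 8972 192.168.20.10
```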
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NFS-CFG-003 | | 10K SAS drives provide a balance between performance and capacity. You can use faster drives. vStorage API for Data Protection-based backups require high-performance datastores to meet backup SLAs. | 10K SAS drives are more expensive than other alternatives.
vSphere Storage APIs - Array Integration
The VMware vSphere Storage APIs for Array Integration (VAAI) enable communication between ESXi hosts and storage devices, and include a set of ESXCLI commands for managing the feature. Using these APIs has several advantages.
The APIs define a set of storage primitives that enable the ESXi host to offload certain storage operations to the array for hardware acceleration. Offloading the operations reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning and zeroing. Hardware acceleration enables storage vendors to assist with VMware I/O operations that are accomplished more efficiently in the storage hardware. These operations consume fewer CPU cycles and less bandwidth on the storage fabric.
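To confirm that hardware acceleration is active on a host, you can query it with ESXCLI. For NFS datastores, VAAI-NAS support is reported per datastore once the array vendor's plug-in is installed; the commands below are a sketch and produce host-specific output:

```shell
# Show VAAI status (clone, zero, ATS, delete) for block devices
esxcli storage core device vaai status get

# For NFS datastores, the Hardware Acceleration column reports VAAI-NAS support
esxcli storage nfs list
```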
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NFS-CFG-004 | Select an array that supports vStorage APIs for Array Integration (VAAI) over NAS (NFS). | | Not all arrays support VAAI over NFS. For the arrays that support VAAI, to enable VAAI over NFS, you must install a plug-in from the array vendor.
NFS Volumes
Select a volume configuration for NFS storage in the management domain according to the requirements of the management applications that use the storage.
- Multiple datastores can be created on a single volume for applications that do not have a high I/O footprint.
- For high I/O applications, such as backup applications, use a dedicated volume to avoid performance issues.
- For other applications, set up Storage I/O Control (SIOC) to limit the storage use by high I/O applications so that other applications get the I/O capacity they request.
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NFS-CFG-005 | Use a dedicated NFS volume to support image-level backup requirements. | The backup and restore process is I/O intensive. Using a dedicated NFS volume ensures that the process does not impact the performance of other management components. | Dedicated volumes add management overhead to storage administrators. Dedicated volumes might use more disks, depending on the array and type of RAID.
VCF-MGMT-NFS-CFG-006 | Use a shared volume for other management component datastores. | Non-backup-related management applications can share a common volume because of the lower I/O profile of these applications. | Enough storage space for shared volumes and their associated application data must be available.
NFS Exports
Decision ID | Design Decision | Design Justification | Design Implication
---|---|---|---
VCF-MGMT-NFS-CFG-007 | For each export, limit access to only the application virtual machines or hosts that require the ability to mount the storage. | Limiting access helps ensure the security of the underlying data. | Securing exports individually can introduce operational overhead.
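On a generic Linux-based NFS server, limiting an export to the subnet of the ESXi hosts might look like the following; storage arrays use their own export syntax, and the path and subnet here are placeholders:

```shell
# /etc/exports entry restricting the export to the NFS VMkernel subnet.
# ESXi mounts NFS v3 datastores as root, so no_root_squash is required.
/vol/mgmt-nfs-01  192.168.20.0/24(rw,no_root_squash,sync)
```

After editing the file, re-export with `exportfs -ra` on the server. On an enterprise array, apply the equivalent restriction through the vendor's export policy mechanism.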