This NFS design does not provide vendor- or array-specific guidance. Consult your storage vendor for the configuration settings that are appropriate for your storage array.

NFS Storage Concepts

NFS (Network File System) presents file devices to an ESXi host for mounting over a network. The NFS server or array makes its local file systems available to ESXi hosts. The ESXi hosts access the metadata and files on the NFS array or server using an RPC-based protocol. NFS is implemented over a standard NIC that is accessed by using a VMkernel port (vmknic).
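As a minimal sketch of this setup, the following ESXi shell commands create a VMkernel port for NFS traffic and mount an NFS v3 export as a datastore. The portgroup name, IP addresses, share path, and datastore name are hypothetical placeholders, not values from this design.

    # Create a VMkernel port (vmknic) on an existing portgroup for NFS traffic
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS
    # Assign a static IPv4 address to the new vmknic
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.31.101 --netmask=255.255.255.0 --type=static
    # Mount the array's export as an NFS v3 datastore
    esxcli storage nfs add --host=192.168.31.20 --share=/vol/nfs-ds01 --volume-name=nfs-ds01
    # Confirm the datastore is mounted
    esxcli storage nfs list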

NFS Load Balancing

No load balancing is available for NFS/NAS on vSphere because NFS is based on single-session connections. You can aggregate bandwidth by creating multiple paths to the NAS array and accessing some datastores through one path and other datastores through another, as shown in the sketch below. You can also configure NIC teaming so that if one interface fails, another takes its place. However, these techniques protect only against network failure and might not handle error conditions on the NFS array or the NFS server. Your storage vendor is often the best source for the correct configuration and the configuration maximums.
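As a sketch of the multiple-path technique, the following commands mount two datastores through two different array addresses so that their traffic can traverse different physical paths. The addresses, share paths, and datastore names are hypothetical.

    # Datastore 1 is accessed through the first array address
    esxcli storage nfs add --host=192.168.31.20 --share=/vol/nfs-ds01 --volume-name=nfs-ds01
    # Datastore 2 is accessed through a second array address, spreading aggregate bandwidth across paths
    esxcli storage nfs add --host=192.168.32.20 --share=/vol/nfs-ds02 --volume-name=nfs-ds02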

NFS Versions

vSphere is compatible with both NFS version 3 and version 4.1; however, not all features can be enabled when connecting to storage arrays that use NFS v4.1. 
Table 1. Design Decisions on the NFS Version

Decision ID | Design Decision | Design Justification | Design Implication
----------- | --------------- | -------------------- | ------------------
SDDC-VI-Storage-NFS-001 | Use NFS v3 for all NFS datastores. | NFS v4.1 datastores are not supported with Storage I/O Control or with Site Recovery Manager. | NFS v3 does not support Kerberos authentication.
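ESXi exposes the two NFS versions through separate esxcli namespaces. The following sketch shows the NFS v3 mount that matches this design decision, with the NFS v4.1 form included only for contrast; all addresses, share paths, and datastore names are hypothetical.

    # NFS v3 mount, per design decision SDDC-VI-Storage-NFS-001
    esxcli storage nfs add --host=192.168.31.20 --share=/vol/nfs-ds01 --volume-name=nfs-ds01
    # NFS v4.1 mount, shown for contrast only; v4.1 accepts multiple server addresses
    esxcli storage nfs41 add --hosts=192.168.31.20,192.168.31.21 --share=/vol/nfs-ds02 --volume-name=nfs41-ds02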

Storage Access

NFS v3 traffic is transmitted in an unencrypted format across the LAN. Therefore, the best practice is to use NFS storage only on trusted networks and to isolate the traffic on dedicated VLANs.
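For example, on a vSphere Standard Switch you can tag the portgroup that carries NFS traffic with a dedicated VLAN ID. The portgroup name and VLAN ID below are hypothetical.

    # Isolate NFS traffic on a dedicated VLAN
    esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=3100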

Many NFS arrays have built-in security that controls which IP addresses can mount their NFS exports. A best practice is to use this feature to specify which ESXi hosts can mount, and have read/write access to, the volumes that are being exported. Such a configuration prevents unapproved hosts from mounting the NFS datastores.
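Array-side syntax varies by vendor. As a generic illustration only, a Linux-based NFS server defines this restriction in /etc/exports; the host IP addresses and export path below are hypothetical. Note that ESXi accesses NFS datastores as root, so the export must allow root access (no_root_squash on Linux). After editing the file, reload the export table with exportfs -ra.

    # /etc/exports on a generic Linux NFS server (vendor arrays use their own syntax)
    # Only the two listed ESXi hosts can mount this export with read/write access
    /exports/nfs-ds01  172.16.25.101(rw,no_root_squash,sync) 172.16.25.102(rw,no_root_squash,sync)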

Exports

NFS exports are shared directories that sit on top of a storage volume. Exports control access between the endpoints (the ESXi hosts) and the underlying storage system. Multiple exports can exist on a single volume, each with different access controls.
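Continuing the generic Linux illustration from the previous section (vendor syntax varies), two exports on the same volume can carry different access controls; the paths, network, and host addresses are hypothetical.

    # Two exports on the same underlying volume, each with its own access list
    /vol/nfs01/loginsight-archive  192.168.31.0/24(rw,no_root_squash,sync)
    /vol/nfs01/backup              172.16.25.101(rw,no_root_squash,sync) 172.16.25.102(rw,no_root_squash,sync)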

Export | Size per Region
------ | ---------------
vRealize Log Insight Archive | 1 TB

Table 2. Design Decisions on the NFS Exports

Decision ID | Design Decision | Design Justification | Design Implication
----------- | --------------- | -------------------- | ------------------
SDDC-VI-Storage-NFS-002 | Create one export to support the archival functionality of vRealize Log Insight for log persistence. | The storage requirements of these management components are separate from the primary storage. | Dedicated exports can add management overhead for storage administrators, who must create and maintain the export, and maintain access for the vRealize Log Insight nodes if the cluster expands beyond the original design.
SDDC-VI-Storage-NFS-003 | Place the VADP-based backup export on its own volume, as per Design Decisions on Volume Assignment. | Backup activities are I/O intensive. Backup applications can be starved of resources if they are placed on a shared volume. | Dedicated exports can add management overhead to storage administrators.
SDDC-VI-Storage-NFS-004 | For each export, limit access to only the application VMs or hosts that require the ability to mount the storage. | Limiting access helps ensure the security of the underlying data. | Securing exports individually can introduce operational overhead.