When you use NFS storage with ESXi, follow specific guidelines related to NFS server configuration, networking, NFS datastores, and so on.

NFS Server Configuration

When you configure NFS servers to work with ESXi, follow the recommendations of your storage vendor. In addition to these general recommendations, use the specific guidelines that apply to NFS in a vSphere environment.

The guidelines include the following items.

  • Make sure that the NAS servers you use are listed in the VMware HCL. Use the correct version for the server firmware.
  • Ensure that the NFS volume is exported using NFS over TCP.
  • Make sure that the NAS server exports a particular share as either NFS 3 or NFS 4.1. The NAS server must not provide both protocol versions for the same share. The NAS server must enforce this policy because ESXi does not prevent mounting the same share through different NFS versions.
  • NFS 3 and non-Kerberos (AUTH_SYS) NFS 4.1 do not support the delegate user functionality that enables access to NFS volumes using nonroot credentials. If you use NFS 3 or non-Kerberos NFS 4.1, ensure that each host has root access to the volume. Different storage vendors have different methods of enabling this functionality, but typically the NAS servers use the no_root_squash option. If the NAS server does not grant root access, you can still mount the NFS datastore on the host. However, you cannot create any virtual machines on the datastore. An example export appears after this list.
  • If the underlying NFS volume is read-only, make sure that the volume is exported as a read-only share by the NFS server. Or mount the volume as a read-only datastore on the ESXi host. Otherwise, the host considers the datastore to be read-write and might not be able to open the files.
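
The exact export configuration depends on your NAS server. As an illustration only, on a generic Linux NFS server an export that follows these guidelines might look like the following, where the paths and host names are hypothetical:

  # /etc/exports (illustrative paths and host names)
  # rw             : read-write export (use "ro" for a read-only volume)
  # no_root_squash : grants the ESXi hosts root access to the volume
  /exports/vmware-ds1  esxi01.example.com(rw,no_root_squash,sync)
  /exports/vmware-ds1  esxi02.example.com(rw,no_root_squash,sync)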

NFS Networking

An ESXi host uses a TCP/IP network connection to access a remote NAS server. Certain guidelines and best practices exist for configuring the networking when you use NFS storage.

For more information, see the vSphere Networking documentation.

  • For network connectivity, use a standard network adapter in your ESXi host.
  • ESXi supports Layer 2 and Layer 3 network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets and the network switch must handle the routing information.
  • Configure a VMkernel port group for NFS storage. You can create the VMkernel port group for IP storage on an existing virtual switch (vSwitch) or on a new vSwitch. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS).
  • If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches and physical switches.
  • NFS 3 and NFS 4.1 support IPv6.
  • You can configure NFS storage with multiple connections by using the nconnect option. For NFS 4.1, you can create multiple connections per session. For NFS 3, you can mount the datastore with multiple connections. By default, the maximum is 4 connections per NFS datastore. You can increase the maximum to 8 by using the advanced NFS option. Ensure that the total number of connections across all mounted NFS datastores does not exceed 256. See Configure Multiple TCP Connections for NFS.
  • You can isolate NFS traffic to specific VMkernel adapters. Without binding, if the VMkernel adapter that ESXi uses for NFS traffic fails, the network infrastructure redirects the traffic to an alternative route. As a result, the NFS traffic might unintentionally flow through a random VMkernel adapter. VMkernel port binding for the NFS datastore allows you to bind an NFS volume to a specific VMkernel adapter to connect to an NFS server. See Configure VMkernel Binding for NFS Datastore.
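
For reference, you can mount NFS datastores from the ESXi Shell with esxcli. The server, share, and datastore names below are illustrative, and the option that controls the number of connections varies by release, so check the command help on your host:

  # Mount an NFS 3 datastore (server, share, and datastore names are illustrative):
  esxcli storage nfs add --host=nas01.example.com --share=/vol/ds1 --volume-name=nfs-ds1

  # Mount an NFS 4.1 datastore:
  esxcli storage nfs41 add --hosts=nas01.example.com --share=/vol/ds1 --volume-name=nfs41-ds1

  # Releases that support nconnect accept an option that sets the number of
  # connections on these commands; run "esxcli storage nfs add --help" to see
  # the exact flag for your release.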

NFS File Locking

File locking mechanisms restrict access to data stored on a server to a single user or process at a time. The locking mechanisms of the two NFS versions are not compatible: NFS 3 uses proprietary locking, and NFS 4.1 uses the locking that the protocol natively specifies.

NFS 3 locking on ESXi does not use the Network Lock Manager (NLM) protocol. Instead, VMware provides its own locking protocol. NFS 3 locks are implemented by creating lock files on the NFS server. Lock files are named .lck-file_id.
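
For example, while a virtual machine's files are in use, a directory listing on the NFS server might show a lock file of this form. The path and the file ID are illustrative:

  ls -a /exports/vmware-ds1/vm01
  .  ..  vm01.vmx  vm01.vmdk  vm01-flat.vmdk  .lck-3f0a220500000000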

NFS 4.1 uses share reservations as a locking mechanism.

Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use different NFS versions to mount the same datastore on multiple hosts. Accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption.

NFS Security

With NFS 3 and NFS 4.1, ESXi supports AUTH_SYS security. In addition, for NFS 4.1, the Kerberos security mechanism is supported.

NFS 3 supports the AUTH_SYS security mechanism. With this mechanism, storage traffic is transmitted in an unencrypted format across the LAN. Because of this limited security, use NFS storage on trusted networks only and isolate the traffic on separate physical switches. You can also use a private VLAN.

NFS 4.1 supports the Kerberos authentication protocol to secure communications with the NFS server. Nonroot users can access files when Kerberos is used. For more information, see Using Kerberos for NFS 4.1 with ESXi.

In addition to Kerberos, NFS 4.1 supports traditional non-Kerberos mounts with AUTH_SYS security. In this case, follow the root access guidelines for NFS version 3.
Note: You cannot use two security mechanisms, AUTH_SYS and Kerberos, for the same NFS 4.1 datastore shared by multiple hosts.
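
As an illustration, a Kerberos-secured NFS 4.1 mount from the ESXi Shell might look like the following. The server and share names are hypothetical, and the security option (--sec here) and its accepted values can vary by release, so verify them with the command help:

  # Mount an NFS 4.1 datastore with Kerberos security (names illustrative):
  esxcli storage nfs41 add --hosts=nas01.example.com --share=/vol/secure-ds --volume-name=nfs41-krb --sec=SEC_KRB5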

NFS Multipathing

NFS 4.1 supports multipathing as specified by the protocol. For NFS 3, multipathing is not applicable.

NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, each datastore using separate network connections between the host and the storage.

NFS 4.1 provides multipathing for servers that support session trunking. When trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported.
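
For example, to use session trunking when you mount an NFS 4.1 datastore from the ESXi Shell, supply a comma-separated list of the server's IP addresses. The addresses here are illustrative:

  # Mount an NFS 4.1 volume through two server IP addresses:
  esxcli storage nfs41 add --hosts=192.0.2.10,192.0.2.20 --share=/vol/ds1 --volume-name=nfs41-mp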

NFS and Hardware Acceleration

Virtual disks created on NFS datastores are thin-provisioned by default. To be able to create thick-provisioned virtual disks, you must use hardware acceleration that supports the Reserve Space operation.

NFS 3 and NFS 4.1 support hardware acceleration that allows your host to integrate with NAS devices and use several hardware operations that NAS storage provides. For more information, see vSphere Hardware Acceleration on NAS Devices.
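
You can check whether a mounted NFS datastore reports hardware acceleration support from the ESXi Shell. The output below is illustrative, and the exact columns vary by release:

  esxcli storage nfs list
  Volume Name  Host               Share     Accessible  Mounted  Read-Only  Hardware Acceleration
  -----------  -----------------  --------  ----------  -------  ---------  ---------------------
  nfs-ds1      nas01.example.com  /vol/ds1  true        true     false      Supported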

NFS Datastores

When you create an NFS datastore, make sure to follow specific guidelines.

The NFS datastore guidelines and best practices include the following items. To create an NFS datastore, see Create an NFS Datastore in vSphere Environment.
  • You cannot use different NFS versions to mount the same datastore on different hosts. NFS 3 and NFS 4.1 clients are not compatible and do not use the same locking protocol. As a result, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption.
  • NFS 3 and NFS 4.1 datastores can coexist on the same host.
  • ESXi cannot automatically upgrade NFS version 3 to version 4.1, but you can use other conversion methods. For information, see NFS Upgrades.
  • When you mount the same NFS 3 volume on different hosts, make sure that the server and folder names are identical across the hosts. If the names do not match, the hosts see the same NFS version 3 volume as two different datastores. This error might result in a failure of such features as vMotion. An example of such discrepancy is entering filer as the server name on one host and filer.domain.com on the other. This guideline does not apply to NFS version 4.1. See the example after this list.
  • If you use non-ASCII characters to name datastores and virtual machines, make sure that the underlying NFS server offers internationalization support. If the server does not support international characters, use only ASCII characters, or unpredictable failures might occur.
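
For example, to avoid the NFS 3 naming mismatch described above, use the identical server name and folder when you mount the volume on every host. The names below are illustrative:

  # Run the same command, with the same server name and share, on every host:
  esxcli storage nfs add --host=filer.example.com --share=/vol/ds1 --volume-name=nfs-ds1
  # Entering "filer" on one host and "filer.example.com" on another makes the
  # hosts see two different datastores, which can break vMotion.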

Using Layer 3 Routed Connections to Access NFS Storage with ESXi

When you use Layer 3 (L3) routed connections to access NFS storage, consider certain requirements and restrictions.

Ensure that your environment meets the following requirements:
  • Use Cisco's Hot Standby Router Protocol (HSRP) on the IP router. If you are using a non-Cisco router, use Virtual Router Redundancy Protocol (VRRP) instead.
  • To prioritize NFS L3 traffic on networks with limited bandwidths, or on networks that experience congestion, use Quality of Service (QoS). See your router documentation for details.
  • Follow the routed NFS L3 recommendations offered by your storage vendor. Contact your storage vendor for details.
  • Deactivate Network I/O Resource Management (NetIORM).
  • If you are planning to use systems with top-of-rack switches or switch-dependent I/O device partitioning, contact your system vendor for compatibility and support.
In an L3 environment, the following restrictions apply:
  • The environment does not support VMware Site Recovery Manager.
  • The environment supports only the NFS protocol. Do not use other storage protocols such as FCoE over the same physical network.
  • The NFS traffic in this environment does not support IPv6.
  • The NFS traffic in this environment can be routed only over a LAN. Other environments such as WAN are not supported.
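
For example, a static route lets VMkernel traffic reach an NFS server on another subnet through an L3 gateway. The addresses are illustrative:

  # Add a static route to the NFS server's subnet:
  esxcli network ip route ipv4 add --network=192.0.2.0/24 --gateway=10.0.0.1
  # Verify the route:
  esxcli network ip route ipv4 list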

Firewall Configurations for NFS Storage with ESXi

Learn about the firewall rule sets, nfsClient and nfs41client, that ESXi configures when you mount NFS version 3 or 4.1 datastores.

For general information about firewall configurations, see Configuring the ESXi Firewall in the vSphere Security documentation.

NFS Client Firewall Behavior

The NFS Client firewall rule set behaves differently than other ESXi firewall rule sets. ESXi configures NFS Client settings when you mount or unmount an NFS datastore. When you add, mount, or unmount an NFS datastore, the resulting behavior depends on the version of NFS.

NFS v3 Firewall Behavior

When you add or mount an NFS v3 datastore, ESXi checks the state of the NFS Client (nfsClient) firewall rule set.

  • If the nfsClient rule set is deactivated, ESXi activates the rule set and deactivates the Allow All IP Addresses policy by setting the allowedAll flag to FALSE. The IP address of the NFS server is added to the allowed list of outgoing IP addresses.
  • If the nfsClient rule set is activated, the state of the rule set and the allowed IP address policy are not changed. The IP address of the NFS server is added to the allowed list of outgoing IP addresses.
Note: If you manually activate the nfsClient rule set or manually set the Allow All IP Addresses policy, either before or after you add an NFS v3 datastore to the system, your settings are overridden when the last NFS v3 datastore is unmounted. The nfsClient rule set is deactivated when all NFS v3 datastores are unmounted.

When you remove or unmount an NFS v3 datastore, ESXi performs one of the following actions.

  • If none of the remaining NFS v3 datastores are mounted from the server of the datastore being unmounted, ESXi removes the server's IP address from the list of outgoing IP addresses.
  • If no mounted NFS v3 datastores remain after the unmount operation, ESXi deactivates the nfsClient firewall rule set.

NFS v4.1 Firewall Behavior

When you mount the first NFS v4.1 datastore, ESXi activates the nfs41client rule set and sets its allowedAll flag to TRUE. This action opens port 2049 for all IP addresses. Unmounting an NFS v4.1 datastore does not affect the firewall state. That is, the first NFS v4.1 mount opens port 2049 and that port remains activated unless you close it explicitly.
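
You can inspect the state of both rule sets from the ESXi Shell. For example:

  # Check whether the NFS client rule sets are enabled:
  esxcli network firewall ruleset list --ruleset-id=nfsClient
  esxcli network firewall ruleset list --ruleset-id=nfs41client

  # For NFS v3, list the NFS server IPs in the allowed list:
  esxcli network firewall ruleset allowedip list --ruleset-id=nfsClient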

Verify Firewall Ports for NFS Clients

To enable access to NFS storage, ESXi automatically opens firewall ports for the NFS clients when you mount an NFS datastore. For troubleshooting reasons, you might need to verify that the ports are open.

Procedure

  1. In the vSphere Client, navigate to the ESXi host.
  2. Click the Configure tab.
  3. Under System, click Firewall, and click Edit.
  4. Scroll down to the appropriate version of NFS to make sure that the port is open.
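
Alternatively, you can run an equivalent check from the ESXi Shell:

  # List the NFS-related rule sets and whether they are enabled:
  esxcli network firewall ruleset list | grep -i nfs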