You must make an NFS or other shared storage volume accessible to all servers in your VMware Cloud Director server group. VMware Cloud Director uses the transfer server storage for appliance cluster management and for providing temporary storage for uploads, downloads, and catalog items that are published or subscribed externally.

Important: The VMware Cloud Director appliance supports only NFS shared storage. The appliance deployment process involves mounting the NFS shared transfer server storage. During deployment, the VMware Cloud Director appliance also validates most details of the NFS share, including directory permissions and ownership. You must verify that a valid NFS mount point exists and is accessible to the VMware Cloud Director appliance instances.
Each member of the server group mounts this volume at the same mount point: /opt/vmware/vcloud-director/data/transfer. Space on this volume is consumed in many ways, including:
  • During transfers, uploads and downloads occupy this storage. When the transfer finishes, the uploads and downloads are removed from the storage. Transfers that make no progress for 60 minutes are marked as expired and cleaned up by the system. Because transferred images can be large, it is a good practice to allocate at least several hundred gigabytes for this use.
  • Catalog items from externally published catalogs with caching of the published content enabled occupy this storage. Items from externally published catalogs without caching do not occupy this storage. If you enable organizations in your cloud to create externally published catalogs, you can assume that hundreds or even thousands of catalog items require space on this volume. The size of each catalog item is about the size of a virtual machine in compressed OVF form.
  • VMware Cloud Director stores the appliance database backups in the pgdb-backup directory in the transfer share. These backup bundles might consume significant space.
  • The multi-cell log bundle collector occupies this space.
  • Appliance node data and the response.properties file occupy this space.
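To see how much of the transfer volume each of these consumers occupies, you can inspect the share from any cell. The following is a minimal sketch using standard Linux tools and the mount point from this section; the exact subdirectory names on your share depend on your deployment.

```shell
# Transfer share mount point used by every cell in the server group.
TRANSFER=/opt/vmware/vcloud-director/data/transfer

# Space consumed by each top-level item on the share
# (for example, pgdb-backup and log bundles), largest first:
du -sh "$TRANSFER"/* 2>/dev/null | sort -rh

# Free space remaining on the transfer volume:
df -h "$TRANSFER"
```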
Note: The volume of the transfer server storage must have the capacity for future expansion.
Note: NFS downtime can cause VMware Cloud Director appliance cluster functionalities to malfunction. The appliance management UI is unresponsive while the NFS is down or cannot be reached. Other functionalities that might be affected are the fencing out of a failed primary cell, switchover, promoting a standby cell, and so on.
Note: If you are using Debian-based Linux distributions for the NFS, the creation of database backups might fail. For more details and a workaround, see KB 94755.

Shared Storage Options

The shared storage can be provided by a traditional Linux-based NFS server or by other solutions, such as Microsoft Windows Server or the VMware vSAN File Service NFS feature. Starting with vSAN 7.0, you can use the vSAN File Service functionality to export NFS shares by using the NFS 3.0 and NFS 4.1 protocols.
Important: If you want to export NFS shares using vSAN, you must use vSAN versions 7.0 U3 and later or 8.0 U1 and later.
For more information about vSAN File Service, see the Administering VMware vSAN guide in the VMware vSphere Product Documentation.

Requirements for Configuring the NFS Server

So that VMware Cloud Director can write files to and read files from an NFS-based transfer server storage location, the NFS server configuration must meet specific requirements. These requirements ensure that the vcloud user can perform the standard cloud operations and that the root user can perform a multi-cell log collection.
  • The export list for the NFS server must grant each server member in your VMware Cloud Director server group read-write access to the shared location identified in the export list. This capability allows the vcloud user to write files to and read files from the shared location.
  • The NFS server must allow read-write access to the shared location by the root system account on each server in your VMware Cloud Director server group. This capability allows for collecting the logs from all cells at once in a single bundle using the vmware-vcd-support script with its multi-cell options. You can meet this requirement by using no_root_squash in the NFS export configuration for this shared location.

Linux NFS Server Example

For example, suppose the Linux NFS server exports a directory named vCDspace, located at /nfs/vCDspace, as the transfer space for the VMware Cloud Director server group. To export this directory, you must ensure that its ownership is root:root and its permissions are 750. To allow read-write access for three cells named vCD-Cell1-IP, vCD-Cell2-IP, and vCD-Cell3-IP by using the no_root_squash method, add the following lines to the /etc/exports file.
/nfs/vCDspace vCD_Cell1_IP_Address(rw,sync,no_subtree_check,no_root_squash) 
/nfs/vCDspace vCD_Cell2_IP_Address(rw,sync,no_subtree_check,no_root_squash)
/nfs/vCDspace vCD_Cell3_IP_Address(rw,sync,no_subtree_check,no_root_squash)

There must be no space between each cell IP address and the left parenthesis that follows it in the export line. The sync option in the export configuration prevents data corruption in the shared location if the NFS server reboots while the cells are writing data. The no_subtree_check option improves reliability when a subdirectory of a file system is exported.

For each server in the VMware Cloud Director server group, you must have a corresponding entry in the NFS server's /etc/exports file so that they can all mount this NFS share. After changing the /etc/exports file on the NFS server, run exportfs -a to re-export all NFS shares.
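Put together, the server-side setup can be sketched as follows. The directory path and the cell address placeholders are the example values from this section; substitute your own cell IP addresses.

```shell
# Run on the NFS server as root.
# Create the transfer space with the required ownership and permissions:
mkdir -p /nfs/vCDspace
chown root:root /nfs/vCDspace
chmod 750 /nfs/vCDspace

# Add one export entry per cell (replace the placeholders with real IPs):
cat >> /etc/exports <<'EOF'
/nfs/vCDspace vCD_Cell1_IP_Address(rw,sync,no_subtree_check,no_root_squash)
/nfs/vCDspace vCD_Cell2_IP_Address(rw,sync,no_subtree_check,no_root_squash)
/nfs/vCDspace vCD_Cell3_IP_Address(rw,sync,no_subtree_check,no_root_squash)
EOF

# Re-export all NFS shares so the new entries take effect:
exportfs -a
```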

What to do next

As part of your VMware Cloud Director appliance deployment preparation, you might also want to see Install and Configure NSX for VMware Cloud Director or Install and Configure NSX Data Center for vSphere for VMware Cloud Director.