In this section, you can learn about Network File System (NFS) node management and how to set up a static persistent volume resource on NFS. This section applies to native clusters only.
VMware Cloud Director Container Service Extension can automatically add NFS nodes to the Kubernetes configuration when creating a new cluster. Cluster administrators can use the NFS nodes to implement static persistent volumes, which allows the deployment of stateful applications.
Static persistent volumes are pre-provisioned by the cluster administrator. They carry the details of the real storage that is available for use by cluster users. They exist in the Kubernetes API and are available for consumption. Users can allocate a static persistent volume by creating a persistent volume claim that requires the same or less storage. VMware Cloud Director Container Service Extension supports static volumes hosted on NFS. For more information, refer to Static persistent volumes.
NFS Volume Architecture
An NFS volume allows an existing NFS share to be mounted into one or more pods. When a pod is removed, the contents of the NFS volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously.
To use NFS volumes, it is necessary to have an NFS server running with the shares exported. VMware Cloud Director Container Service Extension provides commands to add pre-configured NFS servers to any cluster.
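For illustration, the following minimal sketch shows how a pod can mount an NFS share directly; the pod name, server IP, and export path are placeholder values that mirror the examples later in this section. The procedures below use persistent volumes instead, which keep storage details out of pod specifications.

```
# Illustrative sketch only: a pod mounting an NFS export directly,
# without a persistent volume. Server IP and path are placeholders.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: nfs
      mountPath: "/mnt"
  volumes:
  - name: nfs
    nfs:
      server: 10.150.200.22
      path: "/export/vol1"
EOF
```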
In this section, you can learn how to manage NFS persistent volumes, including setting up a cluster with an NFS node, granting a persistent storage claim to an application, checking health, and cleaning up.
Create a Cluster with an Attached NFS Node
Follow these steps to create an Ubuntu-based cluster and provision an attached NFS node.
- In vcd-cli, run the following commands:
```
# Login.
vcd login cse.acme.com devops imanadmin --password='T0pS3cr3t'

# Create cluster with 2 worker nodes and an NFS server node.
vcd cse cluster create mycluster --nodes 2 \
 --network mynetwork -t ubuntu-16.04_k8-1.13_weave-2.3.0 -r 1 --enable-nfs \
 --ssh-key ~/.ssh/id_rsa.pub
```

Note: The `--ssh-key` option ensures nodes are provisioned with the user's SSH key. The SSH key is necessary to log in to the NFS host and set up shares.

Note: This operation takes several minutes to complete while VMware Cloud Director Container Service Extension builds the Kubernetes vApp.
- Optional: You can run the following command to add an NFS node to an existing cluster:
```
# Add an NFS server (node of type NFS).
vcd cse node create mycluster --nodes 1 --network mynetwork \
 -t ubuntu-16.04_k8-1.13_weave-2.3.0 -r 1 --enable-nfs
```
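To confirm that the NFS node was added, you can inspect the cluster. The check below assumes the CSE client plugin for vcd-cli is installed and you are still logged in:

```
# List cluster details; the output includes the NFS node (named 'nfsd-...').
vcd cse cluster info mycluster
```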
Set Up NFS Shares
Follow these steps to create NFS shares that you can allocate through persistent volume resources.
- In vcd-cli, enter the following commands to add an independent disk to the NFS node, which provides the exportable file system:
```
# List the VMs in the vApp to find the NFS node. Look for a VM name that
# starts with 'nfsd-', e.g., 'nfsd-ljsn'. Note the VM name and IP address.
vcd vapp info mycluster

# Create a 100Gb independent disk and attach it to the NFS VM.
vcd disk create nfs-shares-1 100g --description 'Kubernetes NFS shares'
vcd vapp attach mycluster nfsd-ljsn nfs-shares-1
```
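Optionally, you can verify that the independent disk was created, assuming your vcd-cli version provides the disk listing command:

```
# Optional check: list independent disks to confirm nfs-shares-1 exists.
vcd disk list
```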
- Enter the following command to ssh into the NFS node:

```
ssh root@10.150.200.22
... (root prompt appears) ...
```
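Once logged in, you can confirm which device name the new disk received; `lsblk` is a standard Linux utility:

```
# List block devices; the 100GB disk attached above should appear
# as an unpartitioned device, typically /dev/sdb on Ubuntu.
lsblk
```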
- Enter the following commands to partition and format the new disk:
Note: On Ubuntu, the disk appears as `/dev/sdb`.

```
root@nfsd-ljsn:~# parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB
(parted) mkpart primary 0 100
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  100GB  100GB               primary

(parted) quit
root@nfsd-ljsn:~# mkfs -t ext4 /dev/sdb1
Creating filesystem with 24413696 4k blocks and 6111232 inodes
Filesystem UUID: 8622c0f5-4044-4ebf-95a5-0372256b34f0
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
```
- Enter the following command to create a mount point, add the new partition to your list of file systems, and mount it:
```
mkdir /export
echo '/dev/sdb1 /export ext4 defaults 0 0' >> /etc/fstab
mount -a
```
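Optionally, verify the mount before creating shares; `df` is a standard utility:

```
# Confirm the new partition is mounted at /export.
df -h /export
```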
The file system is now mounted under `/export`.
- Enter the following commands to create directories and share them through NFS:
```
cd /export
mkdir vol1 vol2 vol3 vol4 vol5
vi /etc/exports
...Add following at end of file...
/export/vol1 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol2 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol3 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol4 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol5 *(rw,sync,no_root_squash,no_subtree_check)
...Save and quit...
exportfs -r
```
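To confirm that the exports are active, you can query the NFS server locally; `showmount` ships with the NFS server packages on Ubuntu:

```
# List the active exports; all five /export/volN shares should appear.
showmount -e localhost
```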
The exportable NFS shares are prepared and you can log out of the NFS node.
What to do next
Once the exportable NFS shares are prepared, you can continue to create persistent volume resources.
Using Kubernetes Persistent Volumes
To use the NFS shares, you must create persistent volume resources. Complete the following steps.
- In vcd-cli, use the following commands to retrieve the kubeconfig used to access the new Kubernetes cluster:
```
vcd cse cluster config mycluster > mycluster.cfg
export KUBECONFIG=$PWD/mycluster.cfg
```
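At this point you can verify connectivity to the cluster with a standard kubectl check; note that the NFS node is a plain VM in the vApp and is not expected to appear as a Kubernetes node:

```
# Confirm kubectl can reach the cluster; the master and worker
# nodes should be listed.
kubectl get nodes
```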
- Use the following command to create a persistent volume resource for the share on `/export/vol1`:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Same IP as the NFS host we ssh'ed to earlier.
    server: 10.150.200.22
    path: "/export/vol1"
EOF
```

Note: It is necessary for the path name to match the export name to avoid failures when Kubernetes tries to mount the NFS share to a pod.
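You can check that the volume was registered; an unbound static persistent volume reports the Available status:

```
# The new volume should show STATUS 'Available' until a claim binds it.
kubectl get pv nfs-vol1
```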
- Use the following command to create a persistent volume claim that matches the persistent volume size:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
EOF
```
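Kubernetes binds the claim to a matching volume at runtime. Assuming `nfs-vol1` is the only matching volume, you can confirm the binding before launching the application:

```
# The claim should show STATUS 'Bound' and VOLUME 'nfs-vol1'.
kubectl get pvc nfs-pvc
```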
- Launch an application that uses the persistent volume claim.
The following example runs an application in two pods that write to the shared storage:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-pvc
EOF
```
Check the Operational Health of an Application
You can check the operational health of a deployed application and its storage.
- In kubectl, use the following commands to ensure all resources are in good health:
```
kubectl get pv
kubectl get pvc
kubectl get rc
kubectl get pods
```
- Use the following command to check the state of the storage:
```
$ kubectl exec -it nfs-busybox-gcnht cat /mnt/index.html
Fri Dec 28 00:16:08 UTC 2018
nfs-busybox-gcnht
```

This runs a command on one of the pods to check the storage state. In the above command, substitute the correct pod name from the `kubectl get pods` output.

Note: If you run the above command multiple times, the date and host change as the pods write to the index.html file.
Cleaning Up Kubernetes Resources
In this section, you can learn how to clean up Kubernetes resources using kubectl.
- In kubectl, enter the following commands to clean up the Kubernetes resources:
```
kubectl delete rc/nfs-busybox
kubectl delete pvc/nfs-pvc
kubectl delete pv/nfs-vol1
```
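If you also want to reclaim the storage itself, the independent disk can be detached and deleted from vcd-cli. This sketch assumes the disk and VM names used earlier in this section and that your vcd-cli version provides the detach and delete commands:

```
# Optional: remove the backing disk as well. Data on it is lost.
vcd vapp detach mycluster nfsd-ljsn nfs-shares-1
vcd disk delete nfs-shares-1
```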
Frequently Asked Questions
In this section, you can find answers to frequently asked questions about NFS management.
| Question | Answer |
| --- | --- |
| What is the difference between a persistent volume (PV) and a persistent volume claim (PVC)? | A persistent volume is ready-to-use storage space created by the cluster administrator. VMware Cloud Director Container Service Extension currently supports only static persistent volumes. A persistent volume claim is the storage requirement specified by the user. Kubernetes dynamically binds and unbinds PVCs to PVs at runtime. For more information, refer to the Kubernetes website. |
| How are NFS exports mounted to containers? | Once the cluster administrator creates a persistent volume backed by NFS, Kubernetes mounts the specified NFS export to pods and the containers they run. |
| What happens to storage when a Kubernetes application terminates? | Kubernetes returns the persistent volume and its claim to the pool. The data from the application remains on the volume. It can be cleaned up manually by logging in to the NFS node VM and deleting files. |