When a storage policy is not specified in the StorageClass, the vSphere Container Storage Plug-in looks for a shared datastore that is accessible to all nodes in the Kubernetes cluster.
To determine which datastore can be accessed by all nodes, the vSphere Container Storage Plug-in identifies the ESXi hosts where the nodes are placed, and then identifies the datastores that are mounted on those ESXi hosts. The vSphere Container Storage Plug-in supplies this information to the CreateVolume API, which selects the highest-capacity datastore from the supplied candidates for volume provisioning.
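The selection described above can be illustrated with a short sketch. This is not the actual plug-in code; it only models the logic: intersect the datastores mounted on each ESXi host that runs a node, then pick the highest-capacity candidate.

```python
def select_datastore(host_datastores):
    """Illustrative sketch of the datastore selection described above.

    host_datastores maps each ESXi host name to a dict of
    {datastore_name: capacity_in_gb} for the datastores mounted on it.
    """
    # Datastores accessible from every host are the intersection of the
    # per-host datastore sets.
    shared = set.intersection(*(set(ds) for ds in host_datastores.values()))
    if not shared:
        return None  # no datastore is accessible to all nodes
    # Every shared datastore appears in every host's dict, so any host's
    # capacity figures cover all shared candidates.
    capacities = next(iter(host_datastores.values()))
    # CreateVolume selects the candidate with the highest capacity.
    return max(shared, key=lambda name: capacities[name])


hosts = {
    "esxi-01": {"vsanDatastore": 4096, "local-01": 512},
    "esxi-02": {"vsanDatastore": 4096, "local-02": 512},
}
print(select_datastore(hosts))  # the shared vSAN datastore wins
```

In this example, `vsanDatastore` is mounted on both hosts and has the largest capacity, so it is selected; each `local-*` datastore is visible to only one host and is excluded.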
However, if the nodes are placed on only a subset of the ESXi hosts in the vSphere cluster, and that subset of hosts shares some datastores, the volume might get provisioned on one of those datastores. If you later add a new node on an ESXi host that does not have access to that shared datastore, the provisioned volume cannot be used on the newly added node.
This situation also applies to topology-aware setups. For example, when an availability zone has only a single node, the volume might get provisioned on a datastore local to the ESXi host where the node is located. Later, when you add a new node to the availability zone, the volume provisioned on the local datastore cannot be used with the new node.
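For context, a topology-aware setup of the kind described here typically uses a StorageClass that restricts provisioning to a zone through allowedTopologies, as in the following sketch. The topology label key `topology.csi.vmware.com/k8s-zone` and the zone name `zone-a` are assumptions and depend on how topology is configured for the driver in your environment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-a-sc
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.csi.vmware.com/k8s-zone   # assumed topology label key
    values:
    - zone-a                                # placeholder zone name
```

With only one node in the zone, provisioning under such a StorageClass can still land on that node's local datastore, which is the pitfall described above.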
To avoid this situation, use a storage policy in the StorageClass to select a datastore, so that the cluster can be scaled without any issues. For example, if you have several nodes in the vSphere cluster and you want to use a datastore that is accessible to all ESXi hosts in the cluster, define a storage policy that is compliant with that datastore, and then specify the storage policy in the StorageClass when provisioning a volume. As a result, you avoid provisioning the volume on a local datastore or on a datastore shared by only a few ESXi hosts, and you can add new nodes without issues.
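A StorageClass that pins provisioning to a storage policy might look like the following sketch. The policy name `shared-datastore-policy` is a placeholder for a policy you define in vCenter that is compliant with the datastore accessible to all ESXi hosts.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-datastore-sc
provisioner: csi.vsphere.vmware.com
parameters:
  # Placeholder: name of the vCenter storage policy that matches the
  # datastore accessible to all ESXi hosts in the cluster.
  storagepolicyname: "shared-datastore-policy"
```

Volumes provisioned with this StorageClass are placed only on datastores compliant with the named policy, so newly added nodes on any ESXi host with access to that datastore can use them.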