Before you deploy vSphere Container Storage Plug-in, review the prerequisites to ensure that you have set up everything you need for the installation.

vSphere Roles and Privileges

vSphere users for vSphere Container Storage Plug-in require a set of privileges to perform Cloud Native Storage operations.

For information about creating and assigning a role, see the vSphere Security documentation.

You must create the following roles with sets of privileges:

Role Name: CNS-Datastore
Privileges: Datastore > Low level file operations
Description: Allows read, write, delete, and rename operations in the datastore browser.
Required On: Shared datastores where persistent volumes reside.
Note: Versions of vSphere Container Storage Plug-in earlier than 2.2.0 required the Datastore > Low level file operations privilege on all shared datastores. Starting with version 2.2.0, the privilege is no longer required on every shared datastore. During volume provisioning, vSphere Container Storage Plug-in skips shared datastores that do not have this privilege and does not provision volumes on them.

Role Name: CNS-HOST-CONFIG-STORAGE
Privileges: Host > Configuration > Storage partition configuration
Description: Allows vSAN datastore management. Required for file volumes only.
Required On: vSAN clusters with vSAN file service enabled.

Role Name: CNS-VM
Privileges: Virtual machine > Change Configuration > Add existing disk, and Virtual machine > Change Configuration > Add or remove device
Description: Allows adding an existing virtual disk to a virtual machine, and adding or removing any non-disk device.
Required On: All node VMs.

Role Name: CNS-SEARCH-AND-SPBM
Privileges: CNS > Searchable, and VM storage policies > View VM storage policies
Description: Allows a storage administrator to see the Cloud Native Storage UI and view defined storage policies.
Required On: Root vCenter Server.

Role Name: Read-only (default role)
Privileges: None beyond the default Read Only privileges.
Description: Users with the Read Only role can view the state of an object and details about it. For example, users with this role can find the shared datastore accessible to all node VMs.
Required On: All hosts where the node VMs reside, and the data center.
Note: For topology-aware environments, all ancestors of node VMs, such as the host, cluster, folder, and data center, must have the Read-only role assigned to the vSphere user configured for vSphere Container Storage Plug-in. The role is required to read the tags and categories that define the node topology.

You must assign these roles to the vSphere objects that participate in the Cloud Native Storage environment. Make sure to apply the roles whenever a new entity, such as a node VM or a datastore, is added to the vCenter Server inventory for the Kubernetes cluster.
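
As an illustration, you can also assign the roles from the command line with the govc tool. The following sketch uses a hypothetical user k8s-csi@vsphere.local and placeholder inventory paths; ReadOnly is the identifier of the built-in Read-only role:

    # Sketch: assign CNS roles with govc (hypothetical principal and paths)
    export GOVC_URL='https://<VC_Admin_User>:<VC_Admin_Passwd>@<VC_IP>'

    # CNS-Datastore on a shared datastore where persistent volumes reside
    govc permissions.set -principal 'k8s-csi@vsphere.local' -role 'CNS-Datastore' \
      '/<datacenter-name>/datastore/<shared-datastore-name>'

    # CNS-VM on each node VM
    govc permissions.set -principal 'k8s-csi@vsphere.local' -role 'CNS-VM' \
      '/<datacenter-name>/vm/<node-vm-name>'

    # Read-only on the data center; use -propagate to control inheritance
    govc permissions.set -principal 'k8s-csi@vsphere.local' -role 'ReadOnly' \
      '/<datacenter-name>'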

The following sample vSphere inventory illustrates how the roles map to vSphere objects.
sc2-rdops-vm06-dhcp-215-129.eng.vmware.com (vCenter Server)
|
|- datacenter (Data Center)
    |
    |-vSAN-cluster (cluster)
      |
      |-10.192.209.1 (ESXi Host)
      | |
      | |-k8s-control-plane (node-vm)
      |
      |-10.192.211.250 (ESXi Host)
      | |
      | |-k8s-node1 (node-vm)
      |
      |-10.192.217.166 (ESXi Host)
      | |
      | |-k8s-node2 (node-vm)
      |
      |-10.192.218.26 (ESXi Host)
      | |
      | |-k8s-node3 (node-vm)
As an example, assume that each host has the following shared datastores along with some local VMFS datastores.
  • shared-vmfs
  • shared-nfs
  • vsanDatastore
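
You can, for example, confirm which datastores the data center contains with govc; the paths follow the placeholder naming used in this document:

    $ govc ls /<datacenter-name>/datastore
    /<datacenter-name>/datastore/shared-vmfs
    /<datacenter-name>/datastore/shared-nfs
    /<datacenter-name>/datastore/vsanDatastore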
[Screenshots: Read-only, CNS-HOST-CONFIG-STORAGE, CNS-Datastore, CNS-VM, and CNS-SEARCH-AND-SPBM role assignments on the corresponding vSphere objects]

Management Network for vSphere Container Storage Plug-in

By default, the vSphere Cloud Provider Interface and vSphere Container Storage Plug-in pods are scheduled on Kubernetes control plane nodes. For non-topology-aware Kubernetes clusters, it is sufficient to give the control plane nodes access to the vCenter Server that manages the cluster. For topology-aware clusters, every Kubernetes node must discover its own topology by communicating with vCenter Server, which is required for topology-aware provisioning and the late binding feature.

For more information on providing vCenter Server credentials access to Kubernetes nodes, see Deploy vSphere Container Storage Plug-in with Topology.

Configure Kubernetes Cluster VMs

On each node VM that participates in the Kubernetes cluster with vSphere Container Storage Plug-in, you must enable the disk.EnableUUID parameter and perform other configuration steps.

Configure all VMs that form the Kubernetes cluster with vSphere Container Storage Plug-in. You can configure the VMs using the vSphere Client or the govc command-line tool.

Prerequisites

  • Create several VMs for your Kubernetes cluster.
  • On each node VM, install VMware Tools. For more information about installation, see Installing and upgrading VMware Tools in vSphere.
  • Required privilege: Virtual machine > Configuration > Settings.

Procedure

  1. Enable the disk.EnableUUID parameter using the vSphere Client.
    1. In the vSphere Client, right-click the VM and select Edit Settings.
    2. Click the VM Options tab and expand the Advanced menu.
    3. Click Edit Configuration next to Configuration Parameters.
    4. Configure the disk.EnableUUID parameter.
      If the parameter exists, make sure that its value is set to True. If the parameter is not present, add it and set its value to True.
      Name Value
      disk.EnableUUID True
  2. Upgrade the VM hardware version to 15 or higher.
    1. In the vSphere Client, navigate to the virtual machine.
    2. Select Actions > Compatibility > Upgrade VM Compatibility.
    3. Click Yes to confirm the upgrade.
    4. Select a compatibility and click OK.
  3. Add a VMware Paravirtual SCSI storage controller to the VM.
    1. In the vSphere Client, right-click the VM and select Edit Settings.
    2. On the Virtual Hardware tab, click the Add New Device button.
    3. Select SCSI Controller from the drop-down menu.
    4. Expand New SCSI controller and from the Change Type menu, select VMware Paravirtual.
    5. Click OK.

Example

As an alternative, you can configure the VMs using the govc command-line tool.
  1. Install govc on your workstation.
  2. Obtain VM paths.
    $ export GOVC_INSECURE=1
    $ export GOVC_URL='https://<VC_Admin_User>:<VC_Admin_Passwd>@<VC_IP>'

    $ govc ls
    /<datacenter-name>/vm
    /<datacenter-name>/network
    /<datacenter-name>/host
    /<datacenter-name>/datastore

    # To retrieve all node VMs
    $ govc ls /<datacenter-name>/vm
    /<datacenter-name>/vm/<vm-name1>
    /<datacenter-name>/vm/<vm-name2>
    /<datacenter-name>/vm/<vm-name3>
    /<datacenter-name>/vm/<vm-name4>
    /<datacenter-name>/vm/<vm-name5>
  3. To enable disk.EnableUUID, run the following command:
    govc vm.change -vm '/<datacenter-name>/vm/<vm-name1>' -e="disk.enableUUID=1"
  4. To upgrade the VM hardware version to 15 or higher, run the following command:
     govc vm.upgrade -version=15 -vm '/<datacenter-name>/vm/<vm-name1>'
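
If the cluster has many node VMs, you can, for example, apply both settings in a single loop. This is a sketch that assumes every VM under /<datacenter-name>/vm is a Kubernetes node VM:

    # Sketch: apply disk.enableUUID and the hardware version upgrade to every node VM
    for vm in $(govc ls /<datacenter-name>/vm); do
      govc vm.change -vm "$vm" -e="disk.enableUUID=1"
      govc vm.upgrade -version=15 -vm "$vm"
    done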

Install vSphere Cloud Provider Interface

vSphere Container Storage Plug-in requires that you install a Cloud Provider Interface on your Kubernetes cluster in the vSphere environment. Follow this procedure to install the vSphere Cloud Provider Interface (CPI).

Prerequisites

Ensure that you have the following permissions before you install a Cloud Provider Interface on your Kubernetes cluster in the vSphere environment:
  • Read permission on the parent entities of the node VMs such as folder, host, datacenter, datastore folder, and datastore cluster.

Procedure

  1. Before you install CPI, verify that all nodes, including the control plane nodes, are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule.
    To taint nodes, use the kubectl taint node <node-name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule command.
    When the kubelet is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
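    To check the taints on each node, you can, for example, run the following command:
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints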
  2. Identify the Kubernetes version in major.minor form. For example, if the cluster runs version 1.22.x, run the following.
    VERSION=1.22
    wget https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/release-$VERSION/releases/v$VERSION/vsphere-cloud-controller-manager.yaml
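
    You can, for example, derive the value from the running cluster. This sketch assumes jq is installed and strips the + suffix that some distributions append to the minor version:
    VERSION=$(kubectl version -o json | jq -r '"\(.serverVersion.major).\(.serverVersion.minor)"' | tr -d '+')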
    
  3. Create a vsphere-cloud-config ConfigMap that holds the vSphere configuration.
    Note: This is used for CPI. There is a separate secret required for vSphere Container Storage Plug-in.

    Modify the vsphere-cloud-controller-manager.yaml file downloaded in step 2 and update the vCenter Server information. The following excerpt of the file shows placeholder values.

    apiVersion: v1
    kind: Secret
    metadata:
      name: vsphere-cloud-secret
      labels:
        vsphere-cpi-infra: secret
        component: cloud-controller-manager
      namespace: kube-system
      # NOTE: this is just an example configuration, update with real values based on your environment
    stringData:
      10.185.0.89.username: "[email protected]"
      10.185.0.89.password: "Admin!23"
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vsphere-cloud-config
      labels:
        vsphere-cpi-infra: config
        component: cloud-controller-manager
      namespace: kube-system
    data:
      # NOTE: this is just an example configuration, update with real values based on your environment
      vsphere.conf: |
        # Global properties in this section will be used for all specified vCenters unless overridden in the VirtualCenter section.
        global:
          port: 443
          # set insecureFlag to true if the vCenter uses a self-signed cert
          insecureFlag: true
          # settings for using k8s secret
          secretName: vsphere-cloud-secret
          secretNamespace: kube-system
    
        # vcenter section
        vcenter:
          my-vc-name:
            server: 10.185.0.89
            user: [email protected]
            password: Admin!23
            datacenters:
              - VSAN-DC 
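
    To avoid keeping the credentials in the manifest, you can, for example, create the secret separately with kubectl and remove the Secret object from the file. The key names follow the <vcenter-ip>.username and <vcenter-ip>.password pattern shown above:

    kubectl -n kube-system create secret generic vsphere-cloud-secret \
      --from-literal='10.185.0.89.username=administrator@vsphere.local' \
      --from-literal='10.185.0.89.password=Admin!23'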
  4. Apply the release manifest with updated values for the config map.

    This action creates the service account, the secret and ConfigMap, the roles and role bindings, and the vsphere-cloud-controller-manager DaemonSet.

    # kubectl apply -f vsphere-cloud-controller-manager.yaml
    serviceaccount/cloud-controller-manager created
    secret/vsphere-cloud-secret created
    configmap/vsphere-cloud-config created
    rolebinding.rbac.authorization.k8s.io/servicecatalog.k8s.io:apiserver-authentication-reader created
    clusterrolebinding.rbac.authorization.k8s.io/system:cloud-controller-manager created
    clusterrole.rbac.authorization.k8s.io/system:cloud-controller-manager created
    daemonset.apps/vsphere-cloud-controller-manager created
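
    To confirm that the controller pods are running on the control plane nodes, you can, for example, check the DaemonSet rollout:
    kubectl -n kube-system rollout status daemonset vsphere-cloud-controller-manager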
  5. Remove the downloaded vsphere-cloud-controller-manager.yaml file, because it contains the vCenter Server credentials in plain text. Run the following command.
    rm vsphere-cloud-controller-manager.yaml
    Note: You can use an external custom cloud provider as the CPI with vSphere Container Storage Plug-in.

Configure CoreDNS for vSAN File Share Volumes

vSphere Container Storage Plug-in requires a DNS forwarding configuration in the CoreDNS ConfigMap so that vSAN file share host names can be resolved.

Procedure

  • Modify the CoreDNS ConfigMap and add the conditional forwarder configuration.
    kubectl -n kube-system edit configmap coredns

    After you add the conditional forwarder, the ConfigMap looks similar to the following.

     .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
       }
       vsanfs-sh.prv:53 {
          errors
          cache 30
          forward . 10.161.191.241
       }
    In this configuration:
    • vsanfs-sh.prv is the DNS suffix for vSAN file service.
    • 10.161.191.241 is the DNS server that resolves the file share host name.

    You can obtain the DNS suffix and DNS IP address from vCenter Server using the following menu options:

    vSphere Cluster > Configure > vSAN > Services > File Service
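
    To verify the forwarder from inside the cluster, you can, for example, resolve a file share host name from a temporary pod. <file-share-hostname> is a placeholder for an actual file share host name:

    kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
      nslookup <file-share-hostname>.vsanfs-sh.prv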