This topic provides instructions to deploy the Kubernetes cluster with vSAN enabled.

Note: If the deployment host does not have Internet connectivity to download the required files from VMware Customer Connect, follow the steps described in the DarkSite Deployment section to get the files onto the deployment host. Then proceed with the remaining steps in this section.

Procedure

  1. Log in to the deployment host.
  2. Download the Deployment Container onto the deployment host. The container has the tools and utilities needed for the Kubernetes deployment. Pull the Deployment Container from the publicly accessible VMware registry projects.registry.vmware.com/tcx/ by running the following command with the specific tag.
    $ docker pull projects.registry.vmware.com/tcx/deployment:2.4.1-167
    Alternatively, and recommended, pull the container using its SHA256 digest as shown in the following command. Log in to the VMware Distribution Harbor, navigate to the tcx project, then to tcx/deployment, and copy the docker pull command.
    $ docker pull projects.registry.vmware.com/tcx/deployment@sha256:<SHA-DIGEST>
    Note: To verify the downloaded package, run the following command on your deployment host. Compare the SHA256 fingerprint in the DIGEST column with the digest in the previous pull command and ensure that they match. You can ignore the created date shown for the container image, which is typically an older date; it has no functional impact.
    $ docker images --digests
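    The output is similar to the following illustrative example; the digest, image ID, and size values here are placeholders, and your values will differ:
    REPOSITORY                                    TAG         DIGEST                IMAGE ID       CREATED         SIZE
    projects.registry.vmware.com/tcx/deployment   2.4.1-167   sha256:<SHA-DIGEST>   0123456789ab   18 months ago   1.2GB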
  3. Download the K8s Installer from VMware Customer Connect onto the deployment host under the home directory. Typically, this package is named VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-K8s-Installer-2.0.6-418.tar.gz.
    Note: To verify the downloaded package, run the following command on your deployment host.
    $ sha256sum VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
    This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file on the VMware Customer Connect download site and ensure that they match.
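    The output has the following form, with the fingerprint followed by the file name (the fingerprint shown here is a placeholder):
    <SHA256-fingerprint>  VMware-K8s-Installer-2.0.6-418.tar.gz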
  4. Extract the K8s Installer as follows. This creates a folder called k8s-installer under the home directory.
    $ tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
    Note: Always extract the K8s Installer in the /root directory.
  5. Navigate to the k8s-installer directory and verify that it contains two directories named scripts and cluster. An example listing is shown after the following note.
    Note: By default, the Kubernetes install logs are stored under $HOME/k8s-installer/ansible.log. If you want to change the log location, then update the log_path variable in the file $HOME/k8s-installer/scripts/ansible/ansible.cfg.
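    For example, you can confirm the directory layout as follows (the listing is illustrative; verify that cluster and scripts are present):
    $ cd $HOME/k8s-installer
    $ ls
    cluster  scripts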
  6. Launch the Deployment Container as follows:
    docker run \
                 --rm \
                 -v $HOME:/root \
                 -v $HOME/.ssh:/root/.ssh \
                 -v $HOME/.kube:/root/.kube \
                 -v /var/run/docker.sock:/var/run/docker.sock \
                 -v $(which docker):/usr/local/bin/docker:ro \
                 -v $HOME/.docker:/root/.docker:ro \
                 -v /etc/docker:/etc/docker:rw \
                 -v /opt:/opt \
                 --network host \
                 -it projects.registry.vmware.com/tcx/deployment:2.4.1-167 \
                 bash
  7. Update the deployment parameters by editing the /root/k8s-installer/scripts/ansible/vars.yml file inside the Deployment Container.
    1. Configure the general parameters.
      Note: Set the values according to your environment.
      cluster_name: <your-cluster-name>  # Unique name for your cluster
      ansible_user: <your-SSH-username>    # SSH username for ClusterNode VMs
      ansible_become_password: <your-password>  # SSH password for ClusterNode VMs

      Update the admin_public_keys_path with the path of the public key generated during SSH key generation. If you still need to generate the key pair, see the example after the worker_node_ips list below.

      admin_public_keys_path: /root/.ssh/id_rsa.pub # Path to the SSH public key. This will be a .pub file under $HOME/.ssh/
      Update the control_plane_ips and worker_node_ips as specified in the following format.
      Note: For a Production footprint, refer to the VMware Telco Cloud Service Assurance Sizing Sheet to get the number of Control Node and Worker Node VMs.
      control_plane_ips: # The list of control plane IP addresses of your VMs. This should be a YAML list.
       - <IP1>
       - <IP2>
       - <IP3>
      worker_node_ips: # The list of worker node IP addresses of your VMs. This should be a YAML list.
       - <IP4>
       - <IPn>
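      If you have not yet generated the key pair referenced by admin_public_keys_path, a typical command is shown below. This is a minimal sketch that assumes an RSA key stored at the default path /root/.ssh/id_rsa; follow your SSH key generation procedure if it differs.
      ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa   # The matching public key is written to /root/.ssh/id_rsa.pub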
    2. Update the Deployment Host IP and the YUM server details.
      ## Deployment host IP address
      ## Make sure firewall is disabled in deployment host
      # The IP address of your deployment host
      deployment_host_ip: <your-deployment-host-ip>
      ## default value is http. Use https for secure communication.
      yum_protocol: http
      # The IP address/hostname of your yum/package repository
      yum_server: <your-yum-server-ip>
    3. For the Harbor Container Registry, uncomment and update the harbor_registry_ip parameter with the selected static IP address. The free static IP must be in the same subnet as the management IPs of the Cluster Nodes. An optional availability check is sketched after the following parameter block.
      ### Harbor parameters ###
      ## The static IP address to be used for Harbor Container Registry
      ## This IP address must be in the same subnet as the VM IPs.
      harbor_registry_ip: <static-IPAddress>
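      As a quick, optional sanity check (a sketch only, not a definitive test of availability), you can confirm from the deployment host that nothing currently answers on the chosen static IP before assigning it to Harbor:
      ping -c 3 <static-IPAddress>   # No replies are expected if the IP address is free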
    4. Set the following parameter to a location that has sufficient storage space for storing all application data. In the following example, ensure that the /mnt file system has at least 200 GB of storage space and 744 permissions; commands to check both are shown after the parameter.
      storage_dir: /mnt
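      You can verify the available space and the permissions of the chosen directory with standard commands, for example:
      df -h /mnt             # Confirm at least 200 GB of free space
      stat -c '%a %U' /mnt   # Confirm the permissions (744) and the owner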
    5. For storage related parameters, uncomment and set the following parameters to true.
      ### Storage related parameters ###
      use_external_storage: true
      install_vsphere_csi: true
    6. If you are using a vSAN or ESXi SAN VMFS datastore, uncomment and update the following VMware vCenter parameters.
      Note: If you do not want to provide the VMware vCenter password in plain text, you can comment out the vcenter_password: <your-vCenter-password> line. You will then be prompted for the vCenter password during the Kubernetes cluster creation.
      ### vCenter parameters for using vSAN storage ###
      vcenter_ip: <your-vCenter-IP>
      vcenter_name: <your-vCenter-name>
      vcenter_username: <your-vCenter-username>
      vcenter_password: <your-vCenter-password>
      ## List of data centers that are part of your vSAN cluster
      vcenter_data_centers: 
       - <your-datacenter>  
       vcenter_insecure: true # True, if using self-signed certificates
      ## The datastore URL. To locate, go to your vCenter -> datastores -> your vSAN datastore -> Summary -> URL
      datastore_url: <your-datastore-url>
      Note: Ensure that the storage size of the vSAN datastore is as per the sizing sheet guidelines for Production deployment.
    Here is a sample snippet of the vars.yml file:
    cluster_name: vmbased-oracle-prod-vsan
    ansible_user: root
    ansible_become_password: dangerous
    admin_public_keys_path: /root/.ssh/id_rsa.pub
    control_plane_ips:
      - 10.220.143.240
      - 10.220.143.248
      - 10.220.143.221
    worker_node_ips:
      - 10.220.143.163
      - 10.220.143.245
      - 10.220.143.182
      - 10.220.143.113
      - 10.220.143.37
      - 10.220.143.203
      - 10.220.143.108
      - 10.220.143.132
      - 10.220.143.56
    ## Deployment host IP address
    ## Make sure firewall is disabled in deployment host
    deployment_host_ip: 10.1.1.1
    ## default value is http. Use https for secure communication.
    yum_protocol: http
    ## IP address/hostname of yum/package repo
    yum_server: 10.198.x.x
    ### Harbor parameters ###
    ## (Optional) The IP address to be used for the Harbor container registry, if static IPs are available.
    ## This IP address must be in the same subnet as the VM IPs.
    harbor_registry_ip: 10.1.1.2
    ## When using local storage (Direct Attached Storage), set this to a location that has sufficient storage space for storing all application data
    storage_dir: /mnt
    ### Storage related parameters ###
    use_external_storage: "true"
    install_vsphere_csi: "true"
    ### vCenter parameters for using external storage (VMFS or vSAN datastores) ###
    vcenter_ip: 10.x.x.x
    vcenter_name: vcenter01.vmware.com
    vcenter_username: [email protected]
    vcenter_password: xxxxxxxxx
    ## List of data centers that are part of your cluster
    vcenter_data_centers:
      - testdatacenter
    vcenter_insecure: "true"
    ## The datastore URL. To locate, go to your vCenter -> datastores -> your datastore -> Summary -> URL
    ## Note: All VMs must be on the same datastore!
    datastore_url: ds:///vmfs/volumes/vsan:527e4e6193eacd65-602e106ffe383d68/
    Note:
    • vcenter_ip: IP address or the FQDN of the vCenter.
    • vcenter_name: Name of the vCenter as shown in the vSphere Console after you log in to the vCenter.
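    Before running the playbooks in the following steps, you can optionally check the edited vars.yml file for YAML syntax errors. This is a sketch that assumes python3 with the PyYAML module is available inside the Deployment Container (Ansible requires PyYAML, so it is normally present):
    python3 -c 'import yaml; yaml.safe_load(open("/root/k8s-installer/scripts/ansible/vars.yml")); print("vars.yml parses cleanly")'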
  8. Execute the prepare command inside the Deployment Container.
    Note: If you used a non-empty passphrase during SSH key generation (required for passwordless SSH communication), you must execute the following commands inside the Deployment Container before running the Ansible script.
    [root@wdc-10-214-147-149 ~]# eval "$(ssh-agent -s)"
    Agent pid 3112829
    [root@wdc-10-214-147-149 ~]# ssh-add ~/.ssh/id_rsa
    Enter passphrase for /root/.ssh/id_rsa:                       <== Enter the non-empty passphrase that you provided during SSH key generation
    Identity added: /root/.ssh/id_rsa ([email protected])
    cd /root/k8s-installer/
    export ANSIBLE_CONFIG=/root/k8s-installer/scripts/ansible/ansible.cfg
    ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become 
    Note: Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
  9. Execute the Kubernetes Cluster installation command inside the Deployment Container.
    Note: If the vCenter password is commented out in the vars.yml file, you are prompted to provide the vCenter password when the following Ansible script is executed.
    cd /root/k8s-installer/
    ansible-playbook  -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
    Note:
    • Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
    • Ensure that the Kubernetes installation is successful and that the message k8s Cluster Deployment successful is displayed on the console.
  10. Verify the node status after the Kubernetes cluster deployment.
    kubectl get nodes
    Note: Ensure that all the nodes are in the Ready state before starting the VMware Telco Cloud Service Assurance deployment.
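    The output is similar to the following illustrative example; node names, roles, ages, and versions depend on your environment, and every node must report Ready in the STATUS column:
    NAME              STATUS   ROLES           AGE   VERSION
    control-plane-1   Ready    control-plane   25m   v1.x.x
    worker-node-1     Ready    <none>          22m   v1.x.x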
  11. Verify that the Harbor pods are up and running.
    kubectl get pods | grep harbor
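    The exact pod names and counts vary by deployment. In an illustrative example such as the following, every Harbor pod must show the Running status with all of its containers ready:
    harbor-core-5f8c9d7b6d-abcde       1/1     Running   0          12m
    harbor-registry-6d9f8c7b5c-fghij   2/2     Running   0          12m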
  12. After the Kubernetes deployment is complete, the next step is to deploy VMware Telco Cloud Service Assurance. Refer to Start the VMware Telco Cloud Service Assurance deployment.
    Note: If the Kubernetes deployment fails while waiting for the nodelocaldns pods to come up, re-run the Kubernetes installation script. The Kubernetes deployment resumes from that point.