Note:
  • If the deployment host does not have Internet connectivity to download the required files from My Downloads, follow the steps described in the DarkSite Deployment for Kubernetes section to get the files onto the deployment host. Then proceed with the remaining steps in this section (Step 5 to Step 14).
  • If the Harbor password has changed, you must redeploy VMware Telco Cloud Service Assurance so that its applications run without failing. For more information, see Procedure to Redeploy If The Harbor Credentials are Changed.

Procedure

  1. Log in to the deployment host.
  2. Ensure that podman-docker is installed on the deployment host.
    Note:

    Ignore the following message and warnings when executing docker commands.

    Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
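    For example, a quick way to confirm that podman-docker is present (a minimal check, assuming an RPM-based deployment host):
    # On deployment host
    $ rpm -q podman-docker    # reports the installed podman-docker package
    $ docker --version        # the docker command is provided by podman-docker and reports the podman version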
  3. Download the tar.gz file of the deployment container from My Downloads onto the deployment host under the home directory. The package is named VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-Deployment-Container-2.4.2-497.tar.gz.
    Note: To verify the downloaded package, run the following command on your deployment host.
    $ sha256sum VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz
    This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file in the My Downloads site and ensure that they match.
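    Alternatively, the comparison can be scripted with sha256sum -c. This is a minimal sketch; replace <expected-sha256> with the fingerprint copied from My Downloads (note the two spaces between the fingerprint and the file name):
    # On deployment host
    $ echo "<expected-sha256>  VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz" | sha256sum -c -
    The command reports OK when the fingerprints match and FAILED otherwise.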
  4. Load the tar.gz file of the deployment container into the local image store.
    # On deployment host
    $ docker load -i <dir/on/deployment host>/VMware-Deployment-Container-2.4.2-497.tar.gz
    
    Verify that the deployment container image is loaded.
    
    # On deployment host
    $ docker images
  5. Download the K8s Installer from My Downloads onto the deployment host under the home directory. The package is typically named VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-K8s-Installer-2.3.3-5.tar.gz.
    Note: To verify the downloaded package, run the following command on your deployment host.
    $ sha256sum VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
    This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file on the My Downloads site and ensure that they match.

  6. Extract the K8s Installer as follows. This creates a folder called k8s-installer under the home directory.
    $ tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz
    Note: Always extract the K8s Installer under the /root directory.
  7. Navigate to the k8s-installer directory and verify that it contains two directories named scripts and cluster, as shown below.
    Note: By default, the Kubernetes install logs are stored in $HOME/k8s-installer/ansible.log. If you want to change the log location, update the log_path variable in the file $HOME/k8s-installer/scripts/ansible/ansible.cfg.
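    For example, a minimal check (assuming the installer was extracted under /root):
    # On deployment host
    $ ls $HOME/k8s-installer    # should list the cluster and scripts directories
    If you change the log location, the corresponding line in ansible.cfg looks similar to the following (the path shown is only an illustration):
    log_path = /root/k8s-installer/ansible.log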
  8. Export DOCKER_HOME and launch the deployment container as follows:
    export DOCKER_HOME=$(docker run --rm localhost/deployment:2.4.2-497 sh -c 'echo $HOME')
    docker run \
        --rm \
        -v $HOME:$DOCKER_HOME \
        -v $HOME/.ssh:$DOCKER_HOME/.ssh \
        -v /var/run/podman/podman.sock:/var/run/podman.sock \
        -v $(which podman):/usr/local/bin/podman:ro \
        -v /etc/docker:/etc/docker:rw \
        -v /opt:/opt \
        --network host \
        -it localhost/deployment:2.4.2-497 \
        bash
  9. Update the deployment parameters by editing the /root/k8s-installer/scripts/ansible/vars.yml file inside the Deployment Container.
    1. Configure the general parameters.
      Note: Set the values according to your environment.
      cluster_name: <your-cluster-name>  # Unique name for your cluster
      ansible_user: <your-SSH-username>    # SSH username for the VMs
      ansible_become_password: <your-password>  # SSH password for the VMs
      Update the admin_public_keys_path parameter with the path of the public key generated during SSH key generation.
      admin_public_keys_path: /root/.ssh/id_rsa.pub # Path to the SSH public key. This will be a .pub file under $HOME/.ssh/
      Update the control_plane_ips and worker_node_ips as specified in the following format.
      Note: For the Demo footprint, refer to the System Requirements for Demo Footprint section to get the number of Control Plane Node and Worker Node VMs.
      control_plane_ips: # The list of control plane IP addresses of your VMs. This should be a YAML list.
       - <IP1>
       - <IP2>
      worker_node_ips: # The list of worker node IP addresses of your VMs. This should be a YAML list.
       - <IP3>
       - <IP4>
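      Optionally, before running the playbooks, you can confirm passwordless SSH from the deployment host to each listed VM. A minimal check, where <IP1> is one of the addresses above:
      $ ssh <your-SSH-username>@<IP1> hostname    # should print the VM hostname without prompting for a password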
    2. Update the Deployment Host IP and the YUM server details.
      ## Deployment host IP address
      ## Make sure firewall is disabled in deployment host
      # The IP address of your deployment host
      deployment_host_ip: <your-deployment-host-ip>  
      ## default value is http. Use https for secure communication.
      yum_protocol: http
      # The IP address/hostname of your yum/package repository
      yum_server: <your-yum-server-ip>
      
    3. The Keepalived VIP is used for internal container registry HA. If the default Keepalived VIP is not available, set keepalived_vip to an available virtual IP address.
      keepalived_vip: "192.168.1.101"
      Note: If the default IP given in the vars.yml file is not available, use an available IP in the 192.168.*.* subnet range.
    4. For Harbor Container Registry, uncomment and update the harbor_registry_ip parameter with the selected static IP address.
      ### Harbor parameters ###
      ## The static IP address to be used for Harbor Container Registry
      ## This IP address must be in the same subnet as the VM IPs.
      harbor_registry_ip: <static-IPAddress>
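      Optionally, you can confirm that the selected static IP address is not already in use before assigning it to Harbor. A minimal check from the deployment host (no replies suggest the address is free):
      $ ping -c 2 <static-IPAddress>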
    5. Set the following parameter to a location that has sufficient storage space for storing all application data.
      Note: In the following example, the /mnt file system must have 600 GB of storage space and 744 permissions.
      For example:
      storage_dir: /mnt
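      A quick way to verify the space and permissions on each node VM that provides local storage (a minimal check, assuming standard coreutils):
      $ df -h /mnt                # confirm at least 600 GB of available space
      $ stat -c '%a %n' /mnt      # should report 744 /mnt
      $ chmod 744 /mnt            # adjust the permissions if required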
    6. For local PV, set the following storage parameters to false if they are set to true.
      ### Storage related parameters ###
      # use_external_storage: false
      # install_vsphere_csi: false
    7. For local PV, comment out the following VMware vCenter parameters if they are enabled.
      ### vCenter parameters for using vSAN storage ###
      # vcenter_ip: <your-vCenter-IP>
      # vcenter_name: <your-vCenter-name>
      # vcenter_username: <your-vCenter-username>
      
      # for a complex password i.e. passwords that allow special characters like '{' or '%', please define the password as "vcenter_password: !unsafe '<password>'" to avoid any templating error during execution
      # for further details, please refer: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_advanced_syntax.html
      #vcenter_password:
      
      ## List of data centers that are part of your vSAN cluster
      # vcenter_data_centers:
      #    - <DataCenter> 
      # vcenter_insecure: true # True, if using self signed certificates
      ## The datastore URL. To locate, go to your vCenter -> datastores -> your datastore -> Summary -> URL
      ## Note: All VMs must be on the same datastore!
      # datastore_url: <your-datastore-url>
    Here is a sample snippet of the vars.yml file:
    cluster_name: vmbased-localpvtest
    ansible_user: root
    ansible_become_password: dangerous
    admin_public_keys_path: /root/.ssh/id_rsa.pub
    control_plane_ips:
      - 10.214.174.195
    worker_node_ips:
      - 10.214.174.206
      - 10.214.174.184
      - 10.214.174.52
      - 10.214.174.217
      - 10.214.174.129
      - 10.214.174.215
    ## Deployment host IP address
    ## Make sure firewall is disabled in deployment host
    deployment_host_ip: 10.1.1.1
    ## default value is http. Use https for secure communication.
    yum_protocol: http
    ## IP address/hostname of yum/package repo
    yum_server: 10.198.x.x
    keepalived_vip: "192.168.1.101"
    ### Harbor parameters ###
    ## (Optional) The IP address to be used for the Harbor container registry, if static IPs are available.
    ## This IP address must be in the same subnet as the VM IPs.
    harbor_registry_ip: 10.214.174.x
    ## When using local storage (Direct Attached Storage), set this to a location that has sufficient storage space for storing all application data
    storage_dir: /mnt
    ### Storage related parameters ###
    # use_external_storage: false
    # install_vsphere_csi: false
    ### vCenter parameters for using external storage (VMFS or vSAN datastores) ###
    # vcenter_ip:
    # vcenter_name:
    # vcenter_username:
    # vcenter_password:
    ## List of data centers that are part of your cluster
    # vcenter_data_centers:
    #    - <DataCenter>
    # vcenter_insecure:
    ## The datastore URL. To locate, go to your vCenter -> datastores -> your datastore -> Summary -> URL
    ## Note: All VMs must be on the same datastore!
    # datastore_url:
  10. Execute the prepare command inside the Deployment Container.
    Note: If you used a non-empty passphrase during SSH key generation (required for passwordless SSH communication), you must execute the following commands inside the Deployment Container before running the Ansible script.
    [root@wdc-10-214-147-149 ~]# eval "$(ssh-agent -s)"
    Agent pid 3112829
    [root@wdc-10-214-147-149 ~]# ssh-add ~/.ssh/id_rsa
    Enter passphrase for /root/.ssh/id_rsa: <== Enter the non-empty passphrase provided during SSH key generation
    Identity added: /root/.ssh/id_rsa ([email protected])
    
    root [ ~ ]# cd /root/k8s-installer/
    root [ ~/k8s-installer ]# export ANSIBLE_CONFIG=/root/k8s-installer/scripts/ansible/ansible.cfg  LANG=en_US.UTF-8
    root [ ~/k8s-installer ]# ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
    
    Note: Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
  11. Execute the Kubernetes cluster installation command inside the Deployment Container.
    root [ ~ ]# cd /root/k8s-installer/
    root [ ~/k8s-installer ]# ansible-playbook  -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_caas.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
    Note:
    • Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
    • After the Kubernetes cluster installation completes, the kubeconfig file is generated under /root/.kube/<your-cluster-name>. Export the kubeconfig file using export KUBECONFIG=/root/.kube/<your-cluster-name> and proceed with the following steps to verify that the deployment is successful.
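    For example, to export the kubeconfig and confirm that it is usable (kubectl cluster-info is one quick check):
    root [ ~/k8s-installer ]# export KUBECONFIG=/root/.kube/<your-cluster-name>
    root [ ~/k8s-installer ]# kubectl cluster-info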
  12. Ensure that the Kubernetes installation is successful and that the message CaaS deployment is successful is displayed on the console. Then check the cluster nodes:
    kubectl get nodes
    Note: Ensure that all the nodes are in the Ready state before starting the VMware Telco Cloud Service Assurance deployment.
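    Optionally, one way to wait until every node reports Ready (a minimal sketch; the 10-minute timeout is only an illustration):
    kubectl wait --for=condition=Ready nodes --all --timeout=10m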
  13. Verify that the Harbor pods are up and running.
    kubectl get pods -A | grep harbor
    Note:
    • If the Kubernetes deployment fails while waiting for the nodelocaldns pods to come up, re-run the Kubernetes installation script. The Kubernetes deployment resumes from that point.
    • If the Kubernetes deployment fails because the Python HTTP server is repeatedly killed, refer to Python HTTP server running on the Deployment Host is stopped in the VMware Telco Cloud Service Assurance Troubleshooting Guide to manually bring up the HTTP server.
  14. After the Kubernetes deployment is complete, the next step is to create a new user in Harbor. For more information, see the Create a New User in Harbor section.
  15. After creating a new user in Harbor, the next step is to deploy VMware Telco Cloud Service Assurance. For more information, see the Deployment for Demo Footprint with Local PV section.