This topic lists the roles and privileges, prerequisites, and YUM server deployment required for deploying VMware Telco Cloud Service Assurance on VMs with Native Kubernetes, for the demo footprint (with Local PV and vSAN) and the production footprint (vSAN).

Roles and Privileges

Roles and Privileges Required for Creating the VMs

A VMware vCenter administrator role, or an equivalent administrator role, is required for creating the VMs.

Roles and Privileges for Deploying the Kubernetes Cluster
Note: This step is required only for the vSAN VM Based deployment, not for the Local PV VM Based deployment.
For Kubernetes cluster deployment, you must have the VMware vCenter administrator role or an equivalent administrator role. If you do not, you must create a VMware vCenter user with the following roles and privileges.
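
If you want to confirm that your vCenter credentials work and inspect the roles defined in vCenter before starting the cluster deployment, the following minimal sketch uses the open-source pyVmomi library (pip install pyvmomi). It is an illustration only, not part of the VMware tooling; the host name and credentials are placeholders.

  # Minimal sketch: connect to vCenter and list the roles defined there.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect

  ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
  si = SmartConnect(host="vcenter.example.com", user="deploy-user@vsphere.local",
                    pwd="change-me", sslContext=ctx)
  try:
      content = si.RetrieveContent()
      for role in content.authorizationManager.roleList:
          print(role.roleId, role.name)
  finally:
      Disconnect(si)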

Roles and Privileges for Deploying the VMware Telco Cloud Service Assurance

The VMware Telco Cloud Service Assurance deployment itself does not require any VMware vCenter access.

Prerequisites

Specifications for Cluster VMs
  1. Virtual Machines (VMs) with the following specifications for creating the Kubernetes cluster:
    1. Supported OS: Oracle Linux 8.x
    2. Resources:
      • For the demo footprint, refer to the System Requirements for Demo Footprint.
      • For the production footprint, refer to the VMware Telco Cloud Service Assurance Sizing Sheet.
      • Local Storage:
        • Ensure that the partition or directory where the VMware Telco Cloud Service Assurance application will be deployed, for example /mnt (specified later during the VMware Telco Cloud Service Assurance application deployment), has the following storage capacity:
          • Remote PV VM Based deployment: 200 GB
          • Local PV VM Based deployment: 600 GB
    3. root user access.
    4. Docker version 23.0.0 must be installed on all the VMs.
    5. Python version 3.6.8 must be installed on all the VMs.
    6. For vSAN datastores, all the VMs, including the VMs that form the Kubernetes control plane and worker nodes, must be on the same vSAN datastore. Currently, other storage types are not supported.
    7. Ensure that you have the IP addresses and login credentials for the VMs. The same credentials are used on all the VMs.
      Note: Ensure that all the nodes have static IP addresses or IP address to MAC address bindings so that the IP addresses do not change across restarts.
    8. All VMs must be time synchronized through NTP services; see the prerequisite check sketch after this list.
  2. Specifications for the Deployer Host:
    • SSH connectivity to the cluster VMs.
    • Docker version 23.0.0 must be installed.
    • Network connectivity to the cluster VMs.
    • At least 80 GB of disk space to download and extract the installer bundle.
    • The deployer host must be time synchronized through NTP services.
    • Internet access to download the Deployment Container, K8s Installer, and VMware Telco Cloud Service Assurance deployer from VMware Customer Connect.
      Note: If the deployment host does not have Internet connectivity, follow the steps described in the DarkSite Deployment host section to obtain the artifacts.
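
The following Python sketch illustrates how you can verify several of the prerequisites above on each cluster VM and on the deployer host: the installed Docker and Python versions, NTP clock synchronization, and the free space on the application partition. It is not part of the VMware tooling; the /mnt path and version expectations are taken from the requirements above, so adjust them for your environment.

  # Illustrative prerequisite check; run on each cluster VM and on the deployer host.
  # Written against Python 3.6, the version required on the cluster VMs.
  import shutil
  import subprocess
  import sys

  def cmd(args):
      try:
          return subprocess.run(args, stdout=subprocess.PIPE,
                                universal_newlines=True).stdout.strip()
      except FileNotFoundError:
          return ""

  # Docker version: 23.0.0 is required on the cluster VMs and the deployer host.
  print("Docker:", cmd(["docker", "--version"]) or "NOT INSTALLED")

  # Python version: 3.6.8 is required on the cluster VMs.
  print("Python:", sys.version.split()[0])

  # NTP synchronization: Oracle Linux 8 reports the sync state through timedatectl.
  print("Clock sync:", cmd(["timedatectl", "show", "-p", "NTPSynchronized", "--value"]))

  # Free space on the partition that holds the application data, for example /mnt:
  # 600 GB for Local PV deployments, 200 GB for remote (vSAN) PV deployments.
  free_gb = shutil.disk_usage("/mnt").free / 1024 ** 3
  print("Free space on /mnt: {:.0f} GB".format(free_gb))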

Deployment of YUM Repository Server

The Kubernetes deployment requires a local YUM server, from which the dependent libraries required for the Kubernetes cluster deployment are installed. For more information on YUM server creation, refer to Steps to Create the Oracle Linux Yum Repository Server.

Note: If you already have a local YUM server deployed, you can use the same server during the Kubernetes cluster deployment.
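
Before starting the Kubernetes cluster deployment, you can confirm that the cluster VMs reach the YUM server. The following sketch fetches repomd.xml, the metadata index that yum/dnf downloads first; the repository URL is a placeholder for your local server.

  # Illustrative reachability check for a local YUM repository.
  from urllib.request import urlopen

  # Replace with the base URL of your local repository.
  REPO_URL = "http://yum.example.local/ol8_baseos_latest/repodata/repomd.xml"

  with urlopen(REPO_URL, timeout=10) as resp:
      print(REPO_URL, "->", resp.getcode())  # 200 means the repository is reachable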