This topic lists the standard VM template configuration for VM based VMware Telco Cloud Service Assurance deployment on VMs with native Kubernetes, for both the Demo Footprint (local PV) and the Production Footprint (VMware vSAN datastore or Block Storage). It also describes the roles and privileges, the prerequisites, and the YUM server deployment required for deploying VMware Telco Cloud Service Assurance in these footprints.
| Configurations/Packages | Deployment Host VM | Control Plane Node VM | Worker Node VM | Remote Collector VM |
|---|---|---|---|---|
| Pure V4 or Dual Stack | Yes | Yes | Yes | Yes |
| OS Firewall Disabled | Yes | Yes | Yes | No |
| Crypto Policy on the OS set to Default | No | Yes | Yes | No |
| kubectl package | Yes | No | No | No |
| podman-docker package | Yes | No | No | No |
| docker-CE package (Fresh Deployment) | No | No | No | Yes |
| docker-CE package (Upgrade) | Yes | No | No | Yes |
| yq package | Yes | No | No | No |
| jq package | Yes | No | No | No |
| SE Linux set to Disabled or Permissive | Yes | Yes | Yes | Yes |
| Curl package | Yes | No | No | Yes |
| Net-SNMP package | No | Yes | No | No |
| Enable Disk UUID Flag (disk.EnableUUID=True) | No | Yes | Yes | No |
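As a quick sanity check before deployment, you can verify several of these settings from the command line. The following is a minimal, hedged sketch for the Deployment Host VM (adjust the package list per the table for the VM type you are checking; package names can vary slightly between repositories):
getenforce
rpm -q kubectl podman-docker yq jq curl
The first command should report Disabled or Permissive, and the second should show each required package as installed.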
Roles and Privileges
Roles and Privileges Required for Creating the VMs
An administrator role or an equivalent administrator role of VMware vCenter is required for creating the VMs.
- For roles and privileges, follow the procedure mentioned in the vSphere Roles and Privileges section of Preparing for Installation of vSphere Container Storage Plug-in.
Roles and Privileges for Deploying VMware Telco Cloud Service Assurance
For the VMware Telco Cloud Service Assurance deployment, you do not require any VMware vCenter access.
Prerequisites for Cluster VMs
- Interface names for both Control Plane Nodes and Worker Nodes must be the same. For example, if the Control Plane Nodes have the interface name ens192, then the Worker Nodes must also use ens192.
- VM based VMware Telco Cloud Service Assurance deployment supports IPv4 for the Control Plane Node and Worker Nodes in the cluster VMs.
- VM based deployment does not support hostnames or FQDNs for the Control Plane Node and Worker Nodes while deploying the Kubernetes cluster. IP addresses, rather than hostnames or FQDNs, must be provided for all cluster nodes specified in the vars.yml file.
- You must have three static IP addresses reserved for the VM based cluster and the VMware Telco Cloud Service Assurance deployment:
- Harbor IP Address
- VMware Telco Cloud Service Assurance UI
- Kafka Edge IP
All the above static IP addresses must be in the same subnet as the Cluster Node VMs (Control Plane and Worker Nodes). You can check that each reserved address is not already in use, as shown below.
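A simple, hedged check is to ping each reserved address from a host on the same subnet (the address placeholders are illustrative); no response typically indicates that the address is not already assigned:
ping -c 2 <harbor_ip>
ping -c 2 <tcsa_ui_ip>
ping -c 2 <kafka_edge_ip>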
- The supported version of Python for VMware Telco Cloud Service Assurance 2.4 is 3.6.8. If other versions of Python are installed on the Cluster VMs, they must be uninstalled. You can check the installed version as shown below.
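For example, on each Cluster VM (a minimal check; the exact package names can vary between Oracle Linux and RHEL builds):
python3 --version
rpm -qa | grep -i python3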
- The default FORWARD policy must be set to ACCEPT on all the Control Node VMs.
Command to set the FORWARD policy to ACCEPT:
iptables -P FORWARD ACCEPT
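You can confirm the policy afterwards (a minimal check; the output should include -P FORWARD ACCEPT):
iptables -S FORWARD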
- Ensure that the libsemanage library is installed on all Cluster Node VMs.
Command to verify if the package is installed:
yum list installed | grep libsemanage
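If the package is not listed, it can typically be installed from the local YUM repository (assuming the repository mirrors the required OS packages):
sudo yum install -y libsemanage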
- Virtual Machines (VMs) with the following specifications for creating the Kubernetes cluster.
- Supported OS: Oracle Linux 8.x and RHEL Linux 8.x
- Resources:
- For the Demo Footprint, refer to the System Requirements for Demo Footprint.
- For the Production Footprint, refer to the VMware Telco Cloud Service Assurance Sizing Sheet.
The following are the local disk storage requirements for the Control Node and the Worker Nodes for VM Based Production Deployment.
VM Based Deployment Production Footprint (Local Disk Storage Requirements)
Control Node: 70 GB of available local hard disk.
- The <storage_dir> where the VMware Telco Cloud Service Assurance application will be installed (specified during the Kubernetes install) must have a minimum space of 25 GB.
- The /var/log partition directory must have a minimum of 8 GB of free space.
- The /var partition directory must have a minimum of 5 GB of free space, in addition to the 8 GB of free space required for the /var/log directory.
- The /usr directory must have a minimum of 8 GB of free space.
- The /tmp partition directory must have a minimum of 16 GB of free space.
Worker Node: 250 GB of available local hard disk.
- The <storage_dir> where the VMware Telco Cloud Service Assurance application will be installed (specified during the Kubernetes install) must have a minimum space of 200 GB.
- The /var/log partition directory must have a minimum of 8 GB of free space.
- The /var partition directory must have a minimum of 5 GB of free space, in addition to the 8 GB of free space required for the /var/log directory.
- The /usr directory must have a minimum of 8 GB of free space.
- The /tmp partition directory must have a minimum of 16 GB of free space.
Note: The application pod logs are stored in the /var/log directory. Third-party utilities required for the Kubernetes installation are installed under the /var and /usr directories. The above free space is required for storing the VMware Telco Cloud Service Assurance application data alone; operating system related data is not considered in this free space.
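As a simple way to check the available space on an existing node (a hedged example; replace <storage_dir> with the path you plan to specify during the Kubernetes install):
df -h /var/log /var /usr /tmp <storage_dir>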
- root user access.
- Firewall must be deactivated on the deployment host and on all the Cluster VMs, as shown below.
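Assuming the OS uses firewalld (the default on Oracle Linux 8.x and RHEL 8.x), the service can be stopped and disabled with:
sudo systemctl disable --now firewalld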
- Ensure that connectivity exists between the YUM repository server and the cluster VMs. A quick reachability check is shown below.
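For example, from each cluster VM (assuming the repository is served over HTTP; the URL is a placeholder for your local YUM server):
curl -I http://<yum_server_ip>/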
- On each VM of the cluster, ensure that update-crypto-policies is set to DEFAULT. To check the status, run the following command:
$ update-crypto-policies --show
If the value is not DEFAULT, set it to DEFAULT and reboot the VM:
$ sudo update-crypto-policies --set DEFAULT
Setting system policy to DEFAULT
Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the policy change to fully take effect.
- Python version 3.6.8 must be installed on all the VMs.
- For VMware vSAN datastore or Block Storage, all the VMs must be on the same VMware vSAN datastore or Block Storage, including the VMs that form the Kubernetes control plane and worker nodes. Currently, other storage types are not supported.
- Ensure that you have the IP address and login credentials for the VMs. The same credentials are used on all the VMs.
Note: Ensure that all the nodes have static IP addresses or IP address to MAC bindings so that the IP addresses do not change across restarts.
- All VMs must be time synchronized through NTP services. You can verify synchronization as shown below.
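For example, on each VM (assuming chrony, the default NTP client on Oracle Linux 8.x and RHEL 8.x):
chronyc tracking
Alternatively, timedatectl reports whether the system clock is synchronized.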
- The Kubernetes deployment requires a local YUM server from which the dependent libraries required for the Kubernetes Cluster deployment are installed. For more information on YUM server creation on Oracle Linux, refer to Steps to Create the Oracle Linux Yum Repository Server. For more information on YUM server creation on RHEL, refer to Steps to Create the RHEL YUM Repository Server.
Note: If you already have a local YUM server deployed, then the same server can be used during the Kubernetes Cluster deployment.
- Prepare and configure the cluster VMs for the YUM repository, crypto policy, library installation, and so on, as required for the Kubernetes cluster deployment. For more information on Oracle VMs, refer to Configuring Oracle Node VMs for Kubernetes Cluster Deployment. For more information on RHEL VMs, refer to Configuring RHEL Node VMs for Kubernetes Cluster Deployment. An illustrative repository definition is shown below.
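As a hedged illustration of pointing a cluster VM at a local YUM server (the repository ID, name, and baseurl below are placeholders; use the values for your own mirror, and enable GPG checking if your mirror provides signed packages), create a file such as /etc/yum.repos.d/local.repo:
[local-baseos]
name=Local BaseOS mirror
baseurl=http://<yum_server_ip>/repos/BaseOS/
enabled=1
gpgcheck=0
Then refresh the metadata with yum clean all followed by yum repolist.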
Prerequisites for Deployer Host
- Ensure that you have all necessary prerequisites specified in the Deployment Prerequisites section.
- SSH connectivity to the cluster VMs. A quick check from the deployment host is shown below.
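For example (the user and node IP are placeholders):
ssh <user>@<cluster_node_ip> hostname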
- Internet access to download the Deployment Container, K8s Installer, and VMware Telco Cloud Service Assurance deployer from My Downloads.
Note: If the deployment host does not have internet connectivity, refer to the DarkSite Deployment procedure given in the respective footprint deployments.