A VMware Telco Cloud Operations deployment consists of a set of virtual machines (VMs) deployed on VMware ESXi hypervisors. It is recommended, but not required, that the ESXi infrastructure on which VMware Telco Cloud Operations is deployed be managed by VMware vCenter.

Role for Each Virtual Machine

Each virtual machine (or node) in the cluster must have one of the following roles.
Virtual Machine Node Description
control-plane-node The node that hosts the user interface. It typically has lower CPU and memory requirements than other nodes.
elasticworker The node that hosts the main data store for the cluster and generally has larger storage requirements than other nodes.
arangoworker The node that hosts the topology store for the cluster.
kafkaworker The node that hosts the main data bus for the cluster.
domainmanager The node that hosts the data collectors for the cluster.

Hardware Requirements

A VMware Telco Cloud Operations deployment requires that sufficient virtual CPU, memory, and disk space be available to support the deployment of the desired VMware Telco Cloud Operations footprint. For optimal performance, it is recommended, but not required, that each VMware Telco Cloud Operations VM be deployed on a different ESXi host.

It is recommended that each ESXi host be provisioned with headroom beyond the minimum CPU and RAM required to host its virtual machines: one additional CPU and 6 GB of additional RAM.
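
As a quick illustration of the headroom recommendation, the following is a minimal sketch that adds the recommended extra CPU and RAM to a VM's requirements; the example figures are taken from the 2.5 K footprint table in the next section, and the helper function is illustrative only.

```python
# Per-host sizing sketch: resources an ESXi host should have free for one VM,
# including the recommended headroom (1 extra CPU, 6 GB extra RAM).
HEADROOM_CPU = 1      # additional CPU recommended per ESXi host
HEADROOM_RAM_GB = 6   # additional RAM (GB) recommended per ESXi host

def host_requirement(vm_cpu, vm_ram_gb):
    """Return (CPU, RAM GB) to keep free on the host for this VM."""
    return vm_cpu + HEADROOM_CPU, vm_ram_gb + HEADROOM_RAM_GB

# Example: a 2.5 K elasticworker VM needs 3 CPUs and 25 GB RAM.
print(host_requirement(3, 25))  # -> (4, 31)
```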

Footprint Specification

The following tables specify the requirements for deploying different footprints. Each table lists the number of VMs required for each type of node, which is particularly important when High Availability is required, along with the number of virtual CPUs, the main memory (RAM), and the total disk size for each virtual machine (node).

Note: Before you begin deployment, make sure you have adequate resources to support the footprint you choose. For correct operation, the virtual machines of each type must be deployed in the numbers and sizes specified in the tables.
Steps:
  1. Choose a footprint from the tables listed below.
  2. Deploy the number of VMs of each type as listed in the Number of VMs column. Follow the Manual Deployment Process or use the Automated Deployment Process.
  3. After deploying the OVA for each VM, verify that the VM hardware specifications match the table below for the chosen footprint (a minimal verification sketch follows these steps). The value under Hard Disk 3 Size indicates the size of the data disk on each virtual machine and is included in the Total Disk Size value.
    • If deploying VMware Telco Cloud Operations manually, you must configure the number of CPUs and the memory, and resize only Hard Disk 3 of each VM to match the values in the following tables. Refer to the Manual Deployment Process for configuration steps.
    • If using the Automated Deployment Process, the configuration is done by the tool and no action is required.
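
The following is a minimal verification sketch for step 3, not part of the product tooling. It encodes the 2.5 K (non-HA) rows from the table below; the check_vm helper and its example values are illustrative only.

```python
# Illustrative check of a manually deployed VM against the chosen footprint.
# Figures are the 2.5 K non-HA rows from the footprint table below.
FOOTPRINT_2_5K = {
    # node type: (number of VMs, CPU, RAM GB, total disk GB, Hard Disk 3 GB)
    "control-plane-node": (1, 4, 17, 364, 100),
    "elasticworker":      (1, 3, 25, 1188, 1024),
    "arangoworker":       (1, 4, 25, 264, 100),
    "kafkaworker":        (1, 4, 25, 264, 100),
    "domainmanager":      (1, 3, 29, 264, 100),
}

def check_vm(node_type, cpu, ram_gb, disk3_gb):
    """Report any mismatch between a deployed VM and the footprint spec."""
    _, want_cpu, want_ram, _, want_disk3 = FOOTPRINT_2_5K[node_type]
    problems = []
    if cpu != want_cpu:
        problems.append(f"CPU is {cpu}, expected {want_cpu}")
    if ram_gb != want_ram:
        problems.append(f"RAM is {ram_gb} GB, expected {want_ram} GB")
    if disk3_gb != want_disk3:
        problems.append(f"Hard Disk 3 is {disk3_gb} GB, expected {want_disk3} GB")
    return problems or ["OK"]

print(check_vm("elasticworker", 3, 25, 1024))  # -> ['OK']
```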

Non-High Availability Footprints

2.5 K Footprint

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 1 3 25 1,188 1,024
arangoworker 1 4 25 264 100
kafkaworker 1 4 25 264 100
domainmanager 1 3 29 264 100
Total for all VMs 5 18 121 2,344

25 K Footprint

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 1 8 25 10,404 10,240
arangoworker 1 12 35 264 100
kafkaworker 1 12 35 464 300
domainmanager 1 12 33 464 300
Total for all VMs 5 48 145 11,960

High-Availability Footprints

2.5 K High Availability

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 3 3 25 1,188 1,024
arangoworker 1 4 25 264 100
kafkaworker 3 4 25 264 100
domainmanager 2 3 29 264 100
Total for all VMs 10 35 250 5,512

25 K High Availability

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 3 8 25 10,404 10,240
arangoworker 1 12 35 264 100
kafkaworker 3 12 35 464 300
domainmanager 2 12 33 464 300
Total for all VMs 10 100 298 34,160

50 K High Availability

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 3 16 37 20,644 20,480
arangoworker 2 22 55 264 100
kafkaworker 3 22 67 764 600
domainmanager 3 14 33 764 600
Total for all VMs 12 204 538 66,380

100 K High Availability

Node Number of VMs CPU per VM RAM per VM (GB) Total Disk Size per VM (GB) Hard Disk 3 Size per VM (GB)
Control Plane Node (control-plane-node) 1 4 17 364 100
elasticworker 3 16 37 20,644 20,480
arangoworker 3 22 55 264 100
kafkaworker 3 22 67 764 600
domainmanager 4 14 33 764 600
Total for all VMs 14 240 626 67,480
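
To confirm that the environment has adequate resources for a chosen footprint, the cluster-wide totals can be derived from the per-VM rows. The following minimal sketch uses the 2.5 K High Availability rows above; the same calculation applies to any of the tables.

```python
# Aggregate capacity sketch for the 2.5 K High Availability footprint.
# Rows: (number of VMs, CPU per VM, RAM GB per VM, total disk GB per VM).
HA_2_5K = [
    (1, 4, 17, 364),    # control-plane-node
    (3, 3, 25, 1188),   # elasticworker
    (1, 4, 25, 264),    # arangoworker
    (3, 4, 25, 264),    # kafkaworker
    (2, 3, 29, 264),    # domainmanager
]

total_vms  = sum(n for n, *_ in HA_2_5K)
total_cpu  = sum(n * cpu for n, cpu, _, _ in HA_2_5K)
total_ram  = sum(n * ram for n, _, ram, _ in HA_2_5K)
total_disk = sum(n * disk for n, _, _, disk in HA_2_5K)

print(total_vms, total_cpu, total_ram, total_disk)  # -> 10 35 250 5512
```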

Each footprint is sized for the expected metric volume, which depends on several dimensions, including the number of devices being managed, the number of VMware Telco Cloud Operations collectors configured, and the number of external collectors or systems using the Gateway service for data ingestion into VMware Telco Cloud Operations, among others.

Larger footprints require additional resources and processing compared to smaller footprints. One footprint property to consider is the Available Slots property, which is the number of parallel slots available for performing streaming operations. Each KPI, Threshold, Alert, or Enrichment stream that is deployed uses some of the available slots, and each larger footprint requires more slots to process its higher volume of data.
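
To make the slot accounting concrete, the following is a hypothetical sketch only: the available-slot count, the assumption of one slot per stream, and the stream counts are illustrative placeholders, not published product values.

```python
# Hypothetical slot-accounting sketch; all numbers below are placeholders.
available_slots = 16          # placeholder: read the footprint's Available Slots property
deployed_streams = {          # placeholder counts of deployed streaming operations
    "KPI": 4,
    "Threshold": 3,
    "Alert": 2,
    "Enrichment": 1,
}

used_slots = sum(deployed_streams.values())   # assumes one slot per stream (illustrative)
print(f"{used_slots}/{available_slots} slots in use")
if used_slots > available_slots:
    print("Not enough slots: remove streams or move to a larger footprint.")
```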

Performance and Scalability for Different Deployments

The following table provides the scale parameters for different deployments of VMware Telco Cloud Operations.
Footprint Extra Small (HA/Non-HA) 2.5 K Small (HA/Non-HA) 25 K Small-Medium (HA) 50 K Medium (HA) 100 K
Number of Metrics/5 min (in million) 1 million 10 million 20 million 40 million
Number of VeloCloud vEdges 300 3 K 6.5 K 6.5 K
Number of Viptela vEdges 100 2 K 4 K 4 K
Number of raw metrics (from the Smarts metric collector and the VeloCloud/Viptela collectors) 570 K 5.9 million 12 million 23 million
Number of metrics from M&R Gateway (Flows + Metrics) 400 K 4 million 7.7 million 13 million
Number of Traffic Flows (native) 1 K 1.5 K 2.5 K 5 K
Number of SAMs supported 1 4 8 16
Number of notifications processed per second 200 250 300 350
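
As a small illustration of how these scale parameters can guide footprint selection, the following sketch picks the smallest footprint whose Metrics/5 min capacity covers an expected volume. The other rows of the table should be checked in the same way; the selection helper is illustrative only.

```python
# Illustrative footprint selection using the "Number of Metrics/5 min" row above.
FOOTPRINT_CAPACITY = [            # (footprint, metrics per 5 minutes)
    ("Extra Small (2.5 K)", 1_000_000),
    ("Small (25 K)",        10_000_000),
    ("Small-Medium (50 K)", 20_000_000),
    ("Medium (100 K)",      40_000_000),
]

def smallest_footprint(metrics_per_5_min):
    """Return the smallest footprint whose metric capacity is sufficient."""
    for name, capacity in FOOTPRINT_CAPACITY:
        if metrics_per_5_min <= capacity:
            return name
    return "No listed footprint supports this volume"

print(smallest_footprint(7_500_000))   # -> Small (25 K)
```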

vCenter

It is recommended that the VMware Telco Cloud Operations ESXi hosts and VMs be managed by vCenter. Additional resources might be required to host a vCenter when one is not already available.
Note: The VMware Telco Cloud Operations automated deployment tool requires that the target ESXi infrastructure is managed by vCenter.

Web Browsers

The following web browsers are supported:

Browser Version
Google Chrome 87 or later
Mozilla Firefox 68 or later

Software

The following software versions are supported:
  • ESXi 6.7 and 7.0
  • vCenter 6.7 and 7.0

Network

The following describes networking requirements and recommendations for the VMware Telco Cloud Operations deployment.
Networking Description
Network Connectivity Connectivity is required between vCenter and the ESXi hosts.
IP Addresses Use IPv4 addressing. IPv6 addressing is not supported.
Host Topology

It is strongly recommended to create a cluster and add all ESXi hosts to it so that vSphere HA and other cluster-wide features can be used.

Deployment to ESXi hosts that are not in the cluster is supported.

All ESXi hosts should be placed either inside the cluster or all outside the cluster.

Virtual Machine Deployment (based on topology)

1. By specifying only the cluster name: vSphere determines the ESXi host on which each VM is deployed.

2. By specifying only the ESXi IP addresses. There are two possibilities (summarized in the sketch at the end of this section):

- If the ESXi hosts are in a cluster, each VM is deployed to the specified ESXi host; however, if DRS is turned on, vSphere determines the host.

- If the ESXi hosts are not in a cluster, each VM is deployed to the specified ESXi host.

Storage
  • It is strongly recommended that shared storage (for example, vSAN) be accessible by all ESXi hosts; shared storage is required for vSphere HA and other cluster-wide features.
  • Directly attached local storage on each ESXi host is also supported.
  • Datastores must be configured.
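
The Virtual Machine Deployment placement rules above can be summarized as a small decision sketch. This is illustrative only; the cluster name and host IP address are placeholders, and the DRS setting is supplied by the operator.

```python
# Illustrative placement decision mirroring the Virtual Machine Deployment rules above.
def placement_target(cluster_name=None, esxi_ip=None,
                     esxi_in_cluster=False, drs_enabled=False):
    """Describe where vSphere places a VM for the given inputs."""
    if cluster_name and not esxi_ip:
        return "vSphere chooses an ESXi host in cluster " + cluster_name
    if esxi_ip and esxi_in_cluster and drs_enabled:
        return "DRS is on: vSphere chooses the ESXi host, not " + esxi_ip
    if esxi_ip:
        return "VM is deployed to the specified ESXi host " + esxi_ip
    return "Specify either a cluster name or an ESXi host IP address"

print(placement_target(cluster_name="tco-cluster"))            # placeholder cluster name
print(placement_target(esxi_ip="192.0.2.21",                   # placeholder host address
                       esxi_in_cluster=True, drs_enabled=True))
```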