A VMware Telco Cloud Operations deployment consists of a set of virtual machines (VMs) deployed on VMware ESXi hypervisors. It is recommended, but not required, that the ESXi infrastructure on which VMware Telco Cloud Operations is deployed be managed by VMware vCenter.
Role for Each Virtual Machine
Each virtual machine (or node) in the cluster must have one of the following roles.
| Virtual Machine Node | Description |
|---|---|
| control-plane-node | The node that hosts the user interface; it typically has lower CPU and memory requirements than other nodes. |
| elasticworker | The node that hosts the main data store for the cluster; it generally has larger storage requirements than other nodes. |
| arangoworker | The node that hosts the topology store for the cluster. |
| kafkaworker | The node that hosts the main data bus for the cluster. |
| domainmanager | The node that hosts the data collectors for the cluster. |
Hardware Requirements
A VMware Telco Cloud Operations deployment requires sufficient virtual CPU, memory, and disk space to support the chosen VMware Telco Cloud Operations footprint. For optimal performance, it is recommended, but not required, that each VMware Telco Cloud Operations VM be deployed on a different ESXi host.
Each ESXi host should be provisioned with headroom beyond the minimum CPU and RAM required to host its virtual machines: one additional CPU and 6 GB of additional RAM are recommended.
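As a minimal arithmetic sketch of this headroom recommendation (the example per-VM figures are taken from the footprint tables below):

```python
# Recommended per-host headroom beyond what the hosted VM itself needs.
HEADROOM_CPU_COUNT = 1  # one additional CPU per ESXi host
HEADROOM_RAM_GB = 6     # six additional GB of RAM per ESXi host

def host_minimum(vm_cpus: int, vm_ram_gb: int) -> tuple[int, int]:
    """Minimum (CPUs, GB of RAM) an ESXi host should provide for one VM."""
    return vm_cpus + HEADROOM_CPU_COUNT, vm_ram_gb + HEADROOM_RAM_GB

# Example: a 25 K footprint kafkaworker VM (12 vCPUs, 35 GB RAM) calls
# for a host with at least 13 CPUs and 41 GB of RAM available.
print(host_minimum(12, 35))  # (13, 41)
```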
Footprint Specification
The following tables specify the requirements for deploying different footprints: the number of VMs required for each type of node (particularly important when High Availability is required), and the number of virtual CPUs, main memory (RAM), and total disk size for each virtual machine (node).
Note: Before you begin deployment, make sure you have adequate resources to support the footprint you choose. For correct operation, the virtual machines of each type must be deployed and sized according to the specifications in the tables.
Steps:
- Choose a footprint from the tables below.
- Deploy the number of VMs of each type listed in the Number of VMs column, following the Manual Deployment Process or the Automated Deployment Process.
- After deploying the OVA for each VM, verify that the VM hardware specifications match the table for the chosen footprint (a scripted verification sketch follows this list). The value under Hard Disk 3 Size indicates the size of the data disk on each virtual machine and is included in the Total Disk Size value.
- If deploying VMware Telco Cloud Operations manually, configure the number of CPUs and the memory, and resize only Hard Disk 3 of each VM to match the values in the following tables. Refer to the Manual Deployment Process for configuration steps.
- If using the Automated Deployment Process, the tool performs this configuration and no action is required.
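If scripted verification helps, the following is a minimal sketch using pyVmomi (an assumption on our part, not a tool shipped with VMware Telco Cloud Operations). The vCenter address, credentials, and expected values are placeholders, and the sketch assumes the VirtualDisk devices are returned in Hard Disk order:

```python
# Sketch: compare deployed VM hardware against the chosen footprint
# using pyVmomi (pip install pyvmomi). Placeholders throughout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Expected per-VM hardware for the chosen footprint (example: 2.5 K).
EXPECTED = {
    "elasticworker": {"cpus": 3, "ram_gb": 25, "disk3_gb": 1024},
    "kafkaworker":   {"cpus": 4, "ram_gb": 25, "disk3_gb": 100},
}

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        spec = EXPECTED.get(vm.name)
        if spec is None:
            continue  # not one of the VMs being checked
        hw = vm.config.hardware
        disks = [d for d in hw.device
                 if isinstance(d, vim.vm.device.VirtualDisk)]
        # Hard Disk 3 is the data disk; convert capacityInKB to GB.
        disk3_gb = disks[2].capacityInKB // (1024 * 1024) if len(disks) > 2 else 0
        ok = (hw.numCPU == spec["cpus"]
              and hw.memoryMB == spec["ram_gb"] * 1024
              and disk3_gb == spec["disk3_gb"])
        print(f"{vm.name}: {'OK' if ok else 'MISMATCH'}")
finally:
    Disconnect(si)
```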
Non-High Availability Footprints
2.5 K Footprint
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 1 | 3 | 25 | 1,188 | 1,024 |
| arangoworker | 1 | 4 | 25 | 264 | 100 |
| kafkaworker | 1 | 4 | 25 | 264 | 100 |
| domainmanager | 1 | 3 | 29 | 264 | 100 |
| Total for all VMs | 5 | 18 | 121 | 2,344 | |
25 K Footprint
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 1 | 8 | 25 | 10,404 | 10,240 |
| arangoworker | 1 | 12 | 35 | 264 | 100 |
| kafkaworker | 1 | 12 | 35 | 464 | 300 |
| domainmanager | 1 | 12 | 33 | 464 | 300 |
| Total for all VMs | 5 | 48 | 145 | 11,960 | |
High-Availability Footprints
2.5 K High Availability
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 3 | 3 | 25 | 1,188 | 1,024 |
| arangoworker | 1 | 4 | 25 | 264 | 100 |
| kafkaworker | 3 | 4 | 25 | 264 | 100 |
| domainmanager | 2 | 3 | 29 | 264 | 100 |
| Total for all VMs | 10 | 35 | 250 | 5,512 | |
25 K High Availability
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 3 | 8 | 25 | 10,404 | 10,240 |
| arangoworker | 1 | 12 | 35 | 264 | 100 |
| kafkaworker | 3 | 12 | 35 | 464 | 300 |
| domainmanager | 2 | 12 | 33 | 464 | 300 |
| Total for all VMs | 10 | 100 | 298 | 34,160 | |
50 K High Availability
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 3 | 16 | 37 | 20,644 | 20,480 |
| arangoworker | 2 | 22 | 55 | 264 | 100 |
| kafkaworker | 3 | 22 | 67 | 764 | 600 |
| domainmanager | 3 | 14 | 33 | 764 | 600 |
| Total for all VMs | 12 | 204 | 538 | 66,380 | |
100 K High Availability
| Node | Number of VMs | CPU per VM | RAM per VM (GB) | Total Disk Size per VM (GB) | Hard Disk 3 Size per VM (GB) |
|---|---|---|---|---|---|
| Control Plane Node (control-plane-node) | 1 | 4 | 17 | 364 | 100 |
| elasticworker | 3 | 16 | 37 | 20,644 | 20,480 |
| arangoworker | 3 | 22 | 55 | 264 | 100 |
| kafkaworker | 3 | 22 | 67 | 764 | 600 |
| domainmanager | 4 | 14 | 33 | 764 | 600 |
| Total for all VMs | 14 | 240 | 626 | 67,480 | |
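The "Total for all VMs" rows are sums over the per-node rows. A minimal sketch of that arithmetic, using the 2.5 K non-HA footprint as input:

```python
# Sketch: compute cluster-wide totals for a footprint from its per-VM
# specs, as in the "Total for all VMs" rows above. The rows below are
# the 2.5 K non-HA footprint; substitute another table's rows as needed.
FOOTPRINT_2_5K = [
    # (node, number_of_vms, cpus_per_vm, ram_gb_per_vm, total_disk_gb_per_vm)
    ("control-plane-node", 1, 4, 17, 364),
    ("elasticworker",      1, 3, 25, 1188),
    ("arangoworker",       1, 4, 25, 264),
    ("kafkaworker",        1, 4, 25, 264),
    ("domainmanager",      1, 3, 29, 264),
]

def totals(rows):
    vms  = sum(n for _, n, _, _, _ in rows)
    cpus = sum(n * c for _, n, c, _, _ in rows)
    ram  = sum(n * r for _, n, _, r, _ in rows)
    disk = sum(n * d for _, n, _, _, d in rows)
    return vms, cpus, ram, disk

print(totals(FOOTPRINT_2_5K))  # (5, 18, 121, 2344), matching the table
```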
Each footprint is sized for the expected volume of metrics, based on a number of dimensions: the devices being managed, the number of VMware Telco Cloud Operations collectors configured, and the number of external collectors or systems using the Gateway service for data ingestion into VMware Telco Cloud Operations, among others.
Larger footprints require additional resources and processing compared to smaller ones. One footprint property to consider is the Available Slots property: the number of parallel slots available for performing streaming operations. Each deployed KPI, Threshold, Alert, or Enrichment stream uses available slots, and each larger footprint provides more slots to process the higher data volumes.
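As an illustration of that slot accounting only, here is a hedged sketch; the one-slot-per-stream assumption and the Available Slots value are placeholders, not values from this document:

```python
# Illustrative slot-budget check. ASSUMPTION (not from this document):
# each deployed KPI, Threshold, Alert, or Enrichment stream consumes
# one slot. Read the actual Available Slots value from your deployment.
def slots_required(kpi: int, threshold: int, alert: int, enrichment: int) -> int:
    return kpi + threshold + alert + enrichment

available_slots = 16  # placeholder value
needed = slots_required(kpi=4, threshold=3, alert=2, enrichment=1)
print("fits" if needed <= available_slots else "needs a larger footprint")
```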
Performance and Scalability for Different Deployments
The following table provides sample managed capacity for each footprint. Each footprint has been tested to handle all the noted capacities together on a single instance of VMware Telco Cloud Operations.
| Footprint | Extra Small (HA/Non-HA) 2.5 K | Small (HA/Non-HA) 25 K | Small-Medium (HA) 50 K | Medium (HA) 100 K |
|---|---|---|---|---|
| Number of metrics per 5 minutes | 1 million | 10 million | 20 million | 40 million |
| Number of raw metrics (from the Smarts metric collector and the VeloCloud/Viptela collector) | 250 K | 2.5 million | 7.5 million | 10 million |
| Number of metrics from the M&R Gateway (flows + metrics) | 500 K | 5 million | 7.5 million | 10 million |
| Number of Kafka Mapper metrics | 250 K | 2.5 million | 5 million | 20 million |
| Number of traffic flows (native) | 1 K | 1.5 K | 2.5 K | 5 K |
| Number of SAMs supported | 1 | 4 | 8 | 16 |
| Number of notifications processed per second | 200 | 250 | 450 | 450 |
| Total number of notifications | 15 K | 105 K | 205 K | 500 K |
| Total time to sync all notifications from SAM to VMware Telco Cloud Operations | 4 mins | 8 mins | 12 mins | 22 mins |
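Metric volume alone does not determine the footprint (the other rows above also apply), but as a minimal sketch of a first-pass selection against the tested per-5-minute metric capacities:

```python
# Sketch: pick the smallest tested footprint whose per-5-minute metric
# capacity (from the table above) covers a projected ingest volume.
CAPACITY = [  # (footprint, metrics per 5 minutes)
    ("2.5 K",  1_000_000),
    ("25 K",  10_000_000),
    ("50 K",  20_000_000),
    ("100 K", 40_000_000),
]

def smallest_footprint(projected_metrics_per_5min: int) -> str:
    for name, cap in CAPACITY:
        if projected_metrics_per_5min <= cap:
            return name
    raise ValueError("volume exceeds the largest tested footprint")

print(smallest_footprint(12_000_000))  # "50 K"
```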
VeloCloud: Results captured for one VCO with 6 K vEdges in a multitenant environment; applicable to all footprints except 2.5 K:
| Measurements taken for VMware Telco Cloud Operations FM collector | Values |
|---|---|
| CPU consumed for discovery collector - Average | 1.5 vCPU |
| CPU consumed for monitoring collector - Average | 1.3 vCPU |
| MEM consumed for discovery collector - Peak | 1 GB |
| MEM consumed for monitoring collector - Peak | 800 MB |
| Bandwidth utilization for VMware Telco Cloud Operations Discovery collector node | 29 Mbps |
| Bandwidth utilization for VMware Telco Cloud Operations Monitoring collector node | 18 Mbps |
| Measurements taken for VMware Telco Cloud Operations PM collector | Values |
|---|---|
| CPU consumed by PM collector - Average | 0.5 vCPU |
| MEM consumed by PM collector - Peak | 0.75 GB |
| Bandwidth utilization for VMware Telco Cloud Operations PM collector node | 7 Mbps |
| Measurements taken for Smarts ESM | Values |
|---|---|
| End-to-end discovery time | 2 hr 6 mins |
| CPU consumed by ESM process - Average | 3 vCPU |
| MEM consumed by ESM process - Peak | 3 GB |
| Bandwidth utilization for Smarts VM | 39 Mbps |
| Vertical Scaling Limit | # vEdges |
|---|---|
| 1 VCO - Maximum scale point for Single Tenant User | 4000 |
| 1 VCO - Maximum scale point for Multi Tenant MSP User | 4000 |
| 1 VCO - Maximum scale point for Multi Tenant Operator User | 6000 |
| Topology details per VCO (# vEdges, # Interfaces, # NetworkConnections) | 6000 vEdges, 72000 Interfaces, 12000 NetworkConnections |
Note: Tests were performed in a low-latency environment (< 1 ms) under the following conditions: one ESM domain manager managing two VCOs, each with 6 K vEdges (12 K vEdges total).
| SDWAN Solution | Metrics Type | Count | Description |
|---|---|---|---|
| VeloCloud | Number of metrics per vEdge | 6 | Metrics that apply only to the vEdge, such as CPU and memory |
| | Number of metrics per interface of a vEdge | 26 | Metrics that apply to each interface of a vEdge, such as Rx and Tx |

| Managed Entities (vEdges and Interfaces) | Total Metrics |
|---|---|
| Metrics per VCO with 6 K vEdges | 1,908,000 (1.9 M) |
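The total follows from the topology and per-entity metric counts above; a quick check:

```python
# Worked arithmetic for the 1.9 M total above: 6,000 vEdges at 6 metrics
# each, plus 72,000 interfaces at 26 metrics each.
total = 6_000 * 6 + 72_000 * 26
print(total)  # 1908000, i.e. ~1.9 M metrics per VCO
```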
Viptela: Results captured for one vManage with 2 K vEdges; applicable to all footprints except 2.5 K:
| Measurements taken for VMware Telco Cloud Operations FM collector | Values |
|---|---|
| CPU consumed for discovery collector - Average | 2.1 vCPU |
| CPU consumed for monitoring collector - Average | 1.9 vCPU |
| MEM consumed for discovery collector - Peak | 1 GB |
| MEM consumed for monitoring collector - Peak | 750 MB |
| Bandwidth utilization for VMware Telco Cloud Operations Discovery collector node | 25 Mbps |
| Bandwidth utilization for VMware Telco Cloud Operations Monitoring collector node | 12 Mbps |
| Measurements taken for VMware Telco Cloud Operations PM collector | Values |
|---|---|
| CPU consumed by PM collector - Average | 2.5 vCPU |
| MEM consumed by PM collector - Peak | 1 GB |
| Bandwidth utilization for VMware Telco Cloud Operations PM collector node | 47 Mbps |
| Measurements taken for Smarts IP | Values |
|---|---|
| End-to-end discovery | 27 mins |
| CPU consumed by IP process - Average | 4 vCPU |
| MEM consumed by IP process - Peak | 15 GB |
| Bandwidth utilization for Smarts VM | 31 Mbps |
| Topology details per vManage (# vEdges, # Interfaces, # Tunnels) | 2 K vEdges, 30 K Interfaces, 60 K Tunnels, 120 K Tunnel Interfaces |
Note:
- Running multiple IP Domains that manage Viptela topology in a single server/VM is currently not recommended.
- Tests were performed in a low-latency environment (< 1 ms) under the following conditions: one IP domain manager managing two vManages, each with 2 K vEdges (4 K vEdges total).
| SDWAN Solution | Metrics Type | Count | Description |
|---|---|---|---|
| Viptela | Number of metrics per vEdge | 15 | Metrics that apply only to the vEdge, such as CPU and memory |
| | Number of metrics per interface of a vEdge | 14 | Metrics that apply to each interface of a vEdge, such as Rx and Tx |
| | Number of metrics per tunnel interface of a vEdge | 21 | Metrics that apply to each tunnel interface |

| Managed Entities (vEdges, Interfaces, and Tunnel Interfaces) | Total Metrics |
|---|---|
| Metrics per vManage with 2 K vEdges | 2,970,000 (2.97 M) |
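Again, the total follows from the topology and per-entity metric counts above:

```python
# Worked arithmetic for the 2.97 M total above: 2,000 vEdges at 15 metrics,
# 30,000 interfaces at 14 metrics, and 120,000 tunnel interfaces at 21 metrics.
total = 2_000 * 15 + 30_000 * 14 + 120_000 * 21
print(total)  # 2970000, i.e. ~2.97 M metrics per vManage
```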
vCenter
It is recommended that the ESXi hosts and VMs of a VMware Telco Cloud Operations deployment are managed by vCenter. Additional resources might be required to host a vCenter instance when one is not already available.
Note: The VMware Telco Cloud Operations automated deployment tool requires that the target ESXi infrastructure is managed by vCenter.
Web Browsers
The following web browsers are supported:
| Browser | Version |
|---|---|
| Google Chrome | 87 or later |
| Mozilla Firefox | 68 or later |
Software
The following software versions are supported:
- ESXi 6.7 and 7.0
- vCenter 6.7 and 7.0
Network
The following describes networking requirements and recommendations for the VMware Telco Cloud Operations deployment.
| Networking | Description |
|---|---|
| Network Connectivity | Connectivity is required between vCenter and the ESXi hosts. |
| IP Addresses | Use IPv4 addressing. IPv6 addressing is not supported. |
| Host Topology | It is strongly recommended to create a cluster and add all ESXi hosts to it (to use vSphere HA and other cluster-wide features). Deployment to ESXi hosts that are not in a cluster is supported. Place the ESXi hosts either all in a cluster or all outside of one. |
| Virtual Machine Deployment (based on topology) | 1. Specify only the cluster name: vSphere determines the ESXi host on which to deploy each VM. 2. Specify only the ESXi IP addresses: if the hosts are in a cluster, each VM is deployed to the specified host unless DRS is turned on, in which case vSphere determines the host; if the hosts are not in a cluster, each VM is deployed to the specified host. |
Storage
- Shared storage accessible by all ESXi hosts (for example, vSAN) is strongly recommended; it is required for vSphere HA and other cluster-wide features.
- Directly attached local storage on each ESXi host is also supported.
- Datastores must be configured.