This section lists the deployment requirements for the Demo footprint.

The following table shows the infrastructure requirements for each deployment platform in terms of CPU, RAM, and the expected number of nodes. Validate that the required capacity is available before installation; a quick capacity tally is sketched after the note below. The table also lists the expected storage requirements, which are calculated from the footprint size and the desired raw-metric retention period; use the storage size values as the amount of storage to provision for persistent data. Persistent volumes are allocated to the various services and are backed by the persistent storage that each deployment platform supports. Finally, the table provides high-level operational metrics that you can use as guidance when determining whether a given footprint is suitable.
Note: By default, the Demo footprint supports a maximum of five collectors. If one Smarts Integration is added, three collectors are created automatically: one each for topology, notifications, and metrics. Two more collectors can then be configured. To add more than five collectors, add an additional VM as a worker node to the workload cluster.
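As a quick pre-installation check, you can tally the aggregate capacity that the footprint consumes. The following Python sketch is illustrative only and not a supported VMware tool; the figures are taken from the TKG rows of the table that follows.

    # Rough pre-install capacity tally for the Demo footprint (TKG option).
    # Figures come from the requirements table that follows; this is an
    # illustrative sketch, not a supported VMware tool.
    node_groups = [
        # (name, VM count, vCPUs per VM, RAM GB per VM, local disk GB per VM)
        ("TKG management (control plane/worker)", 3, 2, 4, 50),
        ("TKG workload control plane", 3, 4, 16, 50),
        ("TKG workload worker", 4, 16, 32, 100),
    ]
    total_vcpu = sum(n * cpu for _, n, cpu, _, _ in node_groups)
    total_ram_gb = sum(n * ram for _, n, _, ram, _ in node_groups)
    total_disk_gb = sum(n * disk for _, n, _, _, disk in node_groups)
    print(f"vCPUs: {total_vcpu}, RAM: {total_ram_gb} GB, local disk: {total_disk_gb} GB")
    # Plus a 1.5 TB shared datastore across the workload worker node VMs.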
Number of Managed Devices (includes routers, switches, hosts, VMs, CNFs, or pods): Demo (750 devices)
TKG Management Cluster (Control Plane Node/Worker Node)
  • 3 VMs
  • 2 vCPUs per VM (recommended: 2.5 GHz CPU reservation per VM)
  • 4 GB RAM per VM (recommended: 4 GB RAM reservation per VM)
  • 50 GB local disk
Standalone TKG Workload Cluster or TKG Workload Cluster Deployed Through TCA
  Control Plane Node
  • 3 VMs
  • 4 vCPUs per VM (recommended: 2.5 GHz CPU reservation per VM)
  • 16 GB RAM per VM (recommended: 16 GB RAM reservation per VM)
  • 50 GB local disk
  Worker Node
  • 4 VMs
  • 16 vCPUs per VM (recommended: 24 GHz CPU reservation per VM)
  • 32 GB RAM per VM (recommended: 32 GB RAM reservation per VM)
  • 100 GB local disk (a 1.5 TB shared datastore is required across all worker node VMs)
AKS Workload Cluster
  Worker Node
  • 4 VMs
  • 16 vCPUs per VM
  • 32 GB RAM per VM
  • 100 GB local disk
VM-Based Deployment
  Control Plane Node
  • 1 VM
  • 4 vCPUs per VM (recommended: 2.5 GHz CPU reservation per VM)
  • 16 GB RAM per VM (recommended: 16 GB RAM reservation per VM)
  • 70 GB local hard disk
  • The storage_dir where the VMware Telco Cloud Service Assurance application is installed (specified during the Kubernetes install) must have a minimum of 25 GB of free space.
  • The /var/log partition must have a minimum of 8 GB of free space.
  • The /var partition must have a minimum of 5 GB of free space, in addition to the 8 GB of free space required for /var/log.
  • The /usr directory must have a minimum of 8 GB of free space.
  • The /tmp partition must have a minimum of 16 GB of free space.
  Note: Application pod logs are stored in the /var/log directory. Third-party utilities required for the Kubernetes installation are installed under the /var and /usr directories. The free space listed above is required for VMware Telco Cloud Service Assurance application data alone; operating system data is not included.
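A minimal sketch of such a free-space check, assuming Python 3 is available on the node; /path/to/storage_dir is a placeholder for the directory you specify during the Kubernetes install:

    # Minimal free-space check for the VM-based control plane node.
    # Thresholds mirror the minimums listed above. /path/to/storage_dir is a
    # placeholder for the storage_dir chosen during the Kubernetes install.
    # Note: if two paths share a filesystem, their free-space figures overlap.
    import shutil

    requirements_gb = {
        "/path/to/storage_dir": 25,
        "/var/log": 8,
        "/var": 5,   # in addition to the 8 GB required under /var/log
        "/usr": 8,
        "/tmp": 16,
    }
    for path, min_gb in requirements_gb.items():
        free_gb = shutil.disk_usage(path).free / 1024**3
        status = "OK" if free_gb >= min_gb else "TOO LOW"
        print(f"{path}: {free_gb:.1f} GB free (need {min_gb} GB) -> {status}")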

  Worker Node
  • 6 VMs
  • 16 vCPUs per VM (recommended: 24 GHz CPU reservation per VM)
  • 32 GB RAM per VM (recommended: 32 GB RAM reservation per VM)
  • 650 GB local disk per VM
  • The storage_dir where the VMware Telco Cloud Service Assurance application is installed (specified during the Kubernetes install) must have a minimum of 600 GB of free space.
  • The /var/log partition must have a minimum of 8 GB of free space.
  • The /var partition must have a minimum of 5 GB of free space, in addition to the 8 GB of free space required for /var/log.
  • The /usr directory must have a minimum of 8 GB of free space.
  • The /tmp partition must have a minimum of 16 GB of free space.
  Note: Application pod logs are stored in the /var/log directory. Third-party utilities required for the Kubernetes installation are installed under the /var and /usr directories. The free space listed above is required for VMware Telco Cloud Service Assurance application data alone; operating system data is not included.
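The same kind of check applies on each worker node; only the storage_dir minimum changes, as in this sketch (the path again being a placeholder):

    # Worker-node variant: storage_dir needs 600 GB free; the partition
    # minimums (/var/log, /var, /usr, /tmp) are the same as above.
    import shutil
    free_gb = shutil.disk_usage("/path/to/storage_dir").free / 1024**3  # placeholder path
    print("OK" if free_gb >= 600 else f"need 600 GB free, have {free_gb:.1f} GB")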
Operational metrics for the Demo footprint:
  • Number of devices: 750
  • Total number of metrics every five minutes: 300 K
  • Total number of active events: 8 K
  • Number of concurrent Northbound API calls: 10
  • Number of concurrent users: 10
  • Bandwidth utilization for storage traffic: 3.5 Mbps
  • Total disk IOPS (read + write): 100
  • Number of alarm/analytics definitions: 1
  • Remediation rules: 1
  • Maximum number of collectors supported (including topology, notification, and metric collectors): 5
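For context, the metric volume above implies a modest sustained ingest rate; a back-of-envelope calculation:

    # Back-of-envelope ingest rate implied by "300 K metrics every five minutes".
    metrics_per_cycle = 300_000
    cycle_seconds = 5 * 60
    print(f"~{metrics_per_cycle / cycle_seconds:.0f} metrics/second sustained")  # ~1000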
The following notes apply to the Demo footprint:
  • The Demo footprint is for non-production deployments without HA capabilities; it cannot be upgraded or flexibly scaled, and it does not support the backup and restore feature.
  • The Demo footprint system requirements above show the VMware Tanzu Kubernetes Grid management sizing for deployments in which a dedicated TKG management cluster is used for the TKG workload cluster. To size deployments where a single management cluster manages multiple workload clusters, see the VMware Tanzu Kubernetes Grid documentation.
  • vSphere HA must be enabled on the vSphere environment where the VMware Tanzu Kubernetes Grid management and workload clusters, or the VMs with native Kubernetes (VM-based deployment), are deployed.
  • In AKS, by default, the first three worker nodes act as the control plane nodes.
  • It is recommended not to run Demo footprint deployments for more than a month.
  • IPv6 is not supported for VMware Telco Cloud Service Assurance deployment.
  • VM snapshots are not supported for restoring the VMware Telco Cloud Service Assurance setup.