The Telco Cloud Infrastructure Tier is the foundation for the Telco Cloud and is used by all Telco Cloud derivatives including Telco Cloud Platform and Telco Cloud Platform RAN.

Figure 1. Telco Cloud Infrastructure Tier

The infrastructure tier focuses on the following major objectives:

  • Provide a horizontal, disaggregated platform that supports hardware and Network Functions from any vendor

  • Provide the Software-Defined Telco Cloud Infrastructure

  • Provide the Software-Defined Storage

  • Provide the Software-Defined Networking (Telco Cloud Platform Advanced Only)

  • Implement the Virtual Infrastructure Manager (VIM)

Note:

VMware Integrated OpenStack (VIO) is no longer a VIM option with Telco Cloud Platform.

The physical tier focuses on infrastructure (servers and network switches), while the infrastructure tier focuses on the applications that form the Telco Cloud foundations.

In the Telco Cloud infrastructure tier, access to the underlying physical infrastructure is controlled and allocated to the management and network function workloads. The infrastructure tier consists of the hypervisors on the physical hosts and the hypervisor management components, spanning the virtual management, networking, and storage layers as well as the business continuity and security areas.

The infrastructure tier is divided into pods, which are grouped into domains such as the Management Domain and the Compute Domain.

Management Pod

The Management pod is crucial for the day-to-day operations and observability of the Telco Cloud. The Management pod hosts the management and operational applications from all tiers of the Telco Cloud. Applications such as Telco Cloud Automation that are part of the Platform tier reside in the management domain.

The following list describes the components associated with the entire Telco Cloud. Some components are specific to Telco Cloud Platform and VNF deployments (such as Telco Cloud Platform Essentials with VMware Cloud Director), while others are exclusive to Telco Cloud Platform and RAN stacks for CNF deployments.

  • Management vCenter Server: Controls the management workload domain.

  • Resource vCenter Servers: Control one or more workload domains.

  • Management NSX Cluster (Telco Cloud Platform Advanced Only): Provides SDN functionality for the management cluster, such as overlay networks, VRFs, firewalling, and micro-segmentation. Implemented as a cluster of three nodes.

  • Resource NSX Clusters (Telco Cloud Platform Advanced Only): Provide SDN functionality for one or more resource domains, such as overlay networks, routing and VRFs, firewalling, and micro-segmentation. Implemented as a cluster of three nodes.

  • VMware Cloud Director Cells (VIM): Provide tenancy and control for VNF-based workloads.

  • NSX Advanced Load Balancer management and edge nodes (Telco Cloud Platform Advanced Only): The load balancer controller and service edges provide L4 load balancing and L7 ingress services. NSX ALB is not a part of Telco Cloud Platform Essentials and requires additional licensing in Telco Cloud Platform Advanced.

  • Aria Operations Cluster: Collects metrics and determines the health of the Telco Cloud.

  • VMware Aria Operations™ for Logs Cluster: Collects logs for troubleshooting the Telco Cloud.

  • VMware Telco Cloud Manager and Control-Plane Nodes: ETSI SOL-based NFVO, G-VNFM, and CaaS management platforms used to design and orchestrate the deployment of Kubernetes clusters, and to onboard and lifecycle-manage CNFs.

  • Aria Operations for Networks (Telco Cloud Platform Advanced Only): Collects multi-tier networking metrics and flow telemetry for network-level troubleshooting.

  • VMware Live Recovery™: Facilitates the implementation of BCDR plans.

  • VMware Aria Automation Orchestrator™: Runs custom workflows in various programming or scripting languages.

  • VMware Telco Cloud Service Assurance (Telco Cloud Platform Advanced Only): Performs monitoring and closed-loop remediation for 5G Core and RAN deployments.

  • Telco Cloud Automation Airgap servers: Facilitate Tanzu Kubernetes Grid deployments in air-gapped environments with no Internet connectivity.

Note:
  • For architectural reasons, some deployments might choose to deploy specific components outside of the Management cluster. This approach can be used for a distributed management domain as long as the required constraints are met, that is, no more than 150 ms of latency between the ESXi hosts and the various management components.

  • Each component has its own latency and connectivity constraints. Additional components can be hosted in the Management Cluster, for example, the RabbitMQ and NFS VMs used by VMware Cloud Director. Additional elements such as directory services and NTP can also be hosted in the management domain if required. Consider security and resource isolation when installing third-party components into the management domain.

To ensure a responsive management domain, do not oversubscribe the management domain CPU or memory. Design considerations must also include management domain failover, so that all components can run within a single management domain if one management location fails.

The management domain can be deployed in a single availability zone (single rack), across multiple availability zones, or across regions in an active/active or active/standby site. The configuration varies according to the availability and failover requirements.

Workload / Compute Pods

The compute pods or far/near edge pods host the Network Functions. Some additional components from the management domain can also be deployed in the workload domain, for example, Cloud Proxies for remote data collection by Aria Operations.

The workload domain is derived from one or more compute pods. A compute pod is synonymous with a vSphere cluster. The pod can include a minimum of two hosts (four when using vSAN) and a maximum of 96 hosts (64 when using vSAN). The compute pod deployment in a workload domain is typically aligned with a rack.

Note:

Configuration maximums can change. For information about current maximums, see VMware Configuration Maximums.
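
The following sketch shows one way to sanity-check cluster sizes against these limits, using pyVmomi (the Python SDK for the vSphere API). The vCenter address and credentials are placeholders, and the 96/64-host ceilings are hard-coded from the figures above; always validate them against the current VMware Configuration Maximums.

    # A minimal sketch, assuming pyVmomi is installed and the placeholder
    # connection details are replaced with real values.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            # 64-host limit with vSAN, 96 without (figures cited above)
            vsan = cluster.configurationEx.vsanConfigInfo.enabled
            limit = 64 if vsan else 96
            print(f"{cluster.name}: {len(cluster.host)}/{limit} hosts"
                  f"{' (vSAN)' if vsan else ''}")
        view.Destroy()
    finally:
        Disconnect(si)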

The only distinction between compute pods and far/near-edge pods is the pod size. The far/near edge pods are smaller and distributed throughout the network, whereas the compute pods are larger and co-located with the Management Pod within a single data center.

In addition to these pods, the RAN environment combines far/near edge pods with single-host cell sites.

Note:

Far edge sites can be deployed as a 2-node vSAN cluster with an external witness for small vSAN-based deployments, but this topology is not considered for compute workload pods.

The distribution and quantity of cell sites are higher than those of far/near-edge or compute pods. Due to factors such as cost, power, cooling, and space, a cell site has fewer active components (a single server) and operates in a constrained environment.

Each workload domain can host up to 2,500 ESXi hosts and 40,000 VMs. The hosts can be arranged in any combination of compute or far/near-edge pods (clusters) and individual cell sites.

Note:
  • Cell sites are added as standalone hosts in vSphere, and they do not have cluster features such as HA, DRS, and vSAN.

  • To maintain room for growth and to avoid overloading a component, utilization should not exceed 75-80% of a component's documented maximums, as illustrated in the sketch after this note.
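
As a simple illustration of this headroom rule, the following sketch checks a planned inventory against the per-workload-domain maximums cited above (2,500 hosts and 40,000 VMs). The 80% threshold and the maximums mirror this section and should be validated against VMware Configuration Maximums.

    # A minimal sketch of the 75-80% headroom rule described above.
    MAXIMUMS = {
        "hosts_per_workload_domain": 2500,   # figure cited in this section
        "vms_per_workload_domain": 40000,    # figure cited in this section
    }
    HEADROOM = 0.80  # do not plan beyond 80% of any documented maximum

    def check_capacity(planned: dict) -> None:
        """Flag any metric that exceeds the headroom threshold."""
        for metric, maximum in MAXIMUMS.items():
            used = planned.get(metric, 0)
            ratio = used / maximum
            status = "OK" if ratio <= HEADROOM else "OVER HEADROOM"
            print(f"{metric}: {used}/{maximum} ({ratio:.0%}) {status}")

    # Example: 2,100 hosts is 84% of the maximum, so it is flagged.
    check_capacity({"hosts_per_workload_domain": 2100,
                    "vms_per_workload_domain": 30000})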

The Edge pod is deployed as an additional component in the workload domain for Telco Cloud Platform Advanced architectures. The Edge pod provides North/South routing from the Telco Cloud to the IP Core and other external networks. Without NSX, connectivity to external networks is based on L2.

The deployment architecture of an edge pod is similar to the workload domain pods when using ESXi as the hypervisor. This edge pod is different from NSX Bare Metal edge pods.

Virtual Infrastructure Management

The Telco Cloud Infrastructure tier includes a common resource orchestration platform called Virtual Infrastructure Manager (VIM) for traditional VNF workloads. Telco Cloud Platform Essentials and Telco Cloud Platform Advanced support Cloud Director as the VIM.

The VIM interfaces with other components of Telco Cloud Platform (vSphere, NSX) to provide a single interface for managing VNF deployments.

VMware Cloud Director

VMware Cloud Director is used for the cloud automation plane. It pools and abstracts the virtualization platform resources into virtual data centers. It provides multi-tenancy features and self-service access for tenants through a native graphical user interface or API. The API allows programmable access both for tenant consumption and for provider cloud management.
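
As an illustration of that programmable access, the following sketch authenticates against the Cloud Director OpenAPI as a provider and retrieves a session token. The hostname, credentials, and API version are placeholders; confirm the endpoint and supported versions against the documentation for your Cloud Director release.

    # A minimal sketch, assuming the requests library and a reachable
    # Cloud Director instance; all connection details are placeholders.
    import requests

    VCD = "https://vcd.example.com"
    API_VERSION = "37.0"  # assumed version; check GET /api/versions first

    # Provider (system administrator) login. Tenants authenticate against
    # /cloudapi/1.0.0/sessions with user@org credentials instead.
    resp = requests.post(
        f"{VCD}/cloudapi/1.0.0/sessions/provider",
        auth=("administrator@system", "changeme"),
        headers={"Accept": f"application/json;version={API_VERSION}"},
    )
    resp.raise_for_status()
    token = resp.headers["X-VMWARE-VCLOUD-ACCESS-TOKEN"]
    print("Authenticated; bearer token acquired for subsequent API calls.")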

The cloud architecture contains management components deployed in the management cluster and resource groups for hosting the tenant workloads. Some of the reasons for the separation of the management and compute resources include:

  • Different SLAs such as resource reservations, availability, and performance for user plane and control plane workloads

  • Separation of responsibilities between the CSP and NF providers

  • Consistent management and scaling of resource groups

The management cluster runs the cloud management components. vSphere resource pools are used as independent units of compute within the workload domains.
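
To show how guaranteed resources can be carved out for network functions, the following sketch creates a child resource pool with CPU and memory reservations using pyVmomi. The reservation values, pool name, and the cluster object (obtained as in the earlier pyVmomi sketch) are placeholders for illustration.

    # A minimal sketch; reservation sizes and names are placeholders, and
    # `cluster` is a vim.ClusterComputeResource from an existing connection.
    from pyVmomi import vim

    def create_nf_resource_pool(cluster: vim.ClusterComputeResource,
                                name: str = "nf-tenant-a") -> vim.ResourcePool:
        """Create a child resource pool with guaranteed CPU and memory."""
        def allocation(reservation: int) -> vim.ResourceAllocationInfo:
            alloc = vim.ResourceAllocationInfo()
            alloc.reservation = reservation      # MHz for CPU, MB for memory
            alloc.expandableReservation = False  # hard guarantee, no borrowing
            alloc.limit = -1                     # no cap beyond the reservation
            alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.normal)
            return alloc

        spec = vim.ResourceConfigSpec()
        spec.cpuAllocation = allocation(20000)     # reserve 20 GHz of CPU
        spec.memoryAllocation = allocation(65536)  # reserve 64 GB of memory
        return cluster.resourcePool.CreateResourcePool(name=name, spec=spec)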

VMware Cloud Director is deployed as one or more cells in the management domain. Some of the common terminologies used in VMware Cloud Director include the following (a short API sketch after this list shows how some of these objects can be enumerated):

  • Catalog is a repository of vApp and VM templates and media available to tenants for VNF deployment. Catalogs can be local to an Organization (Tenant) or shared across Organizations.

  • External Network provides the egress connectivity for Network Functions. External networks can be VLAN-backed port groups or NSX networks.

  • Network Pool is a collection of VM networks that can be consumed by the VNFs. Network pools can be Geneve or VLAN / Port-Group backed for Telco Cloud.

  • Organization is a unit of tenancy within VMware Cloud Director. An organization represents a logical domain and encompasses its own security, networking, access control, network catalogs, and resources for consumption by network functions.

  • Organization Administrator manages users, resources, services, and policy in an organization.

  • Organization Virtual Data Center (OrgVDC) is a set of compute, storage, and networking resources provided for the deployment of a Network Function. Different methods exist for allocating guaranteed resources to the OrgVDC to ensure that appropriate resources are allocated to the Network Functions.

  • Organization VDC Networks are networking resources that are available within an organization. They can be isolated to a single organization or shared across multiple organizations. The types of Organization VDC networks are Routed, Isolated, Direct, and Imported network segments.

  • Provider Virtual Data Center (pVDC) is an aggregate set of resources from a single VMware vCenter. pVDC contains multiple resource pools or vSphere clusters and datastores. OrgVDCs are carved out of the aggregate resources provided by the pVDCs.

  • vApp is a container for one or more VMs and their connectivity. The vApp is the common unit of deployment into an OrgVDC from within VMware Cloud Director.
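
To make these constructs concrete, the following sketch lists the Organizations visible to an authenticated session, reusing the bearer token from the earlier login sketch. The /cloudapi/1.0.0/orgs endpoint and the response field names are assumptions to verify against the Cloud Director OpenAPI documentation for your release.

    # A minimal sketch, continuing from the session-token example above;
    # the endpoint path and response fields should be verified against
    # the Cloud Director OpenAPI documentation.
    import requests

    VCD = "https://vcd.example.com"
    API_VERSION = "37.0"
    token = "<bearer token from the login sketch>"

    resp = requests.get(
        f"{VCD}/cloudapi/1.0.0/orgs",
        headers={
            "Accept": f"application/json;version={API_VERSION}",
            "Authorization": f"Bearer {token}",
        },
    )
    resp.raise_for_status()
    for org in resp.json().get("values", []):
        print(f"Organization: {org['name']} (enabled: {org.get('isEnabled')})")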

Note:

When upgrading Cloud Director, ensure that external platforms integrating with Cloud Director use API versions that are not deprecated or removed. To determine which API versions are supported, deprecated, or removed, refer to the release notes for each Cloud Director release.
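
One way to check this programmatically is the unauthenticated GET /api/versions endpoint, which advertises the API versions a Cloud Director instance supports together with a deprecation flag. The XML namespace and element names below reflect the legacy vCloud API and are assumptions to confirm against your release.

    # A minimal sketch; element and attribute names should be confirmed
    # against the API documentation for your Cloud Director release.
    import requests
    import xml.etree.ElementTree as ET

    VCD = "https://vcd.example.com"
    NS = "{http://www.vmware.com/vcloud/versions}"

    resp = requests.get(f"{VCD}/api/versions")  # no authentication required
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for info in root.findall(f"{NS}VersionInfo"):
        version = info.findtext(f"{NS}Version")
        deprecated = info.get("deprecated", "false")
        print(f"API version {version}: deprecated={deprecated}")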

RabbitMQ

RabbitMQ is the default message queue that is required for communication between Cloud Director and Telco Cloud Automation. It acts as an intermediary for messaging, providing applications a common platform to send and receive messages.

Cloud Director requires the configuration of RabbitMQ (AMQP) in notification mode to interface with Telco Cloud Automation.

Because queues are mirrored across nodes, messages are consumed identically regardless of the node to which a client connects. RabbitMQ provides high availability and scalability.
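
As an illustration of consuming these notifications, the following sketch uses the pika AMQP client to bind a queue to the exchange that Cloud Director publishes to. The broker address, credentials, exchange name, and routing key are assumptions and must match the AMQP settings configured in Cloud Director.

    # A minimal sketch using the pika AMQP client; broker details and the
    # exchange name are assumptions matching a typical Cloud Director
    # notification setup.
    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(
            host="rabbitmq.example.com",
            credentials=pika.PlainCredentials("vcd", "changeme"),
        )
    )
    channel = connection.channel()

    # Cloud Director publishes notifications to a topic exchange configured
    # under its AMQP settings; "systemExchange" is assumed here.
    channel.exchange_declare(exchange="systemExchange", exchange_type="topic",
                             durable=True)
    result = channel.queue_declare(queue="", exclusive=True)
    channel.queue_bind(exchange="systemExchange", queue=result.method.queue,
                       routing_key="#")  # receive all notification topics

    def on_message(ch, method, properties, body):
        print(f"Notification received: {body[:200]}")  # truncated for readability

    channel.basic_consume(queue=result.method.queue,
                          on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()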