This section outlines the design best practices for the Telco Cloud Automation (TCA) components, including the TCA Manager, TCA Control Plane, NodeConfig Operator, container registry, and CNF designer.

VMware Telco Cloud Automation with infrastructure automation provides a universal 5G Core and RAN deployment experience to service providers. Infrastructure automation for 5G Core and RAN gives telco providers and administrators a virtually zero-touch experience for IT and infrastructure onboarding. The Telco Cloud Automation appliance automates the deployment, configuration, and provisioning of RAN sites.

Network Administrators can provision new telco cloud resources, monitor changes to the RDC and cell sites, and manage other operational activities. VMware Telco Cloud Automation enables consistent, secure infrastructure and operations across Central Data Centers, Regional Data Centers, and Cell Sites with increased enterprise agility and flexibility.

Telco Cloud Automation Components

Telco Cloud Automation is a domain orchestrator that provides life cycle management of VNFs, CNFs, and the infrastructure on which they run. Telco Cloud Automation consists of two major components:

  • Telco Cloud Automation Manager (TCA Manager) provides orchestration and management services for Telco clouds.

  • Telco Cloud Automation Control Plane (TCA-CP) performs multi‑VIM/CaaS registration, synchronizes multi‑cloud inventories, and collects faults and performance logs from infrastructure to network functions.

TCA-CP and TCA Manager components work together to provide Telco Cloud Automation services. TCA Manager connects with TCA-CP nodes through site pairing. It relies on the inventory information captured from TCA-CP to deploy and scale Tanzu Kubernetes clusters. TCA manager does not communicate with the VIM directly. Workflows are always posted by the TCA manager to the VIM through TCA-CP.

The Kubernetes cluster bootstrapping environment is completely abstracted into TCA-CP. The binaries and cluster plans required to bootstrap the Kubernetes clusters are pre-bundled into the TCA-CP appliance. After the base OS image templates are imported into the respective vCenter Servers, Kubernetes admins can log into the TCA manager and start deploying Kubernetes clusters directly from the TCA manager console.

The following design recommendations apply to the Telco Cloud Automation components:

Design Recommendation: Integrate the TCA Manager with Active Directory for more control over user access.
Design Justification:
  • TCA-CP SSO integrates with vCenter Server SSO.
  • LDAP enables centralized and consistent user management.
Design Implication: Requires additional components to manage in the Management cluster.

Design Recommendation: Deploy a single instance of the TCA Manager (of a permissible size) to manage all TCA-CP endpoints.
Design Justification:
  • Provides a single point of entry into CaaS.
  • Simplifies inventory control, user onboarding, and CNF onboarding.
Design Implication: Larger deployments with significant scale may require multiple TCA Managers.

Design Recommendation: Register the TCA Manager with the management vCenter Server.
Note: Use an account with relevant permissions to complete all actions.
Design Justification: The management vCenter Server is used for TCA user onboarding.
Design Implication: None.

Design Recommendation: Deploy a dedicated TCA-CP node to control the Tanzu Kubernetes management cluster.
Design Justification: Required for the deployment of the Tanzu Kubernetes management cluster.
Design Implication: TCA-CP requires additional CPU and memory in the management cluster.

Design Recommendation: Each TCA-CP node controls a single vCenter Server. Multiple vCenter Servers in one location require multiple TCA-CP nodes.
Design Justification: The TCA-CP to vCenter Server mapping is one-to-one and cannot be distributed.
Design Implication:
  • Each time a new vCenter Server is deployed, a new TCA-CP node is required.
  • To minimize recovery time in case of TCA-CP failure, each TCA-CP node must be backed up independently, along with the TCA Manager.

Design Recommendation: Deploy the TCA Manager and TCA-CP on a shared LAN segment for management communication.
Design Justification:
  • Simplifies connectivity between the Telco Cloud Platform management components.
  • The TCA Manager and TCA-CP share the same security trust domain.
Design Implication: None.

Design Recommendation: Share the vRealize Orchestrator (vRO) deployment across all TCA-CP and vCenter Server pairs.
Design Justification: A consolidated vRO deployment reduces the number of vRO nodes to deploy and manage.
Design Implication: Requires vRO to be highly available if multiple TCA-CP endpoints depend on a shared deployment.

Design Recommendation: Deploy a vRO cluster using three nodes.
Design Justification: Ensures the high availability of vRO for all TCA-CP endpoints.
Design Implication: vRO redundancy requires an external load balancer.

Design Recommendation: Schedule TCA Manager and TCA-CP backups around the same time as the SDDC infrastructure components.
Note: Your backup frequency and schedule might vary based on your business needs and operational procedures.
Design Justification:
  • Minimizes database synchronization issues upon restore.
  • Proper backup of all TCA and SDDC components is crucial to restore the system to its working state in the event of a failure.
  • Time-consistent backups taken across all components require less time and effort upon restore.
Design Implication: Backups are scheduled manually. The TCA admin must log in to each component and configure the backup schedule and frequency.

CaaS Infrastructure

Tanzu Kubernetes Cluster automation in Telco Cloud Automation starts with Kubernetes templates that capture deployment configurations for a Kubernetes cluster. The cluster templates are a blueprint for Kubernetes cluster deployments and are intended to minimize repetitive tasks, enforce best practices, and define guard rails for infrastructure management.
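
To make the blueprint idea concrete, the following minimal sketch shows the kind of attributes a workload cluster template typically captures. It is written as a plain Python dictionary; the attribute names and values are illustrative assumptions, not the exact TCA template schema.

```python
# Illustrative sketch of a workload cluster template (hypothetical schema).
# A real TCA Kubernetes cluster template is authored through the TCA console
# and contains additional fields; this only shows the typical attribute classes.
workload_cluster_template = {
    "name": "wc-ran-small",
    "clusterType": "WORKLOAD",
    "controlPlane": {"replicas": 3, "cpu": 4, "memoryGiB": 16, "diskGiB": 50},
    "nodePools": [
        # One pool per performance profile, distinguished by node labels.
        {"name": "oam", "replicas": 2, "cpu": 8, "memoryGiB": 32,
         "labels": {"pool": "oam"}},
        {"name": "data-plane", "replicas": 2, "cpu": 16, "memoryGiB": 64,
         "labels": {"pool": "data-plane"}},
    ],
    "cni": "antrea",                      # or calico, per CNF requirements
    "csi": ["vsphere-csi", "nfs_client"],
    "helmVersion": "v3",
    "tags": ["ran", "edge"],              # consumed by the Cluster Automation Policy
}
```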

A policy engine maps Telco Cloud Infrastructure resources to the cluster templates to honor the SLA required for each template profile. Policies can be defined based on the tags assigned to the underlying VIM or based on roles and role-permission bindings. As a result, only the appropriate VIM resources are exposed to a given set of users, automating the path from the SDDC to Kubernetes cluster creation.
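
As a simplified illustration of the tag-based mapping, the sketch below filters discovered VIM resources whose tags satisfy a template's requirements. It is a hypothetical stand-in for the Cluster Automation Policy engine, which additionally evaluates roles and role-permission bindings.

```python
def eligible_vim_resources(template_tags, vim_inventory):
    """Return the VIM resources whose tags cover all tags required by a template.

    Hypothetical helper: real policy evaluation in TCA also considers role and
    role-permission bindings, not only tags.
    """
    required = set(template_tags)
    return [vim for vim in vim_inventory if required <= set(vim["tags"])]

# Example: only the tagged cell-site compute is offered for a RAN edge template.
inventory = [
    {"name": "cell-site-01", "tags": {"ran", "edge", "sriov"}},
    {"name": "rdc-cluster-a", "tags": {"core"}},
]
print(eligible_vim_resources({"ran", "edge"}, inventory))
# -> [{'name': 'cell-site-01', ...}]
```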

The CaaS Infrastructure automation in Telco Cloud Automation consists of the following components:

  • TCA Kubernetes Cluster Template Designer: The TCA admin uses the Tanzu Kubernetes Cluster designer to create Kubernetes cluster templates that help deploy Kubernetes clusters. A Kubernetes cluster template defines the composition of a Kubernetes cluster. A typical template includes attributes such as the number and size of control plane and worker nodes, the Kubernetes CNI, the Kubernetes storage interface (CSI), and the Helm version. The TCA Kubernetes Cluster template designer does not capture CNF-specific Kubernetes attributes but instead leverages the VMware NodeConfig Operator through late binding. For late binding details, see TCA VM and Node Config Automation Design.

  • SDDC Profile and Inventory Discovery: The Inventory management component of Telco Cloud Automation can discover the underlying infrastructure for each VIM associated with a TCA-CP appliance. Hardware characteristics of the vSphere node and vSphere cluster are discovered using the TCA inventory service. The platform inventory data is made available by the discovery service to the Cluster Automation Policy engine to assist the Kubernetes cluster placement. TCA admin can add tags to the infrastructure inventory to provide additional business logic on top of the discovered data.

  • Cluster Automation Policy: The Cluster Automation policy defines the mapping of the TCA Kubernetes Cluster template to infrastructure. VMware Telco Cloud Platform allows TCA admins to map the resources using the Cluster Automation Policy to identify and group the infrastructure to assist users in deploying high-level components on them. The Cluster Automation Policy indicates the intended usage of the infrastructure. During cluster creation, TCA validates whether the Kubernetes template requirements are met by the underlying infrastructure resources.

  • Kubernetes Bootstrapper: When the deployment requirements are met, TCA generates a deployment specification. The Kubernetes Bootstrapper uses the Kubernetes cluster APIs to create a cluster based on the deployment specification. Bootstrapper is a component of the TCA-CP.
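
For context, the cluster specification that the Bootstrapper works with is declarative. The sketch below renders a minimal upstream Cluster API Cluster object for the vSphere provider (CAPV) as an approximation of such a specification; it is illustrative only and is not the exact deployment specification that TCA generates.

```python
import yaml  # PyYAML

# Minimal upstream Cluster API "Cluster" object for the vSphere provider.
# Illustrative approximation only; the TCA-generated specification includes
# node pools, machine templates, add-ons, and other details.
cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "workload-01", "namespace": "default"},
    "spec": {
        "clusterNetwork": {
            "pods": {"cidrBlocks": ["192.168.0.0/16"]},
            "services": {"cidrBlocks": ["10.96.0.0/12"]},
        },
        "controlPlaneRef": {
            "apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
            "kind": "KubeadmControlPlane",
            "name": "workload-01-control-plane",
        },
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
            "kind": "VSphereCluster",
            "name": "workload-01",
        },
    },
}
print(yaml.safe_dump(cluster, sort_keys=False))
```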

The following design recommendations apply to the CaaS infrastructure:

Design Recommendation: Deploy v2 clusters or convert existing workload clusters into v2 clusters.
Design Justification:
  • Provides a framework that supports the deployment of additional PaaS-type components into the cluster.
  • Provides access to advanced cluster topologies.
Design Implication: Requires adaptation of any automation process that is currently used to build v1 clusters.

Design Recommendation: When creating a Tanzu Kubernetes management cluster template, define a single network label for all nodes across the cluster.
Design Justification: Tanzu Kubernetes management cluster nodes require a single NIC per node.
Design Implication: None.

Design Recommendation: When creating workload clusters, define only the network labels required for Tanzu Kubernetes management and CNF OAM traffic.
Design Justification:
  • Network labels are used to create vNICs on each node.
  • Data plane vNICs that require SR-IOV are added as part of node customization during CNF deployment.
  • Late binding of vNICs saves resource consumption on the SDDC infrastructure. Resources are allocated only during CNF instantiation.
Design Implication: None.

Design Recommendation: When creating workload cluster templates, enable Multus CNI for clusters that host Pods requiring multiple NICs.
Design Justification:
  • Multus CNI enables the attachment of multiple network interfaces to a Pod.
  • Multus acts as a "meta-plugin": a CNI plugin that can call multiple other CNI plugins (see the sketch after this table).
Design Implication: Multus is an upstream plugin and follows the community support model.

Design Recommendation: When creating workload cluster templates, enable Whereabouts if cluster-wide IPAM is required for secondary Pod NICs.
Design Justification:
  • Simplifies IP address assignment for secondary Pod NICs.
  • Whereabouts assigns addresses cluster-wide, unlike the node-scoped default IPAM commonly used with plugins such as macvlan.
Design Implication: Whereabouts is now available for deployment through the Add-on Framework.

Design Recommendation: When defining workload clusters, enable the nfs_client CSI for multi-access read/write support.
Note: When using vSAN, RWX volumes can be supported natively through vSAN File Services and the vSphere CSI driver.
Design Justification: Some CNF vendors require ReadWriteMany (RWX) persistent volumes. The NFS provider supports the Kubernetes RWX persistent volume type.
Design Implication: The NFS backend must be onboarded separately, outside of Telco Cloud Automation.

Design Recommendation: When defining workload clusters, enable CSI zoning support for workload clusters that span vSphere clusters or standalone ESXi hosts.
Design Justification: Enables Kubernetes to use vSphere storage resources that are not equally available to all nodes.
Design Implication: Requires creating zone tags on vSphere objects.

Design Recommendation: When defining a workload cluster that hosts CNFs with different performance profiles, create a separate node pool for each profile. Define unique node labels to distinguish node members among node pools (see the sketch after this table).
Design Justification:
  • Node labels can be used by the Kubernetes scheduler for CNF placement logic.
  • Node pools simplify CNF placement when a cluster is shared between CNFs with different placement requirements.
Design Implication: Too many node pools might lead to resource underutilization.

Design Recommendation: Pre-define a set of infrastructure tags and apply them to SDDC infrastructure resources based on the CNF and Kubernetes resource requirements.
Design Justification: Tags simplify the grouping of infrastructure components. Tags can be based on hardware attributes or business logic.
Design Implication: Infrastructure tag mapping requires administrative-level visibility into the infrastructure composition.

Design Recommendation: Pre-define a set of CaaS tags and apply them to each Kubernetes cluster template defined by the TCA admin.
Design Justification: Tags simplify the grouping of Kubernetes templates. Tags can be based on hardware attributes or business logic.
Design Implication:
  • Kubernetes template tag mapping requires advanced knowledge of CNF requirements.
  • Kubernetes template mapping can be performed by the TCA admin with assistance from Kubernetes admins.

Design Recommendation: Pre-define a set of CNF tags and apply them to each CSAR file uploaded to the CNF catalog.
Design Justification: Tags simplify the searching of CaaS resources.
Design Implication: None.
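
The sketch referenced in the Multus, Whereabouts, and node pool recommendations above shows, under stated assumptions, how these pieces fit together at CNF deployment time: a Multus NetworkAttachmentDefinition that delegates IPAM to Whereabouts, and a Pod that attaches the secondary interface and is pinned to a node pool through a node label. The manifests are built as Python dictionaries and dumped to YAML (requires PyYAML); the names, master interface, address range, image, and node-pool label are hypothetical.

```python
import json
import yaml  # PyYAML

# Secondary network for a CNF: a Multus NetworkAttachmentDefinition whose
# CNI config delegates IP address management to Whereabouts (cluster-wide IPAM).
nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "oam-macvlan", "namespace": "cnf-a"},
    "spec": {
        # Multus expects the delegate CNI configuration as a JSON string.
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth1",                      # hypothetical node interface
            "ipam": {
                "type": "whereabouts",
                "range": "172.16.10.0/24",         # hypothetical cluster-wide range
            },
        }),
    },
}

# A Pod that requests the secondary interface through the Multus annotation and
# is scheduled onto a specific node pool via a (hypothetical) node label.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "cnf-a-oam",
        "namespace": "cnf-a",
        "annotations": {"k8s.v1.cni.cncf.io/networks": "oam-macvlan"},
    },
    "spec": {
        "nodeSelector": {"pool": "oam"},           # matches the node pool label
        "containers": [{"name": "app", "image": "registry.example.com/cnf-a:1.0"}],
    },
}

print(yaml.safe_dump_all([nad, pod], sort_keys=False))
```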

Important:

After deploying the resources with Telco Cloud Automation, you cannot rename infrastructure objects such as datastores or resource pools.

Core Capabilities of Telco Cloud Automation

  • xNF management (G‑xNFM) unifies and standardizes the network function management across VM‑ and container‑based infrastructures.

  • Domain Orchestration (NFVO) simplifies the design and management of centralized or distributed multi‑vendor network services.

  • Multi‑Cloud Infrastructure and CaaS Automation eases multi-cloud registration (VIM/Kubernetes), enables centralized CaaS management, synchronizes multi‑cloud inventories/resources, and collects faults and performance logs from infrastructure to network functions.

  • Policy and Placement Engine enables intent‑based and multi‑cloud workload/policy placements from the network core to edge, and from private to public clouds.