A Three-Pod configuration provides flexibility, performance, and a choice of VNF distribution designs for telco workloads.

Footprint

When using vSAN as shared storage for all clusters, use a minimum of four hosts per cluster for the initial deployment, for a total of 12 hosts. This balances the implementation footprint against resiliency while maintaining the operational requirements of each Pod. Mid-level server hardware can be used in this configuration, because the design maximizes scale flexibility.
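
The footprint arithmetic is simple enough to restate directly. The following sketch uses the pod names and the four-host minimum from this design; everything else is illustrative:

```python
# Illustrative footprint arithmetic for the initial Three-Pod deployment.
# Pod names and the four-host minimum come from this design.

PODS = ("Management", "Edge", "Resource")
MIN_HOSTS_PER_VSAN_CLUSTER = 4  # minimum per cluster for vSAN resiliency

total_hosts = len(PODS) * MIN_HOSTS_PER_VSAN_CLUSTER
print(f"Initial deployment: {total_hosts} hosts "
      f"({MIN_HOSTS_PER_VSAN_CLUSTER} per pod across {len(PODS)} pods)")
# Initial deployment: 12 hosts (4 per pod across 3 pods)
```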

Scale Flexibility

The separation of Pods gives the design maximum scalability. Individual Pods can be scaled to meet the target service, delivery, and topology objectives.

The Resource and Edge clusters are sized according to the VNFs and their respective networking requirements. CSPs must work with the VNF vendors to gather the requirements for the VNFs to be deployed. This information is typically available in deployment and sizing guides.
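
As a rough illustration, the requirements gathered from vendor sizing guides can be aggregated into cluster demand. The VNF names and figures in this sketch are hypothetical placeholders, not values from any vendor guide:

```python
# A minimal sizing sketch, assuming hypothetical per-VNF requirements taken
# from vendor deployment and sizing guides. All figures are placeholders.
from dataclasses import dataclass

@dataclass
class VnfRequirement:
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int

vnfs = [
    VnfRequirement("vEPC-control", vcpus=16, memory_gb=64, storage_gb=500),
    VnfRequirement("vIMS", vcpus=24, memory_gb=96, storage_gb=800),
]

# Aggregate demand across the planned VNFs for the Resource cluster.
total_vcpus = sum(v.vcpus for v in vnfs)
total_mem = sum(v.memory_gb for v in vnfs)
total_storage = sum(v.storage_gb for v in vnfs)
print(f"Resource cluster demand: {total_vcpus} vCPUs, "
      f"{total_mem} GB RAM, {total_storage} GB storage")
```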

As more tenants are provisioned, CSPs must add resources to the Resource Pod to support the VNF growth. The increase in VNF workloads in the Resource Pod can lead to an increase in North-South network traffic, which in turn requires additional compute resources in the Edge Pod to scale up the Edge Nodes. When the Edge Nodes are scaled, CSPs must closely monitor and manage the resource consumption and remaining capacity of the Edge Pod. For high performance, an Edge Pod can also comprise bare metal Edges that are grouped in an Edge Cluster in NSX-T Data Center.
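
The following sketch illustrates the kind of capacity watch this implies. The 70 percent trigger and the vCPU figures are assumptions for illustration only, not values from this design:

```python
# A simplified capacity watch for the Edge Pod. The utilization trigger
# and the metrics are hypothetical assumptions.

EDGE_SCALE_THRESHOLD = 0.70  # hypothetical utilization trigger

def edge_pod_needs_scaling(used_vcpus: int, total_vcpus: int) -> bool:
    """Return True when Edge Pod CPU consumption crosses the trigger."""
    return used_vcpus / total_vcpus >= EDGE_SCALE_THRESHOLD

if edge_pod_needs_scaling(used_vcpus=90, total_vcpus=120):
    print("North-South growth: plan additional Edge Node capacity")
```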

A high level of resiliency is provided by using four-host clusters with vSAN storage in the initial deployment. Four-host clusters also ensure a highly available Management Pod design, because clustered management components, such as the vCenter Server active, standby, and witness nodes, can be placed on separate hosts. The same principle applies to clustered OpenStack components such as database nodes.
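
A minimal sketch of the placement rule this implies is shown below. The host and node names are hypothetical, and a real deployment would enforce this with vSphere anti-affinity rules rather than a script:

```python
# A toy placement check for the four-host Management cluster: clustered
# components (for example, vCenter Server active/standby/witness nodes)
# should land on distinct hosts. Host and node names are hypothetical.

placement = {
    "vcenter-active": "mgmt-host-01",
    "vcenter-standby": "mgmt-host-02",
    "vcenter-witness": "mgmt-host-03",
}

def violates_anti_affinity(placement: dict[str, str]) -> bool:
    """True if any two clustered nodes share a host."""
    hosts = list(placement.values())
    return len(hosts) != len(set(hosts))

assert not violates_anti_affinity(placement), "co-located clustered nodes"
```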

The initial number and sizing of the management components in the Management Pod must be planned in advance, so that the capacity required for the Management Pod remains steady over time. When planning the storage capacity of the Resource Pod, the CSP must allow operational headroom for VNF files, snapshots, backups, VM templates, OS images, upgrade files, and log files.
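
As a back-of-the-envelope illustration, headroom can be planned as a factor on top of the raw VNF storage demand. The 30 percent factor and the demand figure below are assumptions, not recommendations from this design:

```python
# Storage planning sketch for the Resource Pod: raw VNF demand plus
# operational headroom for snapshots, backups, templates, OS images,
# upgrade files, and logs. Both figures are hypothetical.

vnf_storage_gb = 1300      # aggregate demand from VNF sizing guides
HEADROOM_FACTOR = 0.30     # assumed operational headroom

planned_capacity_gb = vnf_storage_gb * (1 + HEADROOM_FACTOR)
print(f"Plan for ~{planned_capacity_gb:.0f} GB of usable vSAN capacity")
```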

When the Resource Pod is scaled up by adding hosts to the cluster, the newly added resources are automatically pooled, increasing the capacity available to the compute cluster. New tenants can be provisioned to consume resources from the total pooled capacity. Allocation settings for existing tenants must be modified before those tenants can benefit from the increased resource availability. To scale out the Resource Pod, OpenStack compute nodes are added by using the Integrated OpenStack Manager vSphere Web Client extension.
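
The following sketch illustrates this behavior: adding hosts grows the pooled capacity automatically, while an existing tenant's allocation stays at its configured quota until an operator changes it. All numbers and tenant names are illustrative:

```python
# Pooled-capacity sketch: adding hosts grows the pool automatically, but
# an existing tenant's allocation stays at its configured quota until it
# is explicitly raised. All values are hypothetical.

HOST_VCPUS = 48                      # assumed per-host capacity
pool_vcpus = 4 * HOST_VCPUS          # initial four-host Resource cluster

tenant_quotas = {"tenant-a": 64, "tenant-b": 48}

pool_vcpus += 2 * HOST_VCPUS         # scale up: two hosts added
print(f"Pooled capacity: {pool_vcpus} vCPUs")

# The larger pool benefits new tenants immediately; tenant-a still sees
# only its existing 64-vCPU quota until an operator modifies it.
tenant_quotas["tenant-a"] = 96       # explicit allocation change
```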

In the Edge Pod, additional Edge Nodes can be deployed in the cluster, along with expansion of the physical leaf-spine fabric, to meet higher capacity requirements.

Performance

The platform is optimized for workload acceleration, and the Three-Pod design offers maximum flexibility to achieve this goal. The configuration can support use cases with high-bandwidth requirements. When designing for workload distribution, consider a dedicated accelerated Resource Pod or a hybrid Resource Pod that mixes standard and accelerated hosts. The separate Edge Pod provides not only isolation but also a choice between VM-based and bare metal Edge Nodes.

Function Distribution

Network function designs are evolving toward disaggregated control and data plane functions. This separation also allows data plane functions to be distributed closer to the edge of the network, while the control and management planes remain centralized. Separate Resource and Edge Pods allow for different control plane and data plane configurations and scale designs, depending on the service offerings.

Operations Management

The Management Pod provides centralized operations management functions across the deployment topology, whether in a single data center or distributed across multiple data centers.