A Three-Pod configuration provides flexibility, performance, and VNF distribution design choices for telco workloads.
Footprint
The best practice when using vSAN as shared storage for all clusters is to use a minimum of four hosts per cluster for the initial deployment, for a total of 12 hosts. This balances the implementation footprint against resiliency, while maintaining the operational requirements for each Pod. A mid-level class of servers can be used in this configuration, because the design maximizes scale flexibility.
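As a rough illustration, the initial footprint and the resulting usable vSAN capacity can be estimated as in the following sketch. The per-host raw capacity, the FTT=1 RAID-1 mirroring policy, and the slack-space reserve are assumptions for illustration only and should be replaced with values from the actual hardware and storage policy design.

```python
# Rough footprint and usable vSAN capacity estimate (illustrative values only).

HOSTS_PER_CLUSTER = 4                  # vSAN best-practice minimum per Pod
CLUSTERS = ("Management", "Edge", "Resource")

RAW_TB_PER_HOST = 8.0                  # assumed raw capacity per host
FTT1_MIRROR_OVERHEAD = 2.0             # RAID-1 mirroring with FTT=1 consumes 2x raw capacity
SLACK_RESERVE = 0.30                   # assumed slack space kept free for vSAN operations

total_hosts = HOSTS_PER_CLUSTER * len(CLUSTERS)

for cluster in CLUSTERS:
    raw_tb = HOSTS_PER_CLUSTER * RAW_TB_PER_HOST
    usable_tb = raw_tb / FTT1_MIRROR_OVERHEAD * (1 - SLACK_RESERVE)
    print(f"{cluster} Pod: {HOSTS_PER_CLUSTER} hosts, "
          f"{raw_tb:.1f} TB raw, ~{usable_tb:.1f} TB usable (FTT=1)")

print(f"Initial deployment footprint: {total_hosts} hosts")
```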
Scale Flexibility
The separation of Pods provides maximum scalability to the design. Individual Pods can be scaled to meet the target services, delivery, and topology objectives.
The Resource and Edge clusters are sized according to the VNFs and their respective networking requirements. CSPs must work with the VNF vendors to gather the requirements for the VNFs to be deployed. This information is typically available in deployment and sizing guides. As more tenants are provisioned, CSPs must provide additional resources to the Resource Pod to support the VNF growth. The increase in VNF workloads in the Resource Pod can lead to an increase in North-South network traffic, which in turn requires adding more compute resources to the Edge Pod to scale up the Edge Nodes. CSPs must closely monitor and manage the resource consumption and capacity that is available to the Edge Pod when the Edge Nodes are scaled. For higher performance, an Edge Pod can also consist of bare metal Edges that are grouped in an Edge Cluster in NSX-T Data Center.
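The per-VNF figures gathered from vendor deployment and sizing guides can be rolled up into an initial Resource Pod host count. The sketch below is illustrative only; the VNF requirements, per-host capacity, and overcommit ratio are hypothetical placeholders, and data-plane VNF components typically require dedicated, non-overcommitted cores.

```python
import math

# Hypothetical VNF requirements taken from vendor deployment and sizing guides.
vnfs = [
    # (name, vCPUs per instance, memory GB per instance, instances)
    ("vEPC-control", 16, 64, 2),
    ("vEPC-user-plane", 24, 96, 4),
    ("vIMS", 12, 48, 2),
]

# Assumed usable capacity per Resource Pod host after hypervisor overhead.
HOST_CORES = 44
HOST_MEMORY_GB = 512
CPU_OVERCOMMIT = 1.0   # keep 1:1 for data-plane workloads

total_vcpus = sum(vcpus * count for _, vcpus, _, count in vnfs)
total_mem_gb = sum(mem * count for _, _, mem, count in vnfs)

hosts_for_cpu = math.ceil(total_vcpus / (HOST_CORES * CPU_OVERCOMMIT))
hosts_for_mem = math.ceil(total_mem_gb / HOST_MEMORY_GB)

# Add one host of headroom so the cluster tolerates a host failure or maintenance.
resource_pod_hosts = max(hosts_for_cpu, hosts_for_mem) + 1
print(f"Estimated Resource Pod size: {resource_pod_hosts} hosts "
      f"({total_vcpus} vCPUs, {total_mem_gb} GB memory requested)")
```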
A high level of resiliency is provided by using four-host clusters with vSAN storage in the initial deployment. Four-host clusters also ensure a highly available Management Pod design, because clustered management components, such as the vCenter Server active, standby, and witness nodes, can be placed on separate hosts. The same design principle applies to clustered OpenStack components such as database nodes.
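Host-level separation of clustered management components is normally enforced with DRS anti-affinity rules. The following pyVmomi sketch shows the general shape of such a rule for the three vCenter HA nodes; the connection details, cluster name, and VM names are assumptions, and the same pattern applies to clustered OpenStack components such as database nodes.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connection details and object names below are illustrative placeholders.
si = SmartConnect(host="mgmt-vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster-01")
vcha_vms = [find_by_name(vim.VirtualMachine, name)
            for name in ("vcenter-active", "vcenter-passive", "vcenter-witness")]

# DRS anti-affinity rule that keeps the clustered nodes on separate hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="vcha-node-separation",
                                        enabled=True, vm=vcha_vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```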
The initial number and sizing of management components in the Management Pod should be planned in advance. As a result, the capacity that is required for the Management Pod can remain steady. When planning the storage capacity of the Management Pod, the CSP should consider the operational headroom for VNF files, snapshots, backups, VM templates, OS images, upgrade files, and log files.
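One way to express this headroom is as a buffer on top of the steady-state capacity of the management components. The component names and all figures below are illustrative assumptions, not sizing guidance.

```python
# Illustrative Management Pod storage plan (all values are assumptions).
management_components_gb = {
    "vCenter Server instances": 1500,
    "NSX-T Managers": 900,
    "VMware Integrated OpenStack": 700,
    "Operations management components": 1200,
}

# Assumed operational headroom on top of the steady-state footprint.
fixed_headroom_gb = {
    "VNF files and OS images": 500,
    "VM templates": 300,
}
proportional_headroom = {
    "Snapshots and backups": 0.25,   # fraction of the steady-state footprint
    "Upgrade and log files": 0.10,
}

base_gb = sum(management_components_gb.values())
headroom_gb = sum(fixed_headroom_gb.values()) + base_gb * sum(proportional_headroom.values())

print(f"Steady-state management footprint: {base_gb} GB")
print(f"Planned operational headroom:      {headroom_gb:.0f} GB")
print(f"Management Pod usable capacity:    {base_gb + headroom_gb:.0f} GB (minimum)")
```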
When the Resource Pod is scaled up by adding hosts to the cluster, the newly added resources are automatically pooled, adding capacity to the compute cluster. New tenants can be provisioned to consume resources from the total pooled capacity that is available. Allocation settings for existing tenants must be modified before they can benefit from the increased resource availability. OpenStack compute nodes are added to scale out the Resource Pod. Additional compute nodes can be added by using the Integrated OpenStack Manager vSphere Web Client extension.
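After new compute nodes are added, the pooled capacity can be verified against the Nova hypervisor inventory, for example with a sketch like the one below. It assumes openstacksdk is installed, the cloud name exists in clouds.yaml, and that usage fields are reported by the Nova microversion in use.

```python
import openstack

# Connect using a cloud entry from clouds.yaml (the cloud name is an assumption).
conn = openstack.connect(cloud="vio-resource-pod")

total_vcpus = used_vcpus = total_mem = used_mem = 0
for hv in conn.compute.hypervisors(details=True):
    total_vcpus += hv.vcpus or 0
    used_vcpus += hv.vcpus_used or 0
    total_mem += hv.memory_size or 0      # MB
    used_mem += hv.memory_used or 0       # MB

print(f"Pooled vCPU capacity:   {used_vcpus}/{total_vcpus} used")
print(f"Pooled memory capacity: {used_mem / 1024:.0f}/{total_mem / 1024:.0f} GB used")
```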
In the Edge Pod, additional Edge Nodes can be deployed in the cluster, in conjunction with the physical leaf-spine fabric, to meet higher capacity needs.
Performance
The platform is optimized for workload acceleration, and the Three-Pod design offers maximum flexibility to achieve it. The configuration can support use cases with high bandwidth requirements. A dedicated DPDK-accelerated Resource Pod, or a hybrid Resource Pod that mixes standard and accelerated hosts, can be considered when designing for workload distribution. The separate Edge Pod provides not only the separation benefit, but also VM-based and bare metal options for the Edge Nodes.
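In a hybrid Resource Pod, the accelerated hosts are commonly segregated with a Nova host aggregate so that only flavors tagged for DPDK land on them. The sketch below illustrates the idea with openstacksdk; the cloud name, aggregate name, host names, and metadata key are assumptions, the aggregate calls may differ between SDK versions, and flavors would additionally need matching extra specs (for example, evaluated by the AggregateInstanceExtraSpecsFilter).

```python
import openstack

conn = openstack.connect(cloud="vio-resource-pod")   # cloud name is an assumption

# Group the DPDK-accelerated hosts into their own aggregate and tag it.
aggregate = conn.compute.create_aggregate(name="dpdk-accelerated")
conn.compute.set_aggregate_metadata(aggregate, {"dpdk": "true"})

for host in ("compute-dpdk-01", "compute-dpdk-02"):   # hypothetical host names
    conn.compute.add_host_to_aggregate(aggregate, host)

# Flavors intended for accelerated VNF components would carry a matching
# extra spec (e.g. aggregate_instance_extra_specs:dpdk=true) plus the usual
# huge-page and CPU-pinning properties, so the scheduler places them only
# on the accelerated hosts.
```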
Function Distribution
Network function designs are evolving toward disaggregated control and data plane functions. This separation also allows data plane functions to be distributed closer to the edge of the network, with centralized control and management planes. A separated Resource or Edge Pod design allows for different control plane and data plane configurations and scale designs, depending on the service offerings.
Operations Management
The Management Pod provides centralized operations functions across the deployment topology, whether it spans a single data center or distributed data centers.