Tanzu Kubernetes clusters are deployed in the compute workload domains.
The Telco Cloud Platform consumes resources from the compute workload domain. Resource pools provide guaranteed resource availability to workloads. Resource pools are elastic; more resources can be added as the capacity of the workload domain grows. A resource pool can map to a single vSphere cluster or stretch across multiple vSphere clusters. The stretch feature is not available without the use of a VIM such as VMware Cloud Director. Each Kubernetes cluster can be mapped to a resource pool. A resource pool can be dedicated to a single Kubernetes cluster or shared across multiple clusters.
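The cluster-to-resource-pool mapping is typically expressed when the Tanzu Kubernetes cluster is created. The following is a minimal sketch of a Tanzu Kubernetes Grid cluster configuration excerpt that targets a dedicated resource pool; the cluster name, inventory paths, and node counts are illustrative assumptions, not values mandated by this design.

```yaml
# Hypothetical TKG cluster configuration excerpt (flat key/value YAML).
# All names and inventory paths below are placeholders.
CLUSTER_NAME: upf-workload-cluster
CLUSTER_PLAN: prod
VSPHERE_DATACENTER: /telco-compute-dc
# Dedicated resource pool in the compute workload domain
# (1:1 mapping for a data plane intensive cluster).
VSPHERE_RESOURCE_POOL: /telco-compute-dc/host/compute-cluster-01/Resources/rp-upf-cluster
VSPHERE_DATASTORE: /telco-compute-dc/datastore/vsan-datastore-01
VSPHERE_NETWORK: tkg-cluster-mgmt
VSPHERE_FOLDER: /telco-compute-dc/vm/tkg-clusters
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 4
```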
| Design Recommendation | Design Justification | Design Implication |
|---|---|---|
| Map the Tanzu Standard for Telco Kubernetes Clusters to vSphere Resource Pools in the compute workload domain. | Enables resource guarantees and resource isolation. | During resource contention, workloads can be starved for resources and experience performance degradation. Note: Proactively perform monitoring and capacity management, and add capacity before contention occurs. |
| Create dedicated DHCP IP subnet pools for the Tanzu Standard for Telco Cluster Management network. | | |
| Place the Tanzu Standard for Telco Kubernetes cluster management network on a virtual network that is routable to the management network for vSphere, Harbor, and the repository mirror. | | |
| Enable 1:1 Kubernetes Cluster to Resource Pool mapping for data plane intensive workloads. | Reduced resource contention leads to better performance. Better resource isolation, resource guarantees, and reporting. | |
| Enable N:1 Kubernetes Cluster to Resource Pool mapping for control plane workloads where resources are shared. | Efficient use of server resources. High workload density. | Use vRealize Operations Manager to provide recommendations on the required resources by analyzing performance statistics. Consider the total number of ESXi hosts and the Kubernetes cluster limits. |
Management and Workload Kubernetes Clusters
A Kubernetes cluster in the Telco Cloud Platform consists of etcd and the Kubernetes control and data planes.
Etcd must run in cluster mode with an odd number of cluster members to establish a quorum. A 3-node cluster tolerates the loss of a single member, while a 5-node cluster tolerates the loss of two members. In a stacked mode deployment, etcd availability determines the number of Kubernetes Control nodes.
Control Plane node: The Kubernetes control plane must run in redundant mode to avoid a single point of failure. To improve API availability, an HA proxy is placed in front of the Control Plane nodes. The load balancer must perform health checks to ensure API server availability. The following table lists the HA characteristics of the Control node components:
| Component | Availability |
|---|---|
| API Server | Active/Active |
| Kube-controller-manager | Active/Passive |
| Kube-scheduler | Active/Passive |
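A minimal sketch of such a load balancer configuration is shown below, assuming HAProxy in TCP mode in front of three Control Plane nodes; the node addresses and the use of the API server /readyz endpoint for health checks are illustrative assumptions.

```
# Hypothetical HAProxy configuration for the Kubernetes API server.
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-apiserver-nodes

backend kube-apiserver-nodes
    mode tcp
    # Health-check each API server over HTTPS before sending traffic to it.
    option httpchk GET /readyz
    http-check expect status 200
    default-server inter 5s fall 3 rise 2 check check-ssl verify none
    server control-plane-1 192.168.100.11:6443
    server control-plane-2 192.168.100.12:6443
    server control-plane-3 192.168.100.13:6443
```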
Do not place CNF workloads on the control plane nodes.
Worker Node
5G workloads are classified based on their performance. Generic workloads such as web services, lightweight databases, and monitoring dashboards are supported adequately using standard configurations on Kubernetes nodes. In addition to the recommendations outlined in the Tuning vCloud NFV for Data Plane Intensive Workloads whitepaper, data plane workload performance can benefit from further tuning in the following areas:
NUMA Topology
CPU Core Affinity
Huge Pages
NUMA Topology: When deploying Kubernetes worker nodes that host high data bandwidth applications, ensure that the processor, memory, and vNIC are vertically aligned and remain within a single NUMA boundary.
The Topology Manager is a component of the kubelet that provides NUMA awareness to Kubernetes at pod admission time. The Topology Manager determines the best locality of resources by pulling topology hints from the Device Manager and the CPU Manager. Pods are then placed based on this topology information to ensure optimal performance.
Topology Manager is optional if the NUMA placement best practices are followed during Kubernetes cluster creation.
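As a sketch of how Topology Manager could be enabled, the following kubelet configuration fragment uses the standard single-numa-node policy; whether this policy (and the pod-level scope) is appropriate depends on the CNF requirements and the Kubernetes version in use.

```yaml
# Hypothetical kubelet configuration fragment enabling Topology Manager.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require CPU, memory, and device allocations to come from a single NUMA node.
topologyManagerPolicy: single-numa-node
# Align resources for the whole pod rather than per container (newer releases).
topologyManagerScope: pod
```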
CPU Core Affinity: CPU pinning can be achieved in different ways; the Kubernetes built-in CPU Manager is the most common. The CPU Manager implementation is based on cpuset. When a worker node initializes, its CPU resources are assigned to a shared CPU pool. All non-exclusive CPU containers run on the CPUs in the shared pool. When the kubelet creates a container requesting guaranteed CPUs, the CPUs for that container are removed from the shared pool and assigned exclusively for the life cycle of the container. When a container with exclusive CPUs is terminated, its CPUs are added back to the shared CPU pool.
The CPU manager includes the following two policies:
None: Default policy. The kubelet uses CFS quota to enforce pod CPU limits. The workload can move between different CPU cores depending on the load on the Pod and the available capacity on the worker node.
Static: With the static policy enabled, an integer CPU request from a Pod in the Guaranteed QoS class results in the container being allocated whole CPUs exclusively, and no other container can be scheduled on those CPUs.
For data plane intensive workloads, the CPU manager policy must be set to static to guarantee an exclusive CPU core on the worker node.
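As a hedged sketch, the following shows how the static policy could be set in the kubelet and how a data plane container obtains exclusive cores through a Guaranteed QoS Pod with integer CPU requests; the reservation sizes, image name, and CPU counts are placeholder assumptions.

```yaml
# Hypothetical kubelet configuration fragment enabling the static CPU manager policy.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
# Reserve resources for system daemons and the kubelet so that exclusive
# allocations do not starve node services (sizes are placeholders).
systemReserved:
  cpu: "1"
  memory: "1Gi"
kubeReserved:
  cpu: "1"
  memory: "2Gi"
---
# A container receives exclusive CPUs only when its Pod is in the Guaranteed QoS
# class and requests an integer number of CPUs.
apiVersion: v1
kind: Pod
metadata:
  name: dataplane-cnf-example
spec:
  containers:
  - name: packet-processor
    image: registry.example.com/cnf/packet-processor:1.0   # placeholder image
    resources:
      requests:
        cpu: "4"
        memory: "8Gi"
      limits:
        cpu: "4"
        memory: "8Gi"
```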
CPU Manager for Kubernetes (CMK) is another tool used by some CNF vendors to assign core and NUMA affinity for data plane workloads. Unlike the built-in CPU Manager, CMK is not bundled with the Kubernetes binaries and requires a separate download and installation. Use CMK instead of the built-in CPU Manager only if required by the CNF vendor.
Huge Pages: For Telco workloads, the default huge page size can be 2 MB or 1 GB. To report its huge page capacity, the worker node determines the supported huge page sizes by parsing the /sys/kernel/mm/hugepages/hugepages-{size}kB directory on the host. Huge pages must be pre-allocated for maximum performance. Pre-allocated huge pages reduce the amount of available memory on a worker node. A node can only pre-allocate huge pages for the default size. Transparent Huge Pages must not be enabled.
Container workloads requiring huge pages use hugepages-<hugepagesize> in the Pod specification. As of Kubernetes 1.18, multiple huge page sizes are supported per Pod. Huge Pages allocation occurs at the pod level.
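The following is a minimal Pod specification sketch requesting 1 GB huge pages; the workload name, image, and sizes are placeholders. Huge page requests must equal their limits, and an emptyDir volume with medium HugePages exposes the pages inside the container.

```yaml
# Hypothetical Pod requesting pre-allocated 1 GB huge pages.
apiVersion: v1
kind: Pod
metadata:
  name: upf-dataplane-example
spec:
  containers:
  - name: upf
    image: registry.example.com/cnf/upf:1.0   # placeholder image
    resources:
      requests:
        cpu: "4"
        memory: "4Gi"
        hugepages-1Gi: "8Gi"
      limits:
        cpu: "4"
        memory: "4Gi"
        hugepages-1Gi: "8Gi"
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
```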
Recommended Tuning Details:
| Design Recommendation | Design Justification | Design Implication |
|---|---|---|
| Deploy three Control nodes per Kubernetes cluster to ensure full redundancy. | A 3-node cluster tolerates the loss of a single member. | |
| Limit each Kubernetes cluster to a single Linux distribution and version. | Reduces operational overhead. | |
| Install and activate the NTP clock synchronization service with custom NTP servers. | Kubernetes and its components rely on the system clock to track events, logs, state, and so on. | None |
| Ensure that swap is disabled on all Kubernetes cluster nodes. | Swap decreases the overall performance of the cluster. | None |
| Vertically align the processor, memory, and vNIC, and keep them within a single NUMA boundary for data plane intensive workloads. | | |
| Set the CPU manager policy to static for data plane intensive workloads. | When the CPU manager is used for CPU affinity, static mode is required to guarantee exclusive CPU cores on the worker node for data-intensive workloads. | |
| When enabling the static CPU manager policy, set aside sufficient CPU resources for kubelet operation. | | |
| Enable huge page allocation at boot time (see the boot configuration sketch after this table). | | |
| Set the default huge page size to 1 GB. Set the overcommit size to 0. | | |
| Mount the hugetlbfs file system type on the root file system. | | |
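As a hedged sketch of the huge page recommendations above (boot-time allocation, a 1 GB default size, zero overcommit, and the hugetlbfs mount), the following host configuration fragments show one way these settings could be applied; the exact file locations and the page count are illustrative assumptions that must be sized to the node and workload.

```
# Hypothetical /etc/default/grub entry: pre-allocate 1 GB huge pages at boot.
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=64"

# Hypothetical sysctl setting: do not allow runtime huge page overcommit
# (for example in /etc/sysctl.d/90-hugepages.conf).
vm.nr_overcommit_hugepages = 0

# Hypothetical /etc/fstab entry: mount hugetlbfs so the pages can be consumed.
hugetlbfs  /dev/hugepages  hugetlbfs  pagesize=1G  0  0
```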