Multitenancy defines the isolation of resources and networks that is required to deliver applications with a guaranteed quality of service. Because multiple tenants share the same resource infrastructure, secure multitenancy can be enabled by using VMware Integrated OpenStack in a single cloud island and across distributed clouds. In addition to the built-in workload and resource optimization capabilities, predictive optimization can be enabled with analytics by using features such as vSphere DRS.

CSPs can converge their resource infrastructures across their IT and Network clouds, enabling a multi-tenant IaaS realization. Consumption models can serve both internal and external tenants, who deploy and operate their respective workloads and services over the common shared infrastructure.

The unit of tenancy within the scope of a VIO deployment is a Project. A Project is a composition of dedicated compute, storage, and network resources together with the workloads deployed on them. Each tenant is associated with a set of operational policies and SLAs. A Project can be bound to a single site or shared across many sites.

Using the Tenant Virtual Data Center feature of VMware Integrated OpenStack Carrier Edition, a CSP can create virtual data centers for tenants on different compute nodes, each offering a specific service level agreement for a telecommunication workload. By using a Tenant Virtual Data Center to allocate CPU and memory for an OpenStack project (tenant) on a compute node, the CSP provides a resource guarantee for tenants and avoids noisy neighbor scenarios in a multi-tenant environment.

The design objectives include considerations for the end-to-end compute and network resource isolation from the Core data center to the Edge data centers.

Management Plane

The management plane functions reside in the Edge Management Domain at the Core data center and are responsible for the orchestration of resources and operations. They are local to each cloud instance, providing the infrastructure management, network management, and operations management capabilities.

Resource isolation for the compute and networking design is enabled jointly by vCenter Server, NSX Manager, and VMware Integrated OpenStack. Irrespective of the Domain deployment configuration, VMware Integrated OpenStack provides the abstraction layer for multi-tenancy. vCenter Server provides the infrastructure for fine-grained allocation and partitioning of compute and storage resources, whereas NSX-T Data Center creates the network virtualization layer.

The concept of tenancy also introduces multiple administrative ownerships. The cloud provider (the CSP admin) can create a resource pool allocation for a tenant, who in turn can manage the underlying infrastructure and overlay networking. In VMware Integrated OpenStack, multiple tenants can be defined with assigned RBAC privileges to manage compute and network resources as well as VNF onboarding.

Compute Isolation

Allocation of compute and storage resources ensures that each tenant has an optimal footprint in which to deploy workloads, with room for expansion to meet future demand.

Tenant vDCs provide a secure multitenant environment to deploy VNFs. Compute resources are defined as resource pools when a Tenant vDC is created. The resource pool is an allocation of memory and CPU from the available shared infrastructure, assignable to a Tenant vDC. More resources can be added to a pool as required. The Tenant vDC can also stretch across multiple hosts in a resource cluster residing in different physical racks. The same constructs for resource management are used to implement multitenancy in the Edge data centers.
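The pool allocation described above can be sketched as a simple capacity model. This is an illustrative sketch only; the class and method names (`ResourceCluster`, `allocate`, `expand`) are hypothetical and not a VMware or OpenStack API.

```python
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """CPU (MHz) and memory (MB) carved out of the shared infrastructure."""
    cpu_mhz: int
    memory_mb: int


@dataclass
class ResourceCluster:
    """Shared resource cluster from which Tenant vDC pools are allocated."""
    cpu_mhz: int
    memory_mb: int
    pools: dict = field(default_factory=dict)  # Tenant vDC name -> ResourcePool

    def _used(self):
        return (sum(p.cpu_mhz for p in self.pools.values()),
                sum(p.memory_mb for p in self.pools.values()))

    def allocate(self, tenant_vdc: str, cpu_mhz: int, memory_mb: int) -> ResourcePool:
        """Create a pool for a Tenant vDC, refusing to oversubscribe reservations."""
        used_cpu, used_mem = self._used()
        if used_cpu + cpu_mhz > self.cpu_mhz or used_mem + memory_mb > self.memory_mb:
            raise ValueError(f"insufficient capacity for {tenant_vdc}")
        self.pools[tenant_vdc] = ResourcePool(cpu_mhz, memory_mb)
        return self.pools[tenant_vdc]

    def expand(self, tenant_vdc: str, cpu_mhz: int = 0, memory_mb: int = 0) -> None:
        """More resources can be added to an existing pool as demand grows."""
        used_cpu, used_mem = self._used()
        if used_cpu + cpu_mhz > self.cpu_mhz or used_mem + memory_mb > self.memory_mb:
            raise ValueError(f"insufficient capacity to expand {tenant_vdc}")
        self.pools[tenant_vdc].cpu_mhz += cpu_mhz
        self.pools[tenant_vdc].memory_mb += memory_mb
```

The key property the model captures is that the sum of all pool reservations can never exceed the capacity of the shared cluster, which is what makes the allocation a guarantee rather than a best-effort share.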

Although a resource pool can be sub-segmented into smaller child pools, nested resource pools are not recommended in this design.

Network Isolation

The advanced networking model of NSX-T Data Center provides fully isolated and secure traffic paths across workloads and the tenant switching and routing fabric. Advanced security policies and rules can be applied at the VM boundary to further control unwarranted traffic.

NSX-T Data Center introduces a two-tiered routing architecture that enables the management of networks at the provider (Tier-0) and tenant (Tier-1) tiers. The provider routing tier attaches to the physical network for North-South traffic, while the tenant routing tier connects to the provider Tier-0 and manages East-West communication. The Tier-0 router provides traffic termination to the cloud physical gateways and the existing CSP underlay networks, for inter-cloud traffic and communication to remote Edge data centers.

Each Tenant vDC has a single Tier-1 distributed router that provides the intra-tenant routing capabilities. The Tier-1 router can also be enabled for stateful services such as firewall, NAT, and load balancing. VMs belonging to a tenant can be attached to multiple logical interfaces for Layer 2 and Layer 3 connectivity.
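The two-tier topology can be modeled schematically as follows. This is an illustrative data model only, not the NSX-T API; the class names and the single-Tier-1-per-Tenant-vDC constraint encode the design described above.

```python
from dataclasses import dataclass, field


@dataclass
class Tier1Router:
    """Per-Tenant-vDC router: intra-tenant (East-West) routing, plus
    optional stateful services such as NAT, firewall, and load balancer."""
    tenant_vdc: str
    stateful_services: set = field(default_factory=set)  # e.g. {"nat", "firewall", "lb"}


@dataclass
class Tier0Router:
    """Provider router: North-South termination to the physical network
    and the CSP underlay."""
    name: str
    tier1_attachments: list = field(default_factory=list)

    def attach(self, t1: Tier1Router) -> None:
        # Many tenant Tier-1 routers share one provider Tier-0, but each
        # Tenant vDC has exactly one Tier-1 router.
        if any(a.tenant_vdc == t1.tenant_vdc for a in self.tier1_attachments):
            raise ValueError(f"{t1.tenant_vdc} already has a Tier-1 router")
        self.tier1_attachments.append(t1)
```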

By using VMware Integrated OpenStack as the IaaS layer, user profiles and RBAC policies can be used to enable and restrict access to the networking fabric at the Tier-1 level.

QoS Resource Allocation

To avoid contention and starvation, QoS policies along with compute, storage, and network isolation policies must be applied consistently to the workloads.

The CSP admin can allocate and reserve compute resources for tenants by using Tenant vDCs. Every Tenant vDC is associated with a vSphere resource pool whose resource settings are managed from VMware Integrated OpenStack. This ensures that every Tenant vDC consumes the resources to which it is entitled without exceeding the infrastructure resource limits, such as CPU clock cycles and total memory.

QoS policies can be applied to VMs so that they receive a fair share of resources across the infrastructure pool. Each VM configuration is taken from a template called a Flavor. QoS can be configured by using Flavor metadata to allocate CPU (MHz), memory (MB), storage (IOPS), and virtual interfaces (Mbps).
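As an illustration, QoS settings of this kind are typically expressed as flavor extra specs. The `quota:*` key names below follow the convention used by the vCenter Nova driver, but the exact keys and values are assumptions to be verified against the documentation of the deployed VIO release.

```python
# Illustrative flavor metadata for a data plane VNF component.
# Key names follow the quota:* convention of the vCenter Nova driver;
# verify them against the VIO release documentation before use.
DATAPLANE_EXTRA_SPECS = {
    "quota:cpu_reservation": 8000,      # MHz guaranteed at launch
    "quota:cpu_limit": 8000,            # MHz upper boundary
    "quota:memory_reservation": 16384,  # MB guaranteed
    "quota:memory_limit": 16384,        # MB upper boundary
    "quota:disk_io_limit": 5000,        # IOPS upper boundary
    "quota:vif_limit": 1000,            # Mbps per virtual interface
}


def is_fully_reserved(specs: dict, resource: str) -> bool:
    """True when reservation equals limit for a resource ("cpu" or "memory"),
    meaning the VM's entitlement is fixed rather than elastic."""
    reservation = specs.get(f"quota:{resource}_reservation")
    return reservation is not None and reservation == specs.get(f"quota:{resource}_limit")
```

Setting reservation equal to limit, as in this example, pins the VM's entitlement, which is the pattern used for data plane workloads later in this section.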

QoS can be shaped by setting boundary parameters that control the elasticity and priority of the resources that are assigned to the VNF component executing within the VM.

Reservation is the minimum guarantee: it ensures that the configured amount of a resource is available to a VM when it is launched.

Limit is the upper boundary. Limits should be used with caution in a production environment, because they prevent a VM from bursting utilization beyond the configured boundary.

Shares govern the distribution of resources under contention and can be used to prioritize certain workloads over others. If resources are over-provisioned across VMs and contention occurs, each VM receives a resource assignment proportional to its shares.
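The proportional behavior of shares can be illustrated with a small calculation. This is a simplified sketch of share-based arbitration; the actual vSphere scheduler also honors reservations and limits before shares are applied.

```python
def share_allocation(capacity_mhz: int, shares: dict) -> dict:
    """Split a contended resource across VMs in proportion to their shares.

    Simplified model: every VM is assumed to demand more than its
    proportional slice, so the entire capacity is divided by shares.
    """
    total_shares = sum(shares.values())
    return {vm: capacity_mhz * s // total_shares for vm, s in shares.items()}


# With 12000 MHz of contended CPU, a VM holding twice the shares of its
# neighbor receives twice the allocation.
allocation = share_allocation(12000, {"vnf-a": 2000, "vnf-b": 1000})
```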

For control plane workload functions, a higher degree of elasticity is acceptable and memory can be reserved based on the workload requirement. For data plane intensive workloads, both CPU and memory must be fully reserved. Storage I/O and network throughput reservations must be determined based on the needs of the VNF.
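The reservation policy above can be encoded as a check on flavor extra specs. This is a hedged sketch: the `quota:*` key names follow the vCenter Nova driver convention and should be verified against the VIO release documentation, and the helper itself is hypothetical.

```python
def meets_reservation_policy(workload_class: str, specs: dict) -> bool:
    """Check flavor extra specs against the per-workload reservation policy:
    data plane requires full CPU and memory reservation, control plane
    tolerates elasticity but should carry a memory reservation."""
    def fully_reserved(resource: str) -> bool:
        reservation = specs.get(f"quota:{resource}_reservation")
        return reservation is not None and reservation == specs.get(f"quota:{resource}_limit")

    if workload_class == "data_plane":
        return fully_reserved("cpu") and fully_reserved("memory")
    if workload_class == "control_plane":
        return specs.get("quota:memory_reservation", 0) > 0
    raise ValueError(f"unknown workload class: {workload_class}")
```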