The vCloud NFV infrastructure components use ESXi to virtualize the compute resources, NSX for vSphere to provide virtual networking, and vSAN for storage. Together, these components create the virtualization layer described by the ETSI NFV framework.

The virtualization layer of the NFVI provides the following functions:

Physical Resource Abstraction. The software component layers between the physical hardware and the VNFs abstract the physical resources, providing a standardized, software-based platform for running VNFs regardless of the underlying hardware. As long as the CSP uses certified physical components, VNFs can be deployed at the point of presence (POP), in a distributed data center, or in a centralized data center.

Physical Resource Pooling. Physical resource pooling occurs when vCloud NFV presents a logical virtualization layer to VNFs, combining the physical resources into one or more resource pools. Resource pooling, together with an intelligent scheduler, facilitates optimal resource utilization, load distribution, high availability, and scalability. This allows for fine-grained resource allocation and control of pooled resources based on the specific VNF requirements.
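As an illustration, the following sketch uses the pyVmomi Python SDK to carve a child resource pool out of a cluster. The vCenter Server address, credentials, cluster name, and allocation values are hypothetical placeholders, not values prescribed by this architecture.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint and credentials, for illustration only.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate a cluster by name (the name is an assumption).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vnf-cluster-01")
view.Destroy()

# Carve a child resource pool out of the cluster's root pool, reserving CPU
# and memory and granting high memory shares to the VNFs placed inside it.
spec = vim.ResourceConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(
        reservation=8000,            # MHz guaranteed to this pool
        limit=-1,                    # -1 means no upper limit
        expandableReservation=True,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal)),
    memoryAllocation=vim.ResourceAllocationInfo(
        reservation=16384,           # MB guaranteed to this pool
        limit=-1,
        expandableReservation=True,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high)))
pool = cluster.resourcePool.CreateResourcePool(name="vnf-pool", spec=spec)
Disconnect(si)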

Physical Resource Sharing. To truly benefit from cloud economies, the resources pooled and abstracted by the virtualization layer must be shared between various network functions. The virtualization layer provides the functionality required for VNFs to be scheduled on the same compute resources, co-located on shared storage, and to have network capacity divided among them. The virtualization layer also ensures fairness in resource utilization and enforces usage policies.
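Fairness among shared workloads is typically expressed through shares, reservations, and limits. The short pyVmomi sketch below, again with hypothetical names and credentials, raises the CPU shares of one VNF component so that it receives proportionally more CPU time when hosts are contended.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint, credentials, and VM name.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vnf-component-01")
view.Destroy()

# Raise the CPU shares of this VNF component; shares only take effect when
# hosts are contended, which is exactly when fairness matters.
spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high)))
task = vm.ReconfigVM_Task(spec=spec)
Disconnect(si)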

The following components constitute the virtualization layer in the NFVI domain:

Compute - VMware ESXi

ESXi is the hypervisor software that abstracts physical x86 server resources from the VNFs. Each compute server is referred to as a host in the virtual environment. ESXi hosts are the fundamental compute building blocks of vCloud NFV. ESXi host resources can be grouped together to provide an aggregate set of resources in the virtual environment, called a cluster. Clusters are used to logically separate management components from VNF components and are discussed at length in the Reference Architecture section of this document.
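The sketch below uses the pyVmomi Python SDK to enumerate clusters and their member hosts, reporting the aggregate capacity each cluster presents. The vCenter Server address and credentials are hypothetical placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint and credentials.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every cluster and report its aggregate capacity and member hosts.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    s = cluster.summary
    print(f"{cluster.name}: {s.numHosts} hosts, {s.totalCpu} MHz CPU, "
          f"{s.totalMemory // (1024 ** 3)} GiB RAM")
    for host in cluster.host:
        print(f"  host: {host.name}")
view.Destroy()
Disconnect(si)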

ESXi is responsible for carving out the resources that VNFs and services need. ESXi is also the implementation point for policy-based resource allocation and separation, through VMware vSphere® Distributed Resource Scheduler™ (DRS), an advanced scheduler that balances resource usage and ensures fairness in a shared environment.
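The following sketch shows one way DRS could be enabled in fully automated mode through the vSphere API using pyVmomi; the endpoint, credentials, and cluster name are hypothetical.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint, credentials, and cluster name.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vnf-cluster-01")
view.Destroy()

# Enable DRS in fully automated mode: the scheduler both places new VMs and
# live-migrates running ones to keep resource usage balanced across hosts.
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
        vmotionRate=3))  # migration threshold, 1-5
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)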

Since ESXi hosts VNF components in the form of virtual machines (VMs), it is the logical place to implement VM-based high availability, snapshotting, migration with VMware vSphere® vMotion®, file-based backups, and VM placement rules. ESXi hosts are managed by vCenter Server Appliance, described as one of the VIM components in the VIM Components section of this document.
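A VM placement rule of this kind can be expressed as a DRS anti-affinity rule. The pyVmomi sketch below, with hypothetical VM and cluster names, keeps two redundant VNF components on separate hosts so that a single host failure cannot take out both.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint, credentials, and VM names.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vms = [v for v in view.view if v.name in ("vnf-fw-a", "vnf-fw-b")]
view.Destroy()
cluster = vms[0].resourcePool.owner  # the cluster that runs these VMs

# Anti-affinity rule: never place the two redundant VNF components on the
# same host, so a single host failure cannot take both down.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="vnf-fw-anti-affinity", enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)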

One of the new high availability mechanisms available with VMware vCloud NFV 2.0 is Proactive High Availability (Proactive HA). While VMware vSphere® High Availability can rapidly restore VNF components after a host fails, Proactive HA integrates tightly with server health monitoring systems, so VNF components can be migrated away from a host whose health is degrading before a failure occurs. This function is realized using vSphere vMotion to move live, running workloads to healthy hosts. vSphere vMotion is also used to facilitate maintenance tasks and load balancing among hosts in a cluster, with minimal or no service disruption.
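The vSphere API exposes this migration primitive directly. The following pyVmomi sketch (hypothetical VM and host names) live-migrates a running VNF component to a healthy host; configuring Proactive HA itself involves additional health provider integration not shown here.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint, credentials, VM name, and host name.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vim_type, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find(vim.VirtualMachine, "vnf-component-01")
healthy_host = find(vim.HostSystem, "esxi-02.example.com")

# Live-migrate the running workload to a healthy host; the VM keeps running
# throughout the move.
task = vm.MigrateVM_Task(
    host=healthy_host, priority=vim.VirtualMachine.MovePriority.highPriority)
Disconnect(si)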

Storage - VMware vSAN

vSAN is the native vSphere storage component in the NFVI virtualization layer, providing a shared storage pool for the hosts in a cluster. vSAN creates this shared storage by aggregating the local disks and flash drives attached to each host. Although third-party storage solutions with storage replication adapters that meet VMware storage compatibility guidelines are also supported, this reference architecture discusses only the vSAN storage solution.

As a best practice, configure each cluster within vCloud NFV to use a shared storage solution. When the hosts in a cluster use shared storage, manageability and agility improve.
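A quick way to verify this is to check that every host in a cluster mounts the same vSAN datastore and to report the remaining capacity. The pyVmomi sketch below uses hypothetical connection details.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint and credentials.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# For each cluster, report its vSAN datastores, the free capacity, and
# whether every host in the cluster mounts the datastore.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    for ds in cluster.datastore:
        if ds.summary.type == "vsan":
            free_pct = 100 * ds.summary.freeSpace / ds.summary.capacity
            on_all_hosts = len(ds.host) == len(cluster.host)
            print(f"{cluster.name}/{ds.name}: {free_pct:.0f}% free, "
                  f"mounted on all hosts: {on_all_hosts}")
view.Destroy()
Disconnect(si)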

Network - VMware NSX for vSphere

The third component of the NFV infrastructure is the virtualized networking component, NSX for vSphere. NSX for vSphere allows CSPs to programmatically create, delete, and restore software-based virtual networks. These networks are used for communication between VNF components, and to give customers dynamic control of their service environments. Dynamic control is provided through tight integration between the VIM layer and NSX for vSphere. Network multitenancy is also implemented using NSX for vSphere, by assigning each customer its own virtual networking components and dedicated network segments.
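Programmatic network creation is exposed through the NSX for vSphere REST API. The sketch below uses the Python requests library to create a logical switch (virtual wire) in a transport zone; the NSX Manager address, credentials, transport zone ID, and tenant name are hypothetical placeholders.

import requests

NSX_MANAGER = "https://nsxmgr.example.com"  # hypothetical NSX Manager
AUTH = ("admin", "password")                # illustration only
SCOPE_ID = "vdnscope-1"                     # transport zone ID (assumed)

# NSX for vSphere exposes an XML REST API. Creating a "virtual wire"
# (logical switch) in a transport zone gives a tenant its own L2 segment.
payload = """
<virtualWireCreateSpec>
  <name>tenant-a-vnf-segment</name>
  <tenantId>tenant-a</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
""".strip()

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=payload, auth=AUTH, verify=False,
    headers={"Content-Type": "application/xml"})
resp.raise_for_status()
print("Created logical switch:", resp.text)  # response body is the new ID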

Just as ESXi abstracts the server resources, NSX for vSphere provides a layer of abstraction by supporting an overlay network built on standards-based protocols. This approach overcomes the limitations of traditional network segmentation technologies such as VLANs, while creating strict separation between management, customer, and service networks. NSX for vSphere is designed as three independent layers: the data plane, the control plane, and the management plane. The data plane and control plane layers are described below, while the management plane is described in the VIM Components section of this document.

VMware NSX® Virtual Switch™

The NSX Virtual Switch is a distributed data plane component within the ESXi hypervisor kernel that is used for the creation of logical overlay networks, facilitating flexible workload placement of the VNF components. The NSX Virtual Switch is based on the VMware vSphere® Distributed Switch™ (VDS) and extends VDS functionality by adding distributed routing, a logical firewall, and VXLAN bridging capabilities. The NSX Virtual Switch is central to network virtualization, as it enables logical networks that are independent of physical constructs such as VLANs. The NSX Virtual Switch is a multilayer switch and therefore supports Layer 3 functionality, routing traffic between subnets directly within the host for communication within the data center.
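Because the NSX Virtual Switch builds on the VDS, the underlying data plane constructs are visible through the vSphere API. The pyVmomi sketch below (hypothetical connection details) lists the distributed switches and the port groups defined on each.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter Server endpoint and credentials.
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# List the distributed switches and the port groups defined on each.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    print(f"{dvs.name}: {len(dvs.summary.hostMember)} host(s) attached")
    for pg in dvs.portgroup:
        print(f"  portgroup: {pg.name}")
view.Destroy()
Disconnect(si)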

VMware NSX® Edge™

The NSX Edge acts as the centralized virtual appliance for routing traffic into and out of the virtual domain, toward other virtual or physical infrastructure; this is referred to as North-South communication. In the vCloud NFV design, the NSX Edge is installed as an Edge Services Gateway (ESG). The ESG provides routing, firewalling, network address translation (NAT), and other services to the consumers of the NFVI platform. These NSX ESG instances, together with NSX Virtual Switches, provide true logical tenant isolation.
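Deployed Edge appliances can be inventoried through the NSX for vSphere REST API; ESGs are reported with the edgeType gatewayServices. The sketch below uses hypothetical NSX Manager details.

import xml.etree.ElementTree as ET
import requests

NSX_MANAGER = "https://nsxmgr.example.com"  # hypothetical NSX Manager
AUTH = ("admin", "password")                # illustration only

# List the deployed NSX Edge appliances. ESGs report the edgeType
# "gatewayServices"; distributed logical routers report "distributedRouter".
resp = requests.get(f"{NSX_MANAGER}/api/4.0/edges", auth=AUTH, verify=False)
resp.raise_for_status()
for edge in ET.fromstring(resp.text).iter("edgeSummary"):
    print(edge.findtext("objectId"), edge.findtext("name"),
          edge.findtext("edgeType"))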

VMware NSX® Controller™

The NSX Controller is the control plane component responsible for creating the logical topology state necessary for connectivity between the components that form a VNF. The NSX Controller consists of three active virtual controller appliances that form a cluster to maintain availability. The NSX Controller communicates with the ESXi hosts over out-of-band connectivity to maintain the state of the data plane components.
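The controller cluster state can be checked through the NSX for vSphere REST API; a healthy deployment reports three controller nodes in the RUNNING state. The sketch below uses hypothetical NSX Manager details.

import xml.etree.ElementTree as ET
import requests

NSX_MANAGER = "https://nsxmgr.example.com"  # hypothetical NSX Manager
AUTH = ("admin", "password")                # illustration only

# Query the controller cluster; a healthy deployment reports three
# controller nodes, each with status RUNNING.
resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/controller",
                    auth=AUTH, verify=False)
resp.raise_for_status()
for ctrl in ET.fromstring(resp.text).iter("controller"):
    print(ctrl.findtext("id"), ctrl.findtext("ipAddress"),
          ctrl.findtext("status"))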