The NFV infrastructure consists of ESXi to virtualize the compute resources, NSX for vSphere to provide virtual networking, and vSAN for storage. Together these components create the virtualization layer that the ETSI NFV framework defines.
The virtualization layer of the NFVI provides the following functions:
- Physical Resource Abstraction. A layer of software components between the physical hardware and the VNFs abstracts the physical resources. This provides a standardized, software-based platform for running VNFs regardless of the underlying hardware. As long as the CSP uses certified physical components, the carrier can deploy VNFs at the point of presence (POP), or in distributed or centralized data centers.
- Physical Resource Pooling. Physical resource pooling occurs when vCloud NFV OpenStack Edition presents a logical virtualization layer to VNFs, combining the physical resources into one or more resource pools. Resource pooling, together with an intelligent scheduler, facilitates optimal resource utilization, load distribution, high availability, and scalability. This enables fine-grained resource allocation and control of pooled resources based on specific VNF requirements.
- Physical Resource Sharing. To truly benefit from cloud economies, the resources that are pooled and abstracted by a virtualization layer must be shared between various network functions. The virtualization layer provides the functionality required for VNFs to be scheduled on the same compute resources, collocated on shared storage, and to have network capacity divided between them. The virtualization layer also ensures fairness in resource utilization and enforces usage policies.
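The pooling and sharing described above can be sketched as a toy model. This is purely illustrative Python, not VMware code; all class names, host names, and the placement heuristic are invented for the example, and a real scheduler weighs far more factors than free CPU.

```python
# Toy model of resource pooling and VNF placement; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_mhz: int
    mem_mb: int

@dataclass
class VnfComponent:
    name: str
    cpu_mhz: int
    mem_mb: int

class ResourcePool:
    """Aggregates host capacity into one pool and places VNF components on it."""
    def __init__(self, hosts):
        self.hosts = hosts
        # Remaining free [cpu, mem] per host.
        self.free = {h.name: [h.cpu_mhz, h.mem_mb] for h in hosts}

    @property
    def capacity(self):
        # The pool presents the aggregate of all host resources.
        return (sum(h.cpu_mhz for h in self.hosts),
                sum(h.mem_mb for h in self.hosts))

    def place(self, vnfc):
        # Naive scheduler: pick the host with the most free CPU that
        # still satisfies the request.
        candidates = [(free[0], name) for name, free in self.free.items()
                      if free[0] >= vnfc.cpu_mhz and free[1] >= vnfc.mem_mb]
        if not candidates:
            raise RuntimeError(f"no host can fit {vnfc.name}")
        _, chosen = max(candidates)
        self.free[chosen][0] -= vnfc.cpu_mhz
        self.free[chosen][1] -= vnfc.mem_mb
        return chosen

pool = ResourcePool([Host("esxi-01", 20000, 65536),
                     Host("esxi-02", 20000, 65536)])
print(pool.capacity)   # aggregate (cpu, mem) presented to VNFs
print(pool.place(VnfComponent("vFW-1", 8000, 16384)))  # host chosen by the scheduler
```

The point of the sketch is the shape of the abstraction: VNFs request resources from the pool and never address an individual physical server directly.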
Compute - VMware ESXi
ESXi is the hypervisor software that abstracts the physical x86 server resources from the VNFs. Each compute server is referred to as a host in the virtual environment. ESXi hosts are the fundamental compute building blocks of vCloud NFV. ESXi host resources can be grouped together to provide an aggregate set of resources in the virtual environment that is called a cluster. Clusters logically separate the management and VNF components and are discussed in detail in Reference Architecture. ESXi hosts are managed by the VMware vCenter Server Appliance, which is part of the VIM components. See VIM Components for more information.
Storage - VMware vSAN
vSAN is the native vSphere storage component in the NFVI virtualization layer, providing a shared storage pool between the hosts in the cluster. With vSAN, storage is shared by aggregating the local disks and flash drives that are attached to the hosts. Although third-party storage solutions with storage replication adapters that meet VMware storage compatibility guidelines are also supported, this reference architecture discusses only the vSAN storage solution.
As a best practice, configure each cluster within vCloud NFV OpenStack Edition to use a shared storage solution. When hosts in a cluster use shared storage, manageability and agility improve.
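The effect of aggregating local drives into a shared pool can be illustrated with simple capacity arithmetic. This is not a vSAN sizing tool; it assumes a basic RAID-1 mirroring policy in which each object keeps one full replica per failure to tolerate, and the drive sizes and host count are invented for the example.

```python
# Illustrative capacity arithmetic only, not a vSAN sizing tool.
# Assumes simple RAID-1 mirroring: each object stores
# failures_to_tolerate + 1 full copies of its data.
def usable_capacity_gb(disks_per_host_gb, hosts, failures_to_tolerate=1):
    raw = sum(disks_per_host_gb) * hosts          # aggregate of all local drives
    replicas = failures_to_tolerate + 1           # copies kept per object
    return raw / replicas

# Hypothetical example: four hosts, each contributing two 1920 GB capacity drives.
print(sum([1920, 1920]) * 4)              # 15360 GB raw shared pool
print(usable_capacity_gb([1920, 1920], 4))  # 7680.0 GB usable at FTT=1
```

The takeaway is that the shared datastore the cluster sees is the sum of every host's local drives, reduced by whatever redundancy the storage policy requires.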
Network - VMware NSX for vSphere
The third component of the NFV infrastructure is the virtualized networking component, NSX for vSphere. NSX for vSphere allows CSPs to programmatically create, delete, and restore software-based virtual networks. These networks are used for communication between VNF components, and give customers dynamic control of their service environments. Dynamic control is provided through tight integration between the VIM layer and NSX for vSphere.
Network multitenancy is implemented by using NSX for vSphere, by assigning different customers their own virtual networking components and providing different network segments to each customer. Just as ESXi abstracts the server resources, NSX for vSphere provides a layer of abstraction by supporting an overlay network with standards-based protocols. This approach alleviates the limitations of traditional network segmentation technologies such as VLANs, while creating strict separation between management, customer, and service networks. NSX for vSphere is designed as three independent layers: the data plane, the control plane, and the management plane. The data plane and control plane layers are described in the following bullet points, while the management plane is described in VIM Components.
- VMware NSX® Virtual Switch™. The NSX Virtual Switch is a distributed data plane component within the ESXi hypervisor kernel that is used to create logical overlay networks, facilitating flexible workload placement of the VNF components. The NSX Virtual Switch is based on the VMware vSphere® Distributed Switch™ and extends its functionality by adding distributed routing, a logical firewall, and VXLAN bridging capabilities. The NSX Virtual Switch is central to network virtualization, as it enables logical networks that are independent of physical constructs such as VLANs. The NSX Virtual Switch is a multilayer switch and therefore supports Layer 3 functionality to provide optimal routing between subnets directly within the host, for communication within the data center.
The NSX Virtual Switch supports stateful firewall services through the distributed firewall service known as micro-segmentation. This functionality provides firewall policy enforcement within the hypervisor kernel at the granularity of the virtual Network Interface Card (vNIC) level on a VNF component, thereby supporting fine-grained network multitenancy.
- VMware NSX® Edge™. The NSX Edge acts as the centralized virtual appliance for routing traffic into and out of the virtual domain, toward other virtual or physical infrastructure. This is referred to as North-South communication. In its role in the vCloud NFV design, the NSX Edge is installed as an Edge Services Gateway (ESG). The ESG provides routing, firewalling, network address translation (NAT), and other services to consumers of the NFVI platform. These NSX ESG instances, together with NSX Virtual Switches, provide true logical tenant isolation.
- VMware NSX® Controller™. The NSX Controller is the control plane component responsible for creating the logical topology state that is necessary for connectivity between the components that form a VNF. Consisting of three active virtual controller appliances, the NSX Controller nodes form a cluster to maintain NSX Controller availability. The NSX Controller communicates with the ESXi hosts by using out-of-band connectivity to maintain connectivity to the data plane components.
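The per-vNIC enforcement that micro-segmentation describes can be sketched as a toy rule evaluator. The rule syntax, vNIC names, and addresses below are invented for illustration and bear no relation to NSX's actual API; the point is only that rules are evaluated per vNIC, first match wins, with a default deny.

```python
# Toy model of per-vNIC distributed firewall evaluation.
# Rule format and names are hypothetical, not NSX's API.
import ipaddress

RULES = {
    "vnf-a-vnic0": [  # rules enforced at this specific vNIC, first match wins
        {"src": "10.0.1.0/24", "port": 443,  "action": "allow"},
        {"src": "0.0.0.0/0",   "port": None, "action": "deny"},
    ],
}

def evaluate(vnic, src_ip, dst_port):
    for rule in RULES.get(vnic, []):
        net = ipaddress.ip_network(rule["src"])
        # A port of None means the rule matches any destination port.
        if ipaddress.ip_address(src_ip) in net and rule["port"] in (None, dst_port):
            return rule["action"]
    return "deny"  # default-deny when no rule matches

print(evaluate("vnf-a-vnic0", "10.0.1.5", 443))     # allow
print(evaluate("vnf-a-vnic0", "192.168.9.9", 443))  # deny
```

Because the policy is keyed by vNIC rather than by network segment, two VNF components on the same logical network can still receive entirely different firewall treatment, which is what enables the fine-grained multitenancy described above.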
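One of the ESG services listed above, source NAT for North-South traffic, can be sketched as a minimal translation table. This is purely illustrative and not the ESG implementation; the class, the external address, and the ephemeral port range are all invented for the example.

```python
# Minimal source-NAT table sketch for North-South traffic; hypothetical code.
import itertools

class Snat:
    def __init__(self, external_ip):
        self.external_ip = external_ip
        self._ports = itertools.count(30000)  # toy ephemeral port allocator
        self.table = {}                       # (internal_ip, internal_port) -> ext_port

    def translate(self, internal_ip, internal_port):
        # Reuse the existing mapping for an established flow,
        # otherwise allocate a new external port.
        key = (internal_ip, internal_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        return (self.external_ip, self.table[key])

gw = Snat("203.0.113.10")
print(gw.translate("172.16.0.5", 51000))  # ('203.0.113.10', 30000)
print(gw.translate("172.16.0.5", 51000))  # same flow reuses the same mapping
```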
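The reason a three-node controller cluster maintains availability can be shown with generic majority-quorum arithmetic. This is not NSX Controller's actual protocol, only the standard reasoning for why an odd-sized cluster of three survives one node failure.

```python
# Generic majority-quorum check for an odd-sized control cluster;
# illustrative only, not NSX Controller's protocol.
def has_quorum(alive, cluster_size=3):
    # A strict majority of nodes must be alive to make decisions.
    return alive > cluster_size // 2

print(has_quorum(3))  # True: all three controllers up
print(has_quorum(2))  # True: the cluster survives a single node failure
print(has_quorum(1))  # False: a lone controller cannot form a majority
```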