VNF vendors are responsible for providing onboarding instructions to their customers. This section provides general onboarding guidance, assuming that the VNF is packaged in the format described in the VNF Format and Packaging section for VMware Integrated OpenStack.

After the initial VNF requirements, images, and formats are clarified, a project must be created to deploy the VNF in an operational environment. Projects are the VMware Integrated OpenStack constructs that map to tenants. Administrators create projects and assign users to them. Permissions are managed through user, group, and project definitions. Compared with administrators, users have a restricted set of rights and privileges: they are limited to the projects to which they are assigned, although a user can be assigned to more than one project. When a user logs in to a project, the user is authenticated by Keystone and can then perform operations within that project.
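A typical project and user setup can be sketched with the standard OpenStack CLI. The names below (vnf-project, vnf-operator) are illustrative placeholders, and the commands require a live VMware Integrated OpenStack deployment with admin credentials sourced:

```shell
# Create a project for the VNF and a user assigned to it.
openstack project create --description "Project for VNF deployment" vnf-project
openstack user create --project vnf-project --password-prompt vnf-operator

# Grant the user the "member" role in the project. Keystone authenticates
# the user at login and scopes operations to this project.
openstack role add --project vnf-project --user vnf-operator member

# A user can be assigned to more than one project; repeat the role
# assignment against another project to grant access there as well.
openstack role add --project another-project --user vnf-operator member
```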

The following diagram illustrates the physical, virtualized, and logical layers of the stack upon which the VNFs are onboarded.

Figure 1. VNF Onboarding Conceptual Design


Resource Allocation

When building a project for the VNF, the administrator must set the initial quota limits for the project. Quotas are operational limits that configure the amount of system resources available per project, and they can be enforced at the project and user levels. When a user logs in to a project, they see an overview of the project, including the resources provided to them, the resources they have consumed, and the resources that remain. To guarantee resources to a VNF-C, a Tenant vDC can be used. A Tenant vDC provides resource isolation and guaranteed resource availability for each tenant. For fine-grained resource allocation and control, the quota of resources available to a project can be further divided by using Tenant vDCs.
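Setting initial quota limits for a project can be done with the standard OpenStack CLI. The project name and the quota values below are illustrative, and the commands assume admin access to a live deployment (Tenant vDC creation itself is a VMware Integrated OpenStack-specific operation and is not shown here):

```shell
# Set initial quota limits for the project: 20 instances,
# 64 vCPUs, and 128 GB of RAM (expressed in MB).
openstack quota set --instances 20 --cores 64 --ram 131072 vnf-project

# Review the project's current limits.
openstack quota show vnf-project
```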

VNF Networking

Based on specific VNF networking requirements, a tenant can provision East-West connectivity, security groups, firewalls, micro-segmentation, NAT, and LBaaS by using the VMware Integrated OpenStack user interface or command line. VNF North-South connectivity is established by connecting tenant networks to external networks through NSX-T Data Center gateways deployed in Edge Nodes. External networks are created by CSP administrators and backed by physical networks.

Tenant networks are accessible by all Tenant vDCs within the project. Therefore, the implementation of East-West connectivity between VNF-Cs in the same Tenant vDC is identical to the connectivity between VNFs in two different Tenant vDCs belonging to the same project. Tenant networks are implemented as segments within the project. The North-South network is a tenant network that is connected to the telecommunications network through an N-VDS Enhanced switch for data-intensive workloads, or through an N-VDS Standard switch by way of an NSX Edge Cluster.
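The East-West and North-South patterns above can be sketched with standard Neutron CLI calls. The network, subnet, and router names are illustrative, and "external-net" stands in for a CSP-provided external network backed by a physical network:

```shell
# Tenant network and subnet for East-West traffic between VNF-Cs.
openstack network create vnf-east-west
openstack subnet create --network vnf-east-west \
  --subnet-range 172.16.10.0/24 vnf-ew-subnet

# North-South: create a router, set its gateway to the external
# network, and attach the tenant subnet. In NSX-T Data Center, the
# router is realized on the Edge Nodes.
openstack router create vnf-router
openstack router set --external-gateway external-net vnf-router
openstack router add subnet vnf-router vnf-ew-subnet
```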

VMware Integrated OpenStack exposes a rich set of API calls to provide automation. The deployment of VNFs can be automated by using a Heat template. With API calls, the upstream VNF-M and NFVO can automate all aspects of the VNF life cycle.
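A minimal Heat template for deploying a single VNF-C might look like the sketch below. The parameter names and resource name are illustrative; a production template would typically also declare networks, ports, volumes, and scaling resources:

```yaml
heat_template_version: 2016-04-08
description: Minimal sketch of a single VNF-C deployment

parameters:
  image_id:
    type: string
    description: Glance image for the VNF-C
  flavor_name:
    type: string
    description: Nova flavor sized per the VNF vendor's requirements
  network_id:
    type: string
    description: Tenant network for East-West connectivity

resources:
  vnfc_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: { get_param: flavor_name }
      networks:
        - network: { get_param: network_id }
```

The same deployment can be driven programmatically through the Heat API, which is how an upstream VNF-M or NFVO would automate the VNF life cycle.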

Host Affinity and Anti-Affinity Policy

The Nova scheduler provides filters that ensure VMware Integrated OpenStack instances are automatically placed on the same host (affinity) or on separate hosts (anti-affinity). Affinity or anti-affinity filters are applied as a policy to a server group, and all instances that are members of the same group are subject to the same policy. When an OpenStack instance is created, the server group to which the instance belongs is specified, and the corresponding filter is applied. These server group policies are automatically realized as DRS VM-VM placement rules. DRS ensures that the affinity and anti-affinity policies are maintained both at initial placement and during runtime load balancing.
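The server group workflow can be sketched with the OpenStack CLI. The group, image, flavor, and network names are illustrative:

```shell
# Create a server group with an anti-affinity policy. Members of this
# group are placed on separate hosts, realized in vSphere as a DRS
# VM-VM anti-affinity rule.
openstack server group create --policy anti-affinity vnf-anti-affinity-group

# Launch an instance as a member of the group by passing its UUID
# as a scheduler hint (substitute the UUID returned above).
openstack server create --image vnf-image --flavor m1.large \
  --network vnf-east-west \
  --hint group=<server-group-uuid> vnf-c-1
```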

DRS Host Groups for Placement

In VMware Integrated OpenStack, Nova compute nodes map to vCenter Server clusters. The cloud administrator can use vSphere DRS settings to control how specific OpenStack instances are placed on hosts in the Compute cluster. In addition to the DRS configuration, the metadata of source images in OpenStack can be modified to ensure that instances generated from those images are correctly identified for placement.
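Tagging a source image so that its instances can be identified for placement can be sketched as below. The property key "vmware_vm_group" and the group name are assumptions for illustration; consult the VMware Integrated OpenStack documentation for the exact metadata keys supported by your version:

```shell
# Attach a placement-related property to a Glance image so that
# instances created from it can be matched to a DRS host group.
# The key name used here is illustrative, not a confirmed VIO key.
openstack image set --property vmware_vm_group=edge-hosts vnf-image
```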