VMware vCloud® NFV™ 3.0 Release Notes | 6 DEC 2018
NOTE: These Release Notes do not provide the terms applicable to your license. Consult the VMware Product Guide and VMware End User License Agreement for the license metrics and other license terms applicable to your use of VMware vCloud NFV.
Check for additions and updates to these Release Notes.
What's in the Release Notes
These Release Notes apply to vCloud NFV 3.0 and cover the following topics:
- What's New in this Release
- Components of vCloud NFV 3.0
- Validated Patches
- Caveats and Limitations
- Release Notes Change Log
- Support Resources
- Known Issues
What's New in this Release
- VMware NSX-T Data Center Switch with Overlay Support. When used with vSphere 6.7, the N-VDS switch supports a high-performance mode called Enhanced datapath. In this mode, NSX-T Data Center provides a hypervisor-based virtual switch that is considerably faster than the vSphere standard or distributed switches. N-VDS in Enhanced datapath mode provides superior performance for both small and large packet sizes tested for typical telco workloads. This capability serves NFV use cases that need a high-performance virtual switch without sacrificing the benefits of virtualization such as vMotion and DRS. The Enhanced datapath mode implements key DPDK techniques such as Poll Mode Driver (PMD), flow cache, and optimized packet copy. Specific benefits of the Enhanced datapath mode include:
- Ease of configuration. Easy allocation of compute resources to N-VDS for data intensive workloads.
- Automated NUMA alignment. N-VDS Enhanced has an in-built function that automatically aligns the VNF processing cores, the PMD logical CPU cores, and the physical NIC on the same NUMA node. This automatic alignment delivers the best packet processing performance as there is no cross NUMA communication.
- Full vSphere Support. The underlying N-VDS supports key vSphere functionality like HA, vMotion, and DRS.
- Linear scale. Performance scales in a linear way as the industry adopts newer physical NICs with higher capacity. The N-VDS switch provides the flexibility to add additional logical CPU cores for PMD operations and typically exhibits a linear traffic increase with each additional logical CPU core.
- Isolation of data plane workloads. The vSphere Distributed Switch works together with N-VDS in Standard mode, separating and isolating data plane workloads from management and control plane workloads.
- Performance features. The new network stack also delivers a number of features for improved performance, such as multi-tiered routing, bare-metal edges, and HugePages support in ESXi of up to 1 GB for high-performance Translation Lookaside Buffers (TLBs).
vCloud Director for Service Providers Integration with NSX-T Data Center
- vCloud Director for Service Providers 9.5. This release of vCloud NFV includes VMware vCloud Director for Service Providers 9.5, which is the first release of vCloud Director to deliver native integration with NSX-T Data Center thus providing advanced networking features.
- Dual Stack Support with NSX Data Center for vSphere and NSX-T Data Center. This release of vCloud Director supports NSX-T Data Center and NSX Data Center for vSphere in the same vCloud Director installation. This permits the following deployment options:
- Coexistence of NSX-T Data Center with NSX Data Center for vSphere in NFV 2.1 deployments.
- New deployments that use a combination of NSX Data Center for vSphere and NSX-T Data Center that comes packaged as part of this vCloud NFV bundle.
The coexistence requires vCloud Director for Service Providers 9.5 that is included in this release of vCloud NFV.
Multi-Tenancy and Enhanced Role-Based Access Control
vCloud Director 9.5 enables a redesigned and backward-compatible multi-tenant Role Based Access Controls (RBAC) model. This allows Service Providers to create Global Tenant Roles and Rights Bundles.
- Global Tenant Roles. System administrators can create and edit global tenant roles and publish them to one or more organizations. Global tenant roles can be assigned to tenant users in the organizations to which they are published. Organization administrators cannot edit global tenant roles.
- Rights bundles. System administrators can use rights bundles to manage the rights that are available to each organization. A rights bundle is a set of rights that the system administrator can publish to one or more organizations. The system administrator can create and publish rights bundles that correspond to tiers of service, separately monetizable functionality, or any other arbitrary rights grouping. Only system administrators can view and manage the rights bundles. You can publish multiple bundles to the same organization.
- Tenant self-service RBAC. Tenant administrators can define their own tenant specific roles in the Tenant UI. This allows self-service management of permissions for tenant users by their own administrators, without the need of service provider involvement.
Carrier Grade Networking and Security
- VMware NSX-T Data Center. This release of vCloud Director includes the configuration and deployment of the N-VDS switch in Enhanced datapath mode. While significant NSX Manager features are available, only some of them are exposed through the vCloud Director interface. The remaining features can be configured through NSX Manager and imported as administrator-provisioned networks, which are external networks in vCloud Director.
- VMware NSX Data Center for vSphere. You can now use the rich feature set available in NSX Data Center for vSphere for management and control plane workloads, while leveraging the N-VDS Enhanced feature in NSX-T Data Center to achieve high data plane performance for data-intensive workloads. Management and control plane workloads run on independent clusters configured with NSX Data Center for vSphere, while data plane workloads use N-VDS and NSX-T Data Center; each connects to its respective NSX Manager, with both NSX Managers running concurrently under the same vCloud Director instance.
- Enhanced Bidirectional Forwarding Detection (BFD). Delivers advanced fault convergence performance for failure detection.
These features leverage time-tested vSphere capabilities such as vMotion and HA.
- vRealize Operations Manager 7.0. vCloud NFV 3.0 includes vRealize Operations Manager 7.0, which introduces several major new operations management capabilities. It combines these with an advanced native plug-in for vCloud Director for Service Providers to enable tenant views of operational analytics and performance management.
- vRealize Orchestrator Integration for closed-loop remediation. vRealize Orchestrator is tightly integrated with vRealize Operations Manager to enable closed-loop workflows for advanced remediation, including optimization, proactive avoidance, performance, and so on.
- Policy-based Assurance. This stack delivers advanced policy-based assurance management: customers can express deployment policies that segment workload placement based on licensing, resource management policies, capacity policies, vCenter Server tags, and latency-based placement.
- CPU-aware vMotion. Along with new features in vSphere 6.7, the system can now honor vMotion at a VM level to leverage advanced CPU capabilities based on CPU generations.
- Analytic data and forecasts. With faster data collection and aggregation, the assurance stack of vCloud NFV delivers current analytics data for just in time forecasts and performance remediation.
- New Plugin for vRealize Orchestrator. With vCloud Director 9.5, a new plugin for vRealize Orchestrator is available. The new plugin allows workflows to interact with the latest version of the vCloud Director API (version 31.0), enabling workflow developers to automate all the new functionality in vCloud Director 9.5. The new plugin version supports multi-site vCloud Director environments, so workflows can be executed against both standard and multi-site enabled vCloud Director connections.
Note: Due to changes in the plug-in API, some Actions are modified. Review existing custom workflows and, if needed, re-add the current version of the affected Actions. For details, see the VMware vRealize Orchestrator Plug-In for vCloud Director 9.5 Release Notes.
API and SDK Enhancements
- vCloud Director 9.5 introduces vCloud API version 31.0, adding new functionality such as OAuth 2.0 SSO support and an API to change the ownership of catalog items.
- Support for vCloud API versions 19.0 and earlier has been removed; API versions 20.0 to 26.0 are deprecated in vCloud Director 9.5.
- Alongside the new API version, the Python SDK (latest version 20.0.0) and the VCD-CLI (latest version 21.0.0) are also released.
See the vCloud Director 9.5 for Service Providers Release Notes for more details.
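As an illustration of how a client targets the new API version, the vCloud API negotiates the version per request through the Accept header. The sketch below builds such headers and a login request with the Python standard library; the host name, organization, and credentials are placeholders, not values from this release.

```python
import base64
import urllib.request

def vcloud_headers(api_version, token=None):
    """Build request headers that pin a specific vCloud API version."""
    headers = {"Accept": "application/*+xml;version=%s" % api_version}
    if token:
        # Session token returned by vCloud Director in the
        # x-vcloud-authorization response header after login.
        headers["x-vcloud-authorization"] = token
    return headers

def login_request(host, user, org, password):
    """Prepare (but do not send) the POST /api/sessions login request."""
    creds = base64.b64encode(("%s@%s:%s" % (user, org, password)).encode()).decode()
    headers = vcloud_headers("31.0")
    headers["Authorization"] = "Basic " + creds
    return urllib.request.Request(
        "https://%s/api/sessions" % host, method="POST", headers=headers
    )
```

In practice, the Python SDK (pyvcloud) and VCD-CLI handle this negotiation for you; this sketch only shows how API version 31.0 is selected on the wire.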
- vCloud Director now provides an easy onramp for customers leveraging flexible, on-demand containers and VMs in the same virtual data center and faster time-to-consumption for Kubernetes.
- A new version of the Open Sourced Container Service Extension (CSE) has been published on GitHub: https://github.com/vmware/container-service-extension
- This new version of Container Service Extension includes:
- Support for Kubernetes Version 1.10
- Implementation of Static Persistent Volumes via NFS
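For orientation, once the extension is installed, CSE cluster operations are driven from VCD-CLI. The commands below are a hedged sketch: the host, organization, network, and cluster names are placeholders, and exact flags can differ by CSE version (see the GitHub repository for the matching documentation).

```shell
# Log in to vCloud Director as a tenant user (placeholder host/org/user).
vcd login vcd.example.com myorg clusteradmin --password 'secret'

# Ask the Container Service Extension to provision a Kubernetes cluster
# with two worker nodes on an org VDC network.
vcd cse cluster create demo-cluster --network mynet --nodes 2

# List clusters managed by CSE.
vcd cse cluster list
```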
Deprecated and Discontinued Functionality
For further details on this and other API-related announcements, see the vCloud Director 9.5 for Service Providers Release Notes.
Components of vCloud NFV 3.0
Included in the vCloud NFV Hard Bundle
- VMware ESXi 6.7 U1. See the VMware ESXi 6.7 Update 1 Release Notes.
- VMware vSphere Replication 8.1.1. See the VMware vSphere Replication 8.1.1 Release Notes.
- VMware vSAN 6.7 U1 Standard Edition. See the VMware vSAN 6.7 U1 Release Notes. Binaries are distributed as part of a VMware vSphere download. Requires a separate activation license key that is included as part of the vCloud NFV Suite.
- VMware vRealize Orchestrator Appliance 7.5. See the VMware vRealize Orchestrator 7.5 Release Notes.
- VMware vRealize Operations Manager 7.0 Advanced Edition. See the vRealize Operations Manager 7.0 Release Notes.
- VMware vRealize Log Insight 4.7 Full Edition. See the vRealize Log Insight 4.7 Release Notes.
- VMware vCloud Director 9.5 for Service Providers. See the vCloud Director 9.5 for Service Providers Release Notes.
Mandatory Add-On Components (Not Part of the vCloud NFV Bundle, Additional License is Required)
- VMware NSX-T Data Center 2.3. See the VMware NSX-T Data Center 2.3 Release Notes.
- VMware NSX Data Center for vSphere. You can use one of the following versions:
- 6.4.4 (new). See the VMware NSX Data Center for vSphere 6.4.4 Release Notes.
- 6.4.3. See the VMware NSX Data Center for vSphere 6.4.3 Release Notes.
- VMware vCenter Server 6.7 U1. See the VMware vCenter Server 6.7 Update 1 Release Notes.
Optional Add-On Components (Not Part of the vCloud NFV Bundle, Additional License is Required)
- VMware Site Recovery Manager 8.1.1. See the VMware Site Recovery Manager 8.1.1 Release Notes.
- VMware vRealize Network Insight 3.9. See the vRealize Network Insight 3.9 Release Notes.
Validated Patches
- VMware ESXi 6.7 Patch Release ESXi670-201811001. See VMware ESXi 6.7, Patch Release ESXi670-201811001.
Caveats and Limitations
- Geneve overlay use with N-VDS in Enhanced datapath mode. To use N-VDS in Enhanced datapath mode with overlay networking, you must use Intel 7xx series NICs with firmware version 6.01 or later.
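To check a host against this firmware requirement, the driver and firmware versions of a physical NIC can be read on the ESXi host with esxcli; vmnic0 below is a placeholder for the uplink backing N-VDS.

```shell
# Show driver and firmware details for a physical NIC on the ESXi host.
# The "Firmware Version" field should report 6.01 or later for Intel
# 7xx series NICs used with Enhanced datapath overlay networking.
esxcli network nic get -n vmnic0
```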
Release Notes Change Log
- 5 FEB 2019
- 18 JUN 2019
Support Resources
To access product documentation and additional support resources, go to the VMware vCloud NFV Documentation page.
Known Issues
- vRealize Operations Manager is not fetching data from vCloud Director for a Provider VDC that is backed by vCenter Server with NSX-T Data Center
vRealize Operations Manager does not display data from vCloud Director for a Provider VDC that is backed by vCenter Server and NSX-T Manager combination. The vRealize Operations Manager management pack for vCloud Director 9.5 does not support NSX-T Data Center.
Use the vRealize Operations Manager management pack for vCloud Director version 5.1 that supports NSX-T Data Center. The management pack is available at VMware Solutions Exchange.
- ESXi vmkernel panic might occur when the MTU size is set to 9000 inside Windows guest OS
Inside the Windows guest OS, the vmxnet driver offers 9000 or 1500 as the MTU size. When the MTU size is set to 9000, the final packet size after encapsulation for an overlay network becomes larger than 9000. Such packets are dropped, and a vmkernel system panic might occur.
Workaround: For vmxnet drivers in Windows guest OS, use only 1500 MTU size.
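For reference, the MTU can be pinned to 1500 from inside the Windows guest with netsh; "Ethernet0" below is a placeholder for the guest adapter name.

```shell
:: Inside the Windows guest OS: list subinterfaces to find the adapter name.
netsh interface ipv4 show subinterfaces

:: Pin the vmxnet adapter's MTU to 1500, persisting across reboots.
netsh interface ipv4 set subinterface "Ethernet0" mtu=1500 store=persistent
```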
- ESXi vmkernel panic occurs when a VM name starts with “SRIOV”
PSOD occurs during VM power on, when the VM name starts with “SRIOV” and there is an SR-IOV adapter attached to that VM.
Workaround: Avoid using “SRIOV” at the beginning of VM names.
- vCenter Server information is not displayed in the vCloud Director for Service Providers 9.5 Service Provider Admin Portal
After registering vCenter Server and NSX-T Data Center through the vCloud Director API, vCenter Server and NSX Manager information is not available on the following location in the vCloud Director Service Provider Admin Portal:
System -> Manage & Monitor -> vSphere Resources -> vCenters
Workaround: UI functionality for significant NSX-T Data Center features is not available in this release. You can retrieve vCenter Server information through the vCloud Director NSX-T API. See the vCloud API reference guide for more information.
- You are unable to change the vNIC adapter type in the vCloud Director 9.5 Service Provider Admin Portal
In the Service Provider Admin Portal of vCloud Director 9.5 that is backed by NSX-T Data Center, you cannot change the vNIC adapter type to vmxnet3, E1000, and SR-IOV adapter types.
Workaround: Use the vCloud Director 9.5 API to change the vNIC type.
- You cannot change the networking configuration on an SR-IOV adapter
If a VM is configured with an SR-IOV adapter and 100% memory reservation is set for that VM, you cannot change the networking configuration of the SR-IOV adapter if the Reserve all guest memory (all locked) checkbox is selected in the vSphere Client.
Workaround: Avoid selecting Reserve all guest memory (all locked) in the vSphere Client. Let vCloud Director reserve the memory for the VM.