
VMware Validated Design for Software-Defined Data Center 4.3 Release Notes

Last updated: 21 AUG 2018

VMware Validated Design for Software-Defined Data Center 4.3 | 17 JUL 2018

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

About VMware Validated Design for Software-Defined Data Center 4.3

VMware Validated Designs provide a set of prescriptive documents that explain how to plan, deploy, and configure a Software-Defined Data Center (SDDC). The architecture, the detailed design, and the deployment guides provide instructions about configuring a dual-region SDDC.

VMware Validated Designs are tested by VMware to ensure that all components and their individual versions work together, scale, and perform as expected. Unlike reference architectures, which focus on an individual product or purpose, a VMware Validated Design takes a holistic approach to design, encompassing many products in a full stack for a broad set of use case scenarios in an SDDC.

This VMware Validated Design supports a number of use cases, and is optimized for integration, expansion, Day-2 operations, as well as future upgrades and updates. As new products are introduced, and new versions of existing products are released, VMware continues to qualify the cross-compatibility and upgrade paths of the VMware Validated Designs. Designing with a VMware Validated Design ensures that future upgrade and expansion options are available and supported.

VMware Software Components in the Validated Design

VMware Validated Design for Software-Defined Data Center 4.3 is based on a set of individual VMware products with different versions that are available in a common downloadable package.

The products included in VMware Validated Designs participate in VMware's Customer Experience Improvement Program ("CEIP"). VMware recommends that you join CEIP because this program provides us with information used to improve VMware products and services, fix problems, and advise you on how best to deploy and use our products.

Details regarding the data collected through CEIP and the purposes for which it is used by VMware are set forth at the Trust & Assurance Center at http://www.vmware.com/trustvmware/ceip.html. To join or leave VMware's CEIP for the products that are part of VMware Validated Designs, see the documentation for each product.

Product Group and Edition                                Product Name                                                          Product Version
VMware vSphere Enterprise Plus                           ESXi                                                                  6.5 U2
                                                         vCenter Server Appliance                                              6.5 U2
                                                         vSphere Update Manager                                                6.5 U2
                                                         vSphere Replication                                                   6.5.1.3
VMware vSAN Standard or higher                           vSAN                                                                  6.6.1 U2
VMware NSX Data Center Advanced or higher                NSX Data Center for vSphere                                           6.4.1
VMware vRealize Suite Lifecycle Manager                  vRealize Suite Lifecycle Manager                                      1.2
VMware vRealize Operations Manager Advanced or higher    vRealize Operations Manager                                           6.7
                                                         vRealize Operations Management Pack for NSX for vSphere               3.5.2
                                                         vRealize Operations Management Pack for Storage Devices               6.0.5
                                                         vRealize Operations Management Pack for Site Recovery Manager        6.5.1.1
VMware vRealize Log Insight                              vRealize Log Insight                                                  4.6
                                                         vRealize Log Insight Content Pack for NSX for vSphere                 3.7
                                                         vRealize Log Insight Content Pack for vRealize Automation 7.3+        2.1
                                                         vRealize Log Insight Content Pack for vRealize Orchestrator 7.0.1+    2.0
                                                         vRealize Log Insight Content Pack for vRealize Business               1.0
                                                         vRealize Log Insight Content Pack for Microsoft SQL Server            3.2
                                                         vRealize Log Insight Content Pack for Linux                           1.0
                                                         vRealize Log Insight Content Pack for Site Recovery Manager           1.5
VMware vRealize Automation Advanced or higher            vRealize Automation                                                   7.4
VMware vRealize Business for Cloud Advanced              vRealize Business for Cloud                                           7.4
VMware Site Recovery Manager Enterprise                  Site Recovery Manager                                                 6.5.1.1

New For certain optional add-on guidance, you can also deploy the following products: 

Product Group or Edition                     Product Name                Product Version
VMware NSX Data Center Advanced or higher    VMware NSX-T Data Center    2.2

VMware makes available patches and releases to address critical security issues for several products. Verify that you are using the latest security patches for a given component when deploying VMware Validated Design.

VMware Solution Exchange and in-product marketplace store only the latest versions of the management packs for vRealize Operations Manager and the content packs for vRealize Log Insight. This table contains the latest versions of the packs that were available at the time this VMware Validated Design was validated. When you deploy the VMware Validated Design components, it is possible that the version of a management or content pack on VMware Solution Exchange and in-product marketplace is newer than the one used for this release.

What's New

VMware Validated Design for Software-Defined Data Center 4.3 provides the following new features:

  • Updated Bill of Materials that incorporates new product versions
  • Consolidated SDDC design and deployment guidance is available at GA, alongside the core set of Standard SDDC documents.
  • Using vRealize Suite Lifecycle Manager for the automated deployment of the vRealize Suite products is now part of the SDDC design and deployment guidance.

    As part of the operations management layer, vRealize Suite Lifecycle Manager provides deployment, lifecycle management, and configuration drift management in an SDDC that is compliant with VMware Validated Design for Software-Defined Data Center 4.3.

    At this stage, vRealize Suite Lifecycle Manager is not used to upgrade the vRealize Suite components to VMware Validated Design 4.3.

  • Upgrade guidance provides the best path to upgrade from VMware Validated Design 4.2.
    • The upgrade step-by-step instructions inherit the manual upgrade approach from the earlier versions of VMware Validated Design. You expand the vRealize Automation appliance cluster and add vRealize Suite Lifecycle Manager to the stack as a part of the post-upgrade tasks in the cloud management and operations management layers, respectively.
    • The guidance covers upgrades for deployments with multiple availability zones and supports dual-region deployments.
  • Log dashboards for vRealize Business are now available in vRealize Log Insight
  • The vRealize Automation appliance cluster is expanded to three nodes to support automatic failover of the PostgreSQL database
  • New Design and deployment guidance for VMware NSX-T 2.2 for a compute workload domain is now available as a part of VMware Validated Design

    Use this documentation to implement a compute workload domain using NSX-T as the network virtualization solution. Then, you can extend the services in the compute workload domain with a cloud management system.
  • New VMware Validated Design for Micro-Segmentation is not provided for this version of VMware Validated Design

For more information, see the VMware Validated Design for Software-Defined Data Center page.

Internationalization

This VMware Validated Design release is available only in English.

Compatibility

This VMware Validated Design guarantees that the product versions in VMware Validated Design for Software-Defined Data Center 4.3, combined with the chosen design, are fully compatible. Any minor known issues that exist are described in this release notes document.

Installation

To install and configure an SDDC according to this validated design, follow the guidance in the VMware Validated Design for Software-Defined Data Center 4.3 documentation. For product download information and access to the guides, see the VMware Validated Design for Software-Defined Data Center page.

Caveats and Limitations

To install vRealize Automation, you must open certain ports in the Windows firewall. This VMware Validated Design instructs you to disable the Windows firewall before you install vRealize Automation. It is possible to keep the Windows firewall active and install vRealize Automation by opening only the ports that are required for the installation. This process is described in the vRealize Automation Installation and Configuration documentation.

Known Issues

The known issues are grouped as follows.

VMware Validated Design Content
  • New The destination network that you select during deployment of the NSX Manager Appliance in VMware Validated Design for Workload and Management Consolidation is incorrect

    In this Validated Design for Workload and Management Consolidation, in Step 18 of the Assign an NSX Domain Service Account and Deploy the NSX Manager Appliance for Consolidated SDDC procedure, you are instructed to select the sfo01-w01-vds01-management destination network. This destination network is not correct and results in the appliance being unable to communicate over the network.

    Workaround: When deploying the NSX Manager appliance, in Step 18 of the Assign an NSX Domain Service Account and Deploy the NSX Manager Appliance for Consolidated SDDC procedure, select the sfo01-w01-vds01-management-vm network as the Destination Network.

  • Updated The sizing for the vSAN capacity tier for the ESXi hosts in the management workload domain is incorrect in the Architecture and Design document in the documentation package on my.vmware.com

    In this VMware Validated Design, the required capacity for vSAN per ESXi host is a minimum of 200 GB of SSD for the caching tier and 2 TB of traditional HDD for the capacity tier. The management workload domain requires a minimum of 9 TB of raw capacity (without taking into account the Failures To Tolerate overhead). Using a vSAN policy of Failures to Tolerate (FTT) of 1, and accounting for 30% as overhead buffer, you must provide a minimum of approximately 24 TB of raw capacity-tier storage.

    Workaround: Per ESXi host, allocate a minimum of 300 GB of SSD space for the caching tier and 6 TB of traditional HDD space for the capacity tier. 

    The new sizing supports a vSAN cluster that can accommodate disk version upgrade, lifecycle management by using vRealize Suite Lifecycle Manager, N-1 host maintenance mode, and so on.

    See vSAN Physical Design and vSAN Cluster and Disk Group Design in the Architecture and Design documentation on docs.vmware.com.
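The arithmetic behind the corrected sizing can be checked as follows. This is an illustrative sketch; the 4-host minimum for the management cluster is an assumption for the per-host split and is not stated in this excerpt.

```shell
# Worked sizing for the numbers above: 9 TB usable capacity, doubled for a
# vSAN policy of Failures to Tolerate (FTT) of 1, plus a 30% overhead buffer.
# The division across 4 hosts is an assumption for illustration.
USABLE_TB=9
RAW_TB=$((USABLE_TB * 2))                # FTT=1 keeps two copies: 18 TB
BUFFERED_TB=$(((RAW_TB * 13 + 9) / 10))  # add 30 percent, rounded up: 24 TB
PER_HOST_TB=$((BUFFERED_TB / 4))         # spread across 4 hosts: 6 TB each
echo "raw=${BUFFERED_TB}TB per-host=${PER_HOST_TB}TB"
```

The 24 TB total and 6 TB per host match the corrected guidance in the workaround above.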

vSphere
  • If a host that runs the vCenter Server Appliance and is managed by the same vCenter Server instance fails, vCenter Server is not restarted on another host in the cluster as a part of the vSphere HA failover and thus becomes inaccessible

    vSphere HA does not restart the vCenter Server Appliance from the failed host. Because of a cluster reconfiguration or a race condition, the Fault Domain Manager (FDM) master can receive an empty compatibility data set about the vCenter Server virtual machine from vSphere DRS. To fill this data in, the master must contact the host that is running vCenter Server. Because the host has failed, the data set remains empty and vSphere HA does not have enough information to restart the vCenter Server virtual machine on another host.

    Workaround:

    • Recover the failed host, which restarts the vCenter Server Appliance on the same host.
    • Manually re-register the vCenter Server virtual machine on another healthy host in the storage user interface in the vSphere Web Client and power it on. See VMware Knowledge Base article 2147569.
  • vRealize Automation converged blueprint provisioning fails with error: CloneVM : [CloneVM_Task] - A general system error occurred: vDS host error: see faultCause

    vRealize Automation converged blueprint provisioning fails because an attempt to perform a networking configuration operation on a vSphere Distributed Switch, such as creating a virtual machine adapter or a port group, causes the vSphere host to disconnect from the vCenter Server and results in the error message:

    Transaction has rolled back on the host.

    Workaround:

    Increase the network rollback timeout of vCenter Server from 30 to 60 seconds.
    See Networking Configuration Operation Is Rolled Back and a Host Is Disconnected from vCenter Server in the vSphere Troubleshooting documentation. 

NSX for vSphere
  • vCenter Server user with administrator role cannot assign an NSX for vSphere license

    vCenter Server accepts an NSX for vSphere license when the account for registration of NSX Manager with vCenter Server is administrator@vsphere.local.

    Accounts that are not associated with vCenter Single Sign-On have no privileges to assign a license for NSX for vSphere. When using an Active Directory service account such as svc-nsxmanager@rainpole.local to integrate NSX Manager with vCenter Server, you see the following error:

    The following serial keys are invalid.

    Workaround: See VMware Knowledge Base article 52604.

vRealize Operations Manager
  • After you perform a failover operation, the vRealize Operations Manager analytics cluster might fail to start because of an NTP time drift between the nodes
    • The vRealize Operations Manager user interface might report that some of the analytics nodes are not coming online with the status message Waiting for Analytics.
    • The log information on the vRealize Operations Manager master or master replica node might contain certain NTP-related details.
      • The NTP logs in the /var/log/ folder might report the following messages:
        ntpd[9764]: no reply; clock not set
        ntpd[9798]: ntpd exiting on signal 15
      • The analytics-wrapper.log file in the /storage/log/vcrops/logs/ folder might report the following message:
        INFO | jvm 1 | YYYY/MM/DD | >>> AnalyticsMain.run failed with error: IllegalStateException: time difference between servers is 37110 ms. It is greater than 30000 ms. Unable to operate, terminating...
    Workaround: See VMware Knowledge Base article 2151266.
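As a quick sanity check before restarting the cluster, you can compare a node's reported clock offset against the 30000 ms limit from the AnalyticsMain error above. In this sketch the offset value is hard-coded to the example from the log message, not a live reading; on a node you would obtain it from your NTP tooling.

```shell
# Compare a node's NTP offset (ms) against the 30000 ms analytics limit.
# OFFSET_MS is the example value from the log message above, not a live reading.
OFFSET_MS=37110
THRESHOLD_MS=30000
if [ "$OFFSET_MS" -gt "$THRESHOLD_MS" ]; then
  echo "drift ${OFFSET_MS} ms exceeds ${THRESHOLD_MS} ms: resynchronize NTP before starting the cluster"
else
  echo "drift within limits"
fi
```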

  • Answers to Monitoring goals always show the default values

    In the Define Monitoring Goals dialog box, your answers to the monitoring goals are not saved. Every time you open the dialog box, the default values for the answers appear.

    Workaround: None.

  • After you perform disaster recovery or planned migration of the vRealize Operations Manager or Cloud Management Platform virtual machines, the vRealize Automation Adapter might be failing to collect statistics

    This issue might occur during both failover to Region B and failback to Region A of the Cloud Management Platform or the vRealize Operations Manager analytics cluster.

    After you perform disaster recovery or planned migration of the Cloud Management Platform or the virtual machines of the vRealize Operations Manager analytics cluster, the collection state of the vRealize Automation Adapter is Failed on the Administration > Solutions page of the vRealize Operations Manager user interface at https://vrops01svr01.rainpole.local.

    Workaround: Click the Stop Collecting button and click the Start Collecting button to manually restart data collection in the vRealize Automation Adapter.

  • In vRealize Log Insight, the name of the vRealize Operations Manager analytics cluster appears under the FQDN of the master node instead of under the host name of the virtual IP address.

    In the vRealize Operations Manager operations interface, you can enable log forwarding on the analytics cluster to vRealize Log Insight only under the FQDN of the master node. The user interface does not provide an option to change the cluster name for the remote log server.

    Workaround: None.

  • vSAN Capacity Overview dashboard does not display correct available free storage space under Capacity Remaining widget.

    vRealize Operations Manager vSAN Capacity Overview dashboard displays incorrect value for available storage space in the Capacity Remaining widget.

    Workaround: Examine the remaining storage capacity in the vSphere Web Client by performing the following steps: 

    1. Navigate to the vSAN cluster.
    2. On the Monitor tab, click vSAN.
    3. Select Capacity to view the vSAN capacity information.
vRealize Log Insight

vRealize Automation and Embedded vRealize Orchestrator
    • Manual installation of an IaaS Website component using the IaaS legacy GUI installer fails with a certificate validation error

      The error message appears when you click Next on the IaaS Server Custom Install page with the Website component selected. This error message is a false negative and appears even when you select the right option. The error prevents the installation of a vRealize Automation IaaS Website component.

      Workaround: See Knowledge Base article 2150645.

    • Unable to log in to the vRealize Automation user interface after configuring a non-existing tenant as the authentication provider for the embedded vRealize Orchestrator.

      The vRealize Automation user interface becomes unavailable after you configure the authentication settings on the Configure Authentication Provider page in the embedded vRealize Orchestrator Control Center with a non-existing tenant. For example, if you enter a tenant name with a typo.

      You see the following services as unavailable on the Services tab at https://vra01svr01a.rainpole.local:5480:

      Service State
      advanced-designer-service UNAVAILABLE
      o11n-gateway-service UNAVAILABLE
      shell-ui-app UNAVAILABLE
      vco null

      Workaround: Correct the tenant details and verify the service state on the vRealize Automation appliances.

      1. Log in to the vRealize Orchestrator Control Center. 
        1. Open a Web browser and go to https://vra01svr01.rainpole.local:8283/vco-controlcenter.
        2. Log in using the following credentials. 
          Setting Value
          User name root
          Password deployment_admin_password
      2. On the Configure Authentication Provider page, update the authentication configuration with the correct tenant details. 
      3. Wait until the control center replicates the settings to all vRealize Orchestrator servers in the cluster.
      4. Log in to the first vRealize Automation appliance.
        1. Log in to  https://vra01svr01a.rainpole.local:5480.
        2. Log in using the following credentials. 
          Setting Value
          User name root
          Password deployment_admin_password
      5. On the Services tab, verify that the status of all services is REGISTERED.
      6. Repeat Step 4 and Step 5 on the other vRealize Automation appliances.
    • Converged blueprint provisioning requests in vRealize Automation might fail in environments that have high workload churn rate

      In environments that have a high churn rate for tenant workloads, requests for provisioning converged blueprints in vRealize Automation might fail with one of the following error messages.

      • Timeout Customizing machine

      Workaround: None.

    • After you perform disaster recovery of the Cloud Management Platform, the status of the shell-ui-app service might appear as Failed in the appliance management console of the vra01svr01b.rainpole.local node

      This issue might occur during both failover to Region B and failback to Region A of the Cloud Management Platform. After you perform disaster recovery of the Cloud Management Platform, you see the following symptoms when you verify the overall state of the platform:

      • In the appliance management console https://vra01svr01b.rainpole.local:5480, the status of the shell-ui-app service is Failed.
      • The statistics about the vra-svr-443 pool on the NSX load balancer show that the vra01svr01b node is DOWN.
      • Trying to access the https://vra01svr01b.rainpole.local/vcac/services/api/health URL results in the following error message:

        The service shell-ui-app was not able to register the service information with the Component Registry service! This might cause other dependent services to fail. Error Message: I/O error on POST request for "https://vra01svr01.rainpole.local:443/SAAS/t/vsphere.local/auth/oauthtoken?grant_type=client_credentials": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out"

      You can still log in to the vRealize Automation portal because the other vRealize Automation Appliance vra01svr01a can service your requests.

      Workaround: Restart the vcac-server service on the vra01svr01b.rainpole.local node.

      1. Open an SSH connection to the vra01svr01b.rainpole.local appliance and log in as the root user.
      2. Restart the vcac-server service.
        service vcac-server restart
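After the restart, a quick way to confirm recovery is to poll the health URL mentioned above until it stops failing. This retry helper is a generic illustration, not part of the validated design; the usage line with `curl` is a hypothetical example.

```shell
# Generic retry helper (illustrative). Example usage after the restart:
#   retry 10 curl -ksf https://vra01svr01b.rainpole.local/vcac/services/api/health
# Retries the given command up to N times, one second apart, until it succeeds.
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1          # give up after the last attempt
    fi
    i=$((i + 1))
    sleep 1
  done
}
retry 3 true && echo "check passed"
```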
    • In the vRealize Automation portal, an attempt to edit the roles of the ug-vra-admins-rainpole user group results in an internal error.

      You perform the following steps:

      1.  Log in to the vRealize Automation portal.
        1. Open a Web browser and go to https://vra01svr01.rainpole.local/vcac/org/rainpole.
        2. Log in using the following credentials.
          Setting Value
          User name vra-admin-rainpole
          Password vra-admin-rainpole_password
          Domain rainpole.local
      2. On the Administration tab, navigate to Users & Groups > Directory Users and Groups.
      3. Enter ug-vra-admins-rainpole in the search box and press Enter.
        The ug-vra-admins-rainpole (ug-vra-admins-rainpole@rainpole.local) group name appears in the Name text box.
      4. Open the ug-vra-admins-rainpole (ug-vra-admins-rainpole@rainpole.local) user group settings to edit its roles.

      You see the following error:
      Internal Error
      An internal error has occurred. If the problem persists, please contact your system administrator.
      When contacting your system administrator, use this reference: ex: xxxxxxxx

      Workaround: Ignore the error message and proceed with the configuration.

    • After failover or failback during disaster recovery, login to the vRealize Automation Rainpole portal takes several minutes or fails with an error message

      This issue occurs during both failover to Region B and failback to Region A of the Cloud Management Platform when the root Active Directory is not available from the protected region. You see the following symptoms:

      • Login takes several minutes or fails with an error

        When you log in to the vRealize Automation Rainpole portal at https://vra01svr01.rainpole.local/vcac/org/rainpole using the ITAC-TenantAdmin user, the vRealize Automation portal loads after 2 to 5 minutes.

      • An attempt to log in to the vRealize Automation Rainpole portal fails with an error about incorrect user name and password.

      Workaround: Perform one of the following workarounds according to the recovery operation type.

      • Failover to Region B
        1. Log in to the vra01svr01a.rainpole.local appliance using SSH as the root user.
        2. Open the /usr/local/horizon/conf/domain_krb.properties file in a text editor.
        3. Add the following list of the domain-to-host values and save the domain_krb.properties file.
          Use only lowercase characters when you type the domain name.
          For example, as you have performed failover, you must map the rainpole.local domain to the controller in Region B: rainpole.local=dc51rpl.rainpole.local:389.
        4. Change the ownership of the domain_krb.properties file.
          chown horizon:www /usr/local/horizon/conf/domain_krb.properties
        5. Open the /etc/krb5.conf file in a text editor.
        6. Update the realms section of the krb5.conf file with the same domain-to-host values that you configured in the domain_krb.properties file, but omit the port number as shown in the following example.
          [realms]
          RAINPOLE.LOCAL = {
            auth_to_local = RULE:[1:$0\$1](^RAINPOLE\.LOCAL\\.*)s/^RAINPOLE\.LOCAL/RAINPOLE/
            auth_to_local = RULE:[1:$0\$1](^SFO01\.RAINPOLE\.LOCAL\\.*)s/^SFO01\.RAINPOLE\.LOCAL/SFO01/
            auth_to_local = RULE:[1:$0\$1](^LAX01\.RAINPOLE\.LOCAL\\.*)s/^LAX01\.RAINPOLE\.LOCAL/LAX01/
            auth_to_local = DEFAULT
            kdc = dc51rpl.rainpole.local
          }
        7. Restart the workspace service.
          service horizon-workspace restart
        8. Repeat this procedure on the other vRealize Automation Appliance vra01svr01b.rainpole.local.
      • Failback to Region A
        If dc51rpl.rainpole.local becomes unavailable in Region B during failback, perform the steps for the failover case using dc01rpl.rainpole.local as the domain controller instead of dc51rpl.rainpole.local, and restart the services.

      This workaround optimizes the synchronization with the Active Directory by pointing to a specific domain controller that is reachable from the vRealize Automation Appliance in the event of disaster recovery.
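Steps 2 to 4 of the failover case condense into a few commands. This sketch writes to a temporary file so it can be tried safely; on the appliance the target would be /usr/local/horizon/conf/domain_krb.properties, followed by the ownership change and service restart from the procedure.

```shell
# Append the domain-to-host mapping from Step 3 and verify it, using a temp
# file as a stand-in for /usr/local/horizon/conf/domain_krb.properties.
KRB_PROPS="$(mktemp)"
printf 'rainpole.local=dc51rpl.rainpole.local:389\n' >> "$KRB_PROPS"
grep -q '^rainpole\.local=dc51rpl\.rainpole\.local:389$' "$KRB_PROPS" && echo "mapping added"
# On the appliance you would then fix ownership and restart the service:
#   chown horizon:www /usr/local/horizon/conf/domain_krb.properties
#   service horizon-workspace restart
rm -f "$KRB_PROPS"
```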

    • In a vRealize Automation appliance cluster, the performance of the secondary appliances might be low because of multiple running socat processes, which might cause a functional failure if a secondary appliance takes over the master role.

      If network connectivity is lost, the CPU usage on the secondary vRealize Automation appliance might increase to 100%. Over a certain period, the secondary appliances try to retrieve the latest state of the container service from the master node. The connectivity loss causes a logging loop every time the secondary appliance tries to open a synchronization stream using a socat process.

      Workaround: See VMware Knowledge Base article 54143.

    • In a vRealize Automation with embedded vRealize Orchestrator deployment, an attempt to log in to the default tenant URL fails after you change the password of the Single Sign-on administrator user.

      In the vRealize Automation appliance management console https://vra01svr01a.rainpole.local:5480, on the vRA Settings > SSO tab, you change the password of the Single Sign-on administrator administrator@vsphere.local account.

      You see the following symptoms:

      • If you try to log in to https://vra01svr01.rainpole.local/vcac, the authentication fails.
      • In the vRealize Automation appliance management console https://vra01svr01a.rainpole.local:5480, on the Services tab, the following services show as unavailable:
        Service State
        advanced-designer-service UNAVAILABLE
        o11n-gateway-service UNAVAILABLE
        shell-ui-app UNAVAILABLE
        vco null
      • In the Control Center of vRealize Orchestrator https://vra01svr01a.rainpole.local:8283/vco-controlcenter, on the Validate Configuration page, you see that the Authentication validation has failed.
      • In the Control Center of vRealize Orchestrator https://vra01svr01a.rainpole.local:8283/vco-controlcenter, on the Configure Authentication Provider page, you see that the admin group is configured with its default value vsphere.local\vcoadmins instead of rainpole.local\ug-vROAdmins.

      Workaround:

      1. Log in to the Control Center https://vra01svr01a.rainpole.local:8283/vco-controlcenter on the master node using the appliance root credentials.
      2. Click Configure Authentication Provider.
        The Admin group of authentication provider is set to the default value vsphere.local\vcoadmins.
      3. Click the Change button next to Admin group
      4. Enter ug-vRO, click Search, and select rainpole.local\ug-vROAdmins.
      5. Click Save changes.
      6. Log in to https://vra01svr01a.rainpole.local:5480 using the appliance root credentials.
      7. On the Services tab, verify that the status of all services is REGISTERED.
        Periodically refresh the page.
      8. In a dual-region environment, join the secondary appliances to the master node.
        1. Log in to appliance management console of the secondary vRealize Automation nodes vra01svr01b.rainpole.local and vra01svr01c.rainpole.local and, on the vRA Settings > Cluster tab, join them to the cluster.
        2. Navigate to the Services tab and verify that the status of all services is shown as REGISTERED on the secondary vRealize Automation nodes.
    • When you run a large number of blueprint provisioning requests in vRealize Automation and some of them remain in progress for a long time, all subsequent requests also remain in progress

      If you provision more than 30 virtual machines at once, a request might remain in progress and none of the subsequent provisioning requests completes. Although some requests have failed, the failure is not reported back to the service catalog. The status of the blueprint requests never changes from in-progress to failed.

      Workaround: None.

    vRealize Business
    • Reclamation and Data Center Optimization information is incorrect in vRealize Business for Cloud.

      vCenter data collection in vRealize Business shows a warning "Unable to authenticate to vROPs using the vCenter's credentials".

      Workaround: See VMware Knowledge Base article 56142.

    Site Recovery Manager and vSphere Replication
    • After you add a second or third NIC adapter with a static IP address to the vSphere Replication appliance, the VAMI interface indicates that the IPv4 Default Gateway for the NIC adapters is incorrect.

      After adding a second NIC adapter (eth1) with a configuration containing a static IP address to the vSphere Replication appliance and restarting the appliance, the VAMI interface of the appliance displays the IPv4 Default Gateway of the original NIC adapter (eth0) as empty and the IPv4 Default Gateway of the new NIC adapter (eth1) as the original default gateway of eth0.

      After adding a third NIC adapter (eth2) with a configuration containing a static IP address to the vSphere Replication appliance and restarting the appliance, the VAMI interface displays the IPv4 Default Gateway of both eth0 and eth1 as empty, while the IPv4 Default Gateway of the new NIC adapter eth2 is set to the original default gateway of eth0.

      Workaround: Do not change the IPv4 Default Gateway field of either NIC adapter. This is a VAMI display issue.

    vRealize Suite Lifecycle Manager
    • vRealize Suite Lifecycle Manager requests triggered by the user show as 'In Progress' state and no new requests can be performed.

      User requests get stuck at some point and cannot proceed with the deployment of vRealize Operations Manager, and also get stuck during vRealize Business pre-checks. Requests triggered after this issue occurs wait in the queue and cannot continue.

      Workaround: See VMware Knowledge Base article 56170.