vRealize Automation 7.2 Release Notes

Updated on: 13 JUL 2017

vRealize Automation 7.2 | 22 NOV 2016 | Build 4660246

Check regularly for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • System Requirements
  • Installation
  • Before You Upgrade
  • Resolved Issues
  • Known Issues
  • Documentation and Help
  • Previous Known Issues

What's New

The vRealize Automation 7.2 release includes resolved issues and the following new capabilities.

  • Enhanced APIs for programmatically installing, configuring, and upgrading vRealize Automation
  • Enhanced upgrade functionality for system-wide upgrade automation
  • LDAP support for authentication and single sign-on
  • FIPS 140-2 Compliance:
    • Consumer/administrator interface is now FIPS 140-2 compliant
    • Managed using vRealize Automation appliance management console or command-line interface
    • FIPS is disabled by default
  • Migration improvements:
    • UI-driven vRealize Automation 6.2.x to 7.2 migration
    • Migration option available in Deployment Wizard
    • Enhanced support for importing vCloud Director workloads
  • Service entitlement enhancements:
    • Checkbox to add all users to an entitlement
    • Delete inactive entitlements
  • Expanded extensibility capabilities:
    • Several new event broker topics for enhanced extensibility use cases
    • Subscription granularity for individual components, catalog items, component actions, containers, or deployments
    • Leverage extensibility with new container management functionality
    • Scale-out/Scale-in custom XaaS services and applications that include XaaS objects
  • Networking Enhancements:
    • IPAM framework support for NSX on-demand routed networks
    • New network profiles to support additional IPAM use cases
    • Configure load balancing policy for NSX on-demand load balancer in the blueprint (round-robin, IP-Hash, Leastconn)
    • Configure service monitor URL for HTTP/HTTPS
  • Container Management:
    • Integrated container management engine for deploying and managing Docker Container hosts and containers
    • Build hybrid applications that include containers and traditional OS
    • New Container administrator and architect roles
    • Auto discovery of provisioned container hosts
    • Minimum supported version of Docker is 1.9.0
    • If the Docker built-in load balancing is needed for clustered containers in user-defined networks, Docker 1.11 or later is required
  • Azure Endpoint for Hybrid Cloud provisioning and management:
    • Seamlessly build, deliver, and manage Azure machines with vRealize Automation
    • Support for Azure networking services
  • ServiceNow Integration:
    • Automatically expose entitled vRealize Automation catalog items in the ServiceNow portal by using the plug-in available on the VMware Solution Exchange

System Requirements

For information about supported host operating systems, databases, and Web servers, see the vRealize Automation Support Matrix.

Installation

For prerequisites and installation instructions, see Installing vRealize Automation.

Before You Upgrade

New vRealize Automation features introduce several enhancements, along with the ability to upgrade or migrate to the new version. For recommendations and guidance before you begin the upgrade process, visit the vRealize Automation Upgrade Assistance Web page.

Beginning with vRealize Automation 7.2, JFrog Artifactory Pro is no longer bundled with the vRealize Automation appliance. Upgrading from an earlier version of vRealize Automation removes JFrog Artifactory Pro. For more information, see Knowledge Base article 2147237.

Resolved Issues

  • When you change the host name to a different name after the Active Directory connection is initialized, the Active Directory connector is unusable and Active Directory fails. This issue has been resolved.

  • The Destroy VMware NSX load balancer option appears as an entitled action or as an approval policy option. This issue has been resolved.

  • After a fresh installation, the master appliance node does not see the status of the replica appliance node. This issue has been resolved.

  • An issue that allowed you to extend a lease indefinitely has been resolved. The lease can now be extended by (current date + max allowed lease).

  • The JRE has been updated to include the Oracle Critical Patch Update of October 2016. The jdk-1.8.0_102 was updated to jdk-1.8.0_112.

  • Scale Out/Scale In fails in 6.x upgraded deployments and bulk imported deployments. This issue has been resolved.

  • The Download vRealize Automation Appliance Updates from a VMware Repository upgrade topic contains a prerequisite that improperly references vRealize Automation 6.2.4 or 6.2.5. This issue has been resolved.

Known Issues

Installation

  • New The vcac-config command fails when a parameter begins with the @ symbol
    You log in as root to the vRealize Automation appliance console, run a vcac-config command, and see an error similar to the following example.

    /usr/sbin/vcac-config prop-util -e --p '@Follow123'
    Could not read file Follow123: java.io.FileNotFoundException: Follow123 (No such file or directory)

    The problem occurs because the vcac-config command cannot accept a parameter that begins with the @ symbol. For example, a name or password that begins with the @ symbol causes the error.

    Workaround: None. Do not enter vcac-config parameters that begin with the @ symbol.
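
    For example, the same command succeeds when the value does not begin with the @ symbol. The following invocation is illustrative only; the value is a placeholder:

    /usr/sbin/vcac-config prop-util -e --p 'Follow123'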

  • Database configuration fails during fresh installation of vRealize Automation 7.2 on Windows Turkish language version
    If the IaaS Server is the Windows Turkish language version, the vRealize Automation Installation Wizard fails during database configuration and displays this error message: MSB3073.

    Workaround: This issue is expected to be resolved in a future release.

  • Initial content creation process fails during installation at this step: Execute workflow to create configurationadmin user
    In /var/log/messages, two different executions of the create configurationadmin user process appear. They run simultaneously, and the number after va-agent.py shows that the processes are different:

    /usr/lib/vcac/agent/va-agent.py[18405]: info Executing vRO workflow for creating configurationadmin user...
    ...
    /usr/lib/vcac/agent/va-agent.py[18683]: info Executing vRO workflow for creating configurationadmin user...

    The first call creates the configuration admin user, and the second call causes the failure.

    Workaround: Resume the initial content creation process by running the following two commands from the primary appliance. The parameters are self-descriptive:

    /usr/sbin/vra-command execute --node ${NODE_ID} import-asd-blueprint --ConfigurationAdminUser configurationadmin --ConfigurationAdminPassword "${CONFIGURATION_ADMINISTRATOR_PASSWORD}" --DefaultTenant "${SSO_TENANT}"

    /usr/sbin/vra-command execute --node ${NODE_ID} execute-vro-initial-configuration-service --VidmAdminUser "${HORIZONUSER}" --VidmAdminPassword "${HORIZONPASS}" --ConfigurationAdminPassword "${CONFIGURATION_ADMINISTRATOR_PASSWORD}" --DefaultTenant "${SSO_TENANT}"

    The NODE_ID can be obtained by running vra-command list-nodes and finding the primary virtual appliance node ID, as shown in the sketch below.
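
    The following is a minimal sketch of the preparation steps, assuming you run them as root on the primary appliance and substitute your own values for the placeholders:

    # List the cluster nodes and note the primary virtual appliance node ID
    /usr/sbin/vra-command list-nodes

    # Placeholder values consumed by the two vra-command execute calls above
    NODE_ID=<primary-node-id>
    SSO_TENANT=<default-tenant>
    CONFIGURATION_ADMINISTRATOR_PASSWORD='<configurationadmin-password>'
    HORIZONUSER=<vidm-admin-user>
    HORIZONPASS='<vidm-admin-password>'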

Upgrade

  • vRealize Automation migration from 6.x to 7.2 fails if the 7.2 target environment has a different vRealize Orchestrator admin group set as the default
    The default vRealize Orchestrator admin group, vsphere.local/vcoadmin, should not be changed in the vRealize Orchestrator control center prior to migration.

    Workaround: See Knowledge Base article 2148669.

  • STOMP client cannot establish connection after upgrading tcServer to version 3.2
    In vRealize Automation 7.2, the IaaS Manager Service only supports REST polling as the connection mechanism when communicating with the event broker service. The Extensibility.Client.RetrievalMethod configuration setting is ignored.

  • IaaS Installer fails to start
    The IaaS Installer fails to start and displays this message: "A newer version of the product is already installed on this machine." This happens when the IaaS Installer MSI package fails to start after you manually update the IaaS Management Agent to the latest available version.
    Symptoms:

    • You manually update the IaaS Management Agent of vRealize Automation 7.2 with the latest version available at VMware Downloads.
    • Starting the IaaS Installer executable fails after upgrading to vRealize Automation 7.2 with the IaaS upgrade shell script.
    • After upgrading to vRealize Automation 7.2, the cluster tab in the appliance management page shows a higher IaaS Management Agent minor version than that of the other IaaS components.

    Workaround: See Knowledge Base article 2148278.

  • If you use the new Upgrade Shell Script in vRealize Automation 7.2, you must first upgrade to the latest Management Agent
    If you plan to run an automated upgrade of the IaaS components with the new Upgrade Shell Script, you must use the latest Management Agent available for download. Do not use the Management Agent that is included in the vRealize Automation 7.2 Virtual Appliance.

    Workaround: See Knowledge Base article 2147926.

  • If telemetry is disabled before you upgrade vRealize Automation from 6.2.4 or 6.2.5 to 7.2, the telemetry tab in the vRealize Automation appliance management console might show an error
    This message might appear after upgrade: Error: Unable to determine next run time. Please re-enable or disable telemetry. This message appears because no telemetry data is being collected, so the system cannot determine a proper next run time. When this is the case, no telemetry functions occur.

    Workaround: Choose to enable or disable telemetry using the Join the VMware Customer Experience Improvement Program checkbox and click Save Settings.

Configuring and Provisioning

  • Azure virtual machine provisioning fails if the resource group name contains non-ASCII characters

    Workaround: Do not use non-ASCII characters in a resource group name.

  • State data collection returns only the Primary IP
    This behavior can affect actions such as Connect using RDP, Connect using SSH, and registering a virtual machine as a container host in the container service, as well as any other operation that relies on accessing a virtual machine by its IP address.

    Workaround: This issue is expected to be resolved in a future release.

  • Failed to parse pool request for address space "" pool "" subpool "" error appears during networking integration tests
    The networking integration tests fail and a similar message appears in the log. This is a known Docker issue: https://github.com/docker/libnetwork/issues/1101. The root cause is that some networks are not correctly released, and Docker can reach the maximum number of networks allowed.

    Workaround: Delete containers and networks.

    1. Stop the Docker daemon.
      sudo systemctl stop docker.service
    2. Delete containers and networks.
      sudo rm /var/lib/docker/network/files/local-kv.db; sudo rm -rf /var/lib/docker/containers
    3. Start the Docker daemon.
      sudo systemctl start docker.service

  • Integration tests sometimes fail with the error: The name "/container-name" is already used by container <hash>
    This is a known Docker issue: https://github.com/docker/docker/issues/23371. When this error occurs, the following stack trace appears:

    java.lang.IllegalStateException: Failed with Error waiting for /requests/<hash> to transition to COMPLETED. Failure: failure: Service https://dockerhost/v1.19/containers/create?name=<container-name> returned error 409 for POST. id <id>; Reason: Conflict. The name "/<container-name>" is already in use by container <hash>. You must remove (or rename) that container to be able to reuse that name.

    Workaround: Retrigger the tests. If the container that failed is the agent, you must delete containers and networks.

    1. Stop the Docker daemon.
      sudo systemctl stop docker.service
    2. Delete containers and networks.
      sudo rm /var/lib/docker/network/files/local-kv.db; sudo rm -rf /var/lib/docker/containers
    3. Start the Docker daemon.
      sudo systemctl start docker.service

  • In a clustered setup, changing the placement zone for a host can take time before it is reflected in the UI
    When the placement zone for a host in a clustered setup is changed, the old and new placement zones might appear in the host list, although the host is immediately assigned to the new placement zone and the old assignment is not used. This happens only in a clustered setup and affects only the UI.

    Workaround: Wait five minutes for the UI to be updated.

  • Scale-out of Docker containers with service links might fail with a "Provisioning for container X failed... Docker returned error 500 for POST..." error
    Deploying a template that includes multiple containers with links to enable communication between services, but no explicit network configured to connect the containers, results in all of the containers being provisioned on the same host.

    Workaround: Edit your template to add a new on-demand network and connect all of the containers to it. This ensures that the scaled-out containers are provisioned wherever the on-demand network is available and that all the containers can see each other.

  • Discovered networks might display an incorrect number of connected containers
    If you click the number of containers displayed for a network, the list of containers might be shorter than expected.

    Workaround: None.

  • Internal error message appears when you add an Azure machine to a blueprint in the Design tab
    When using an external vRealize Orchestrator server with vRealize Automation, Microsoft Azure integration is not available.

    Workaround: Export the Azure plug-in and package from the internal vRealize Orchestrator on your vRealize Automation virtual appliance, and install or import the plug-in and package to your external vRealize Orchestrator. After you install the Azure plug-in or import the Azure package to your external vRealize Orchestrator, Microsoft Azure is supported in your vRealize Automation environment.

    1. Log in to the vRealize Orchestrator Control Center for the internal vRealize Orchestrator on your vRealize Automation virtual appliance. For instructions, see Log in to the vRealize Orchestrator Configuration Interface.
    2. Under Plug-Ins, click Manage Plug-Ins.
    3. Find the Azure plug-in, and right-click Download plug-in in DAR file. Save the file to your desktop.
    4. Log in to the vRealize Orchestrator Control Center for your external vRealize Orchestrator. For instructions, see Log in to the vRealize Orchestrator Configuration Interface.
    5. Under Plug-Ins, click Manage Plug-Ins.
    6. Under Install plug-in, click Browse, and locate the Azure DAR file that you downloaded to your desktop.
    7. Click Install. If prompted to confirm, click Install again.
    8. In the Control Center, under Startup Options, click Restart to finish installing the new plug-in.
    9. Reboot all your vRealize Automation virtual appliances at the same time.
      Microsoft Azure integration functionality should be restored.

    If the integration does not function properly after the reboot, verify that the Azure package, com.vmware.vra.endpoint.azure, is present in the external vRealize Orchestrator. If the Azure package is not present, complete these steps.
    1. Log in to your internal vRealize Orchestrator client on your vRealize Automation virtual appliance.
    2. Export the Azure package, com.vmware.vra.endpoint.azure. For instructions, see Export a Package.
    3. Log in to the vRealize Orchestrator client for your external vRealize Orchestrator.
    4. Import the Azure package, com.vmware.vra.endpoint.azure, to your external vRealize Orchestrator. For instructions, see Import a Package.

  • Concurrent XaaS catalog requests calling Clone virtual machine, no customization workflow with 30 users causes some requests to fail
    When you request XaaS blueprints that invoke vRealize Orchestrator workflows performing operations on slow endpoints at high concurrency, some of the requests might fail with the error java.net.SocketTimeoutException: Read timed out. vRealize Orchestrator workflows can also be re-triggered multiple times because the requests time out.

    Workaround: Perform these steps on each vRealize Automation appliance node. The vcac.properties file is not preserved on upgrade. You must repeat these steps after upgrade.

    1. Open an SSH session on the vRealize Automation appliance.
    2. Edit /etc/vcac/vcac.properties to increase the client timeout to 10 minutes by adding the following line to the file: vco.socket.timeout.millis=600000
    3. At the command prompt, run this command to restart the vcac-server service: service vcac-server restart
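
    The following is a minimal sketch of steps 2 and 3, assuming the vco.socket.timeout.millis property is not already present in the file:

    echo 'vco.socket.timeout.millis=600000' >> /etc/vcac/vcac.properties
    service vcac-server restart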

  • Inventory data collection stops during a vCenter Server HA (VCHA) failover
    In rare cases, work items can get stuck in progress for a managed vSphere 6.5 endpoint during a VCHA failover.

    Workaround: Restart the vRealize Automation vSphere agent. If data collection is still stuck in progress, contact GSS.

  • vRealize Automation blueprint deployments that include NSX objects fail when provisioning to a cluster where the NSX manager has the secondary role
    In a cross-vCenter deployment of NSX, NSX universal objects, such as edge gateways, new virtual wires, and load balancers, must be provisioned by using the NSX manager that has the primary role. If you attempt to provision universal objects to a secondary NSX manager, the process fails with an error. vRealize Automation does not support provisioning of NSX universal objects to a vSphere endpoint with network and security integration where the specified NSX manager has the secondary role.

    Workaround: To use NSX global objects, you must create region-specific NSX local transport zones and virtual wires. Follow VMware KB 2147240 for details on this process within a VMware Validated Design.

  • Machines provisioned to Azure persist after you delete an Azure endpoint
    Deleting an Azure endpoint leaves behind orphaned machines, blueprints and reservations. If you want to delete a certain Azure VM before you delete an Azure endpoint, delete it manually using the vRealize Automation console.

  • On a Mac, when you open a second VMware Remote Console for a single virtual machine, both consoles go blank
    Although you can open more than one VMware Remote Console (VMRC) for a single virtual machine on Windows, VMRC does not support multiple sessions. On Windows, each console is a separate process; on a Mac, each console attempts to show a single process.

    Workaround: Close all VMRC instances and only open one VMRC for a given machine.

  • Reprovision of a managed virtual machine on vSphere 6.5 during a vCenter High Availability (VCHA) failover permanently deletes the virtual machine
    During a VCHA failover with vSphere 6.5, if you have a reprovision in progress with a virtual machine on the same vSphere endpoint, the virtual machine can be destroyed. This is a rare event.

    Workaround: Request the original blueprint for the destroyed virtual machine.

  • vRealize Automation invalid credentials error appears after a vCenter High Availability (VCHA) failover
    After a VCHA failover on a managed vSphere 6.5 endpoint, the vRealize Automation logs might contain this error message for the endpoint: Cannot complete login due to an incorrect user name or password.

    Workaround: Restart the vRealize Automation vCenter agent.

  • Changing a virtual machine reservation does not work when the owner is different
    When the register operation is invoked on a managed IaaS virtual machine, the reservation used must belong to the current virtual machine owner. Only the current owner can be specified for the user parameter. If a user who is not the current owner is specified, the system records the virtual machine as belonging to one owner in IaaS and to a different owner in the catalog.

    Workaround: Only use the Change reservation to an IaaS Virtual Machine workflow for reservations that belong to the current virtual machine owner.

  • Unable to select blueprints for bulk import of unmanaged machine on vRealize Automation 7.1 upgraded to 7.2
    IaaS passes a lowercased tenant ID, rather than the case presented by the authorization service, to the API that retrieves blueprints for bulk import. If the user creates a tenant ID that uses mixed-case characters, for example Rainpole rather than rainpole, the lookup fails.

    Workaround: Generate the CSV file without a blueprint name or component and then manually edit the CSV file with the desired values for those fields.

  • Nested containers do not support networks
    You cannot add a network to a nested container.

    Workaround: This issue is expected to be resolved in a future release.

  • Contents of window do not display properly after connecting to a virtual machine on vSphere 6.5 using remote console
    When connecting to a machine hosted on a vSphere 6.5 endpoint using the remote console, the connection can fail or otherwise be unusable.

    Workaround: Connect to the affected machine using the VMRC client application. Select Connect using VMRC.

  • vCloud Air endpoints require matching Organization and vDC name
    For vCloud Air endpoints, the Organization name and the vDC name must be identical for a vCloud Air subscription instance.

  • Certificate replacement fails for multi-node deployments.
    When replacing certificates in a multi-node deployment, the replacement operation will fail if you initiate it from the Virtual Appliance Management Interface on a machine that is not the master node.

    Workaround: Initiate certificate replacement only from the Virtual Appliance Management Interface on the cluster master node.

Documentation and Help

The following items or corrections did not make it into the documentation for this release.

Previous Known Issues

Previous known issues are grouped as follows:

Installation

  • vRealize Automation 7.1 does not support Microsoft SQL 2016 130 mode
    The Microsoft SQL 2016 database created during the vRealize Automation wizard installation is in 100 mode. If you manually create an SQL 2016 database, it must also be in 100 mode. For related information, see the Microsoft article Prerequisites, Restrictions, and Recommendations for Always On Availability Groups.

  • Security updates affect prerequisite checker
    In this release, the Installation Wizard prerequisite checker fails when Microsoft security updates 3098779 and 3097997 are present. However, the prerequisite checker can detect the updates and prompt you to remove them using the Fix option. Afterward, you can rerun the prerequisite checker as usual.

    Workaround: Allow the Installation Wizard to remove the security updates so that the prerequisite checker will work. Alternatively, you may manually remove the updates. After finishing the wizard, you may manually reinstall updates 3098779 and 3097997.

  • Security updates affect silent installation
    In this release, Microsoft security updates 3098779 and 3097997 prevent the new silent installation feature from working properly. The updates are the same ones that affect the Installation Wizard prerequisite checker.

    Workaround: Before silent installation, you must manually remove the updates from IaaS Windows servers. You may manually reinstall updates 3098779 and 3097997 after silent installation finishes.
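
    The following is a minimal sketch of removing the updates from an elevated command prompt on an IaaS Windows server; confirm that the KB numbers match the updates installed in your environment before running it:

    wusa /uninstall /kb:3098779 /quiet /norestart
    wusa /uninstall /kb:3097997 /quiet /norestart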

  • The vRealize Automation appliance page does not load correctly
    When using Internet Explorer 11 in Windows 2012 R2, the Web interface page for the vRealize Automation appliance does not load correctly.

    Workaround: Use an alternate browser to access the vRealize Automation Web interface page.

Upgrade

  • After installation of vRealize Automation 7.1 or upgrade from vRealize Automation 7.0 to 7.1, the chosen custom background image on the login page is missing
    Customized branding present in vRealize Automation 7.0 is missing on the tenant login page after upgrade to vRealize Automation 7.1. Specified customized branding does not appear in a new installation of vRealize Automation 7.1.

    Workaround: There is no workaround.

  • Migration of native Active Directory fails with errors
    At present, the SSO migration utility does not automatically transfer a native Active Directory during the vRealize Automation migration process.

    Workaround: If you manually configure and launch native Active Directory, you can migrate Active Directory successfully. You must do this after you complete the vRealize Automation migration process.

  • IaaS node migration from vRealize Automation 6.2.4 to 7.1 fails when PostgreSQL server instance name contains non-ASCII characters

    Workaround: Use the Migrate a vRealize Automation Environment with an IaaS Database Backup procedure to migrate your vRealize Automation 6.2.4 environment to 7.1.

  • IaaS Management Agent configuration is corrupted after upgrade from a vRealize Automation 6.2.3 or earlier high-availability environment to 7.1
    After upgrade from vRealize Automation 6.2.2 to 7.1, the IaaS Management Agent cannot be started. An error message reports a missing node ID in the Management Agent configuration file.

    Workaround: See Knowledge Base article 2146550.

  • Scale in or scale out actions fail in an upgraded deployment
    Scale in or scale out actions are not supported for bulk-import deployments or deployments upgraded from vRealize Automation 6.x.

    Workaround: There is no workaround. New deployments made from blueprints after upgrade support scale in or scale out actions.

  • When you log in to the vRealize Automation appliance management console, an error message appears
    After you log in with the proper credentials, you receive an error message stating "Invalid server response. Please try again." This is caused by a problem with the browser cache.

    Workaround: Log out, clear your browser cache, and log in again.

  • Certain blueprints cannot be fully upgraded due to failures in updating catalog resources
    Upgraded multi-machine blueprints that contain on-demand networks or load balancer settings might not be fully functional after you upgrade to vRealize Automation 7.x.

    Workaround: After you upgrade, delete and re-create the deployments associated with multi-machine blueprints. All associated NSX Edge cleanup work must be done in NSX.

  • When you upgrade from vRealize Automation 6.2.0 to 7.0, vPostgres upgrade fails, and an error message appears
    If the system has a corrupt RPM database, this error message appears during the upgrade process: Failed to install updates (Error while running pre-install scripts).

    Workaround: For information about how to recover from an RPM database corruption, see the article "RPM Database Recovery" on the RPM Web site. After you fix the problem, run the upgrade again.

  • When you run the Prerequisite Checker, the checker fails with a warning about RegistryKeyPermissionCheck, but the instructions to correct the error do not work during installation
    The Prerequisite Checker fails because it is case-sensitive for the user name.

    Workaround: Temporarily change the user you specified to run the Management Agent Service on the Windows machine to another user, and then change back to the original user by using the correct case for the user name.

  • When you upgrade the Manager Service and DEM Orchestrator system, a name validation error message appears and the Model Manager Web host cannot be validated
    The following error appears if the name of the load balancer changes in the ManagerService.exe.config file:
    Distributed Execution Manager "NAME" Cannot be upgraded because it points to Management model web host "xxxx.xxxx.xxxx.net:443", which cannot be validated. You must resolve this error before running the upgrade again: Cannot validate Model Manager Web host. The remote certificate is invalid according to the validation procedure.

    Workaround: Make the following changes to the ManagerService.exe.config configuration file. The default location is at C:\Program Files (x86)\VMware\vCAC\Server\ManagerService.exe.config.
    Change the registry values for all DEM instances. For example, the DEM instances in the following registry entries should both be updated.

    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vCloud Automation Center DEM\DemInstanceId02]
    "Name"="DEM"
    "Role"="Worker"
    "RepositoryAddress"="https://host_name:443/repository/"

    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vCloud Automation Center DEM\DemInstanceId03]
    "Name"="DEO"
    "Role"="Orchestrator"
    "RepositoryAddress"="https://host_name:443/repository/"

Configuring and Provisioning

  • In a high availability environment, Horizon fails to perform authentication after failover

    Workaround: After failover, restart the vRealize Automation appliance to restore authentication.

  • Some components might not function as expected after you drag an existing inner blueprint into a current outer blueprint
    Component settings can change depending on which blueprint the component is on. For example, if you include security groups, security tags, or on-demand networks at both the inner and outer blueprint levels, the settings in the outer blueprint override those in the inner blueprint. Network and security components are supported only at the outer blueprint level except for existing networks that work at the inner blueprint level.

    Workaround: Add all your security groups, security tags, and on-demand networks only to the outer blueprint.

  • If you create a property group with a period in the group name, you cannot use the vRealize Automation user interface to edit the group
    This issue occurs when you create a property group with a period in the group name, for example, property.group. If you use the vRealize Automation user interface to edit this property group, a blank page appears. You can use the REST API to edit this property group.

    Workaround: Avoid using a property group name that contains a period. If that is unavoidable, use the REST API to edit the group.

  • Loss of communication between IaaS and the common service catalog during destroy process leaves virtual machine in a disposing state
    If communication is lost between IaaS and the common service catalog while the destroy request is in progress but before vRealize Automation removes the virtual machine record from the database, the machine remains in a disposing state. After communication is restored, the destroy request is updated to either successful or failed, but the machine is still visible. Although the machine is deleted from the endpoint, the name remains visible in the vRealize Automation management interface.

  • When you change the vRealize Automation appliance host name, services are marked as unavailable

    Workaround: If any services are unavailable after you change the host name, restart the vRealize Automation server.

  • When you join a Management Agent domain account on a cloned Windows Server 2012 to a domain, the Management Agent domain account loses its rights on the agent certificate private key
    When you use a customization wizard to clone a machine in vSphere that is part of a domain, the machine is no longer part of that domain. When you rejoin the cloned machine to the domain, the following error message appears in the Management Agent log: CryptographicException - Keyset does not exist.

    Workaround: To resolve this issue, use the following procedure to open and close the security settings for the private key of the certificate without making any changes.

    1. Locate the certificate by using the Microsoft Management Console Certificates snap-in. The snap-in displays the agent ID in its Friendly name text box.
    2. Select All Tasks > Manage Private Keys.
    3. Click Advanced.
    4. Click OK.

  • Dragging an existing inner blueprint into a current outer blueprint is restricted
    When you drag an existing inner blueprint into a current outer blueprint, the following restrictions apply if the inner blueprint has machines joined to security groups, security tags, or on-demand networks. This issue might also occur on imported blueprints.
    • The outer blueprint cannot contain an inner blueprint that contains on-demand network settings or on-demand load balancer settings. Using an inner blueprint that contains an NSX on-demand network component or on-demand load balancer component is not supported.
    • When you add new or additional security groups to machines in the inner blueprint, the machines are joined only to new security groups that are added as part of an outer blueprint, even though the Blueprint Authoring page shows security groups from the inner and outer blueprint.
    • When you add new security tags to inner machines from an outer blueprint, security tags originally associated in the inner blueprint are no longer available.
    • When you add new on-demand networks to inner machines from an outer blueprint, on-demand networks originally associated in the inner blueprint are no longer available. Existing networks originally associated in inner blueprint remain available.

    Workaround: You can resolve this issue by performing one of the following tasks:

    • Add security groups, tags, or on-demand networks to the outer blueprint but not to the inner blueprint.
    • Add security groups, tags, or existing networks to the inner blueprint but not in the outer blueprint.

  • Directory Search Attribute menu on the Add Directory page contains inaccurate information
    Some code strings that first appear in the Directory Search Attribute menu are inaccurate.

    Workaround: Click the Directory Search Attribute drop-down menu to view accurate code strings.

  • Resource not found error occurs when requesting a catalog item
    When vRealize Automation is in High Availability mode, if the master database node fails and a new master node is not promoted, all of the services that require write access to the database fail or become temporarily corrupted until a new master database is promoted.

    Workaround: You cannot avoid this error when the master database is unavailable. You can promote a new master database so that this error disappears and you are able to request resources.

  • Changes are not saved on the Blueprint Form page of an XaaS blueprint
    If you do not click Apply after you update each field on the Blueprint Form page of an XaaS blueprint, your changes are not saved.

  • Items tab does not display information about the services that are enabled for a load balancer
    For machines provisioned by using a load balancer that is associated with vCloud Networking and Security, the Items tab does not display information about the services that are enabled for that load balancer.

  • If a machine is destroyed while vSphere clone operation is in progress, the in-progress machine clone task is not canceled
    This issue might cause the machine to be cloned. The cloned virtual machine might be managed in vCenter and no longer be under vRealize Automation management.

  • When you request a composite blueprint, the request fails immediately and the request details form fails to load
    When the maximum lease days for a component blueprint are less than the number of lease days in the outer blueprint, requests fail immediately and the request details form fails to load.

  • You cannot have deployments with bindings to DHCP IP addresses in software deployments
    If you attempt to do this, the ip_address is not available if no network profile exists. The following error message appears: System error: Internal error in processing component request: com.vmware.vcac.platform.content.exceptions.EvaluationException: No data for field: ip_address.

    Workaround: If a binding is required, use static IP addresses or IP addresses managed by vRealize Automation in the network profile, or use an IPAM integration. If you use DHCP, you should bind to the host name and not to the IP address.

    You can use the following script to get the IP address of a CentOS machine:
    IPv4_Address=$(hostname -I | sed -e 's/[[:space:]]$//')
    echo $IPv4_Address

    Bind to the value this script provides when the IP address is needed for DHCP use cases.

  • Directory is created even after an error message is received
    When you create a directory from Administration > Identity Stores Management > Identity Stores and click Save, the following error message might appear: Connector communication failed because of invalid data. Problem promoting bind DN user to administrator: the user already exists and is associated with different sync client. The new Identity Store is saved with an incorrect configuration and cannot be used.
    This error occurs if you attempt to save a new Active Directory with the same Base DN and Bind DN values that are already used in a previously created Active Directory.

    Workaround: You must manually delete the new Active Directory because its configuration is incorrect, and you must use a different Bind DN and Base DN for the new Active Directory.

  • Domain is added to a user UPN when you create a directory that includes the UserPrincipalName directory search attribute
    When you create a new directory and you select UserPrincipalName for the Directory Search Attribute, a domain is added to a user UPN. For example, the vRealize Automation user name of a user with the user.domain@domain.local UPN appears as user.domain@domain.local@domain.local. This happens if the UPN suffix at the Active Directory site is configured to be the domain. If the UPN suffix is customized, for example to "example.com," then the vRealize Automation user name of a user with the user.domain@example.com UPN appears as user.domain@example.com@domain.local.
    If the UserPrincipalName directory search attribute is used, users must enter their user name exactly as it appears (user.domain@domain.local@domain.local), including the domain, to log in to use the REST API or Cloud Client.

    Workaround: Use sAMAccountName instead of UserPrincipalName to use the user name domain uniqueness functionality of Directories Management.

  • A 404 Not Found error appears when requesting a machine on behalf of another user
    If a blueprint includes an on-demand NAT network or an on-demand load-balancer component, a 404 Not Found error appears when a deployment requested on behalf of another user is made.

  • Machines imported with Bulk Import are not mapped to the correct converged blueprint and component blueprint

    Workaround: Add the VMware.VirtualCenter.OperatingSystem custom property to each machine in the import CSV file.

    For example:
    Yes,NNNNP2-0105,8ba90c35-9e03-4ac4-8a5d-2e6d76f37b81,development-res,ce-san-1:custom-nfs-2,UNNAMED_DEPLOYMENT-0105,BulkImport,Imported_Machine,system_blueprint_vsphere,user.admin@sqa.local,VMware.VirtualCenter.OperatingSystem,sles11_64Guest,NOP

  • Catalog Management Actions are missing in vRealize Automation

    Workaround: See Knowledge Base article 2113027.

  • An Active Directory that includes more than 15 user groups fails to list the groups when you sync the Active Directory
    If you have more than 15 groups, and you attempt to synchronize the Active Directory in the vRealize Automation management interface using Administration > Identity Stores Management > Identity Stores, only a few groups appear.

    Workaround: Click Select to view the full list.

  • After you promote a replica instance to the master instance, wrong information appears on the Database tab in the vRealize Automation master node management interface
    When the master node in the vRealize Automation appliance fails, you should use the vRealize Automation appliance management interface of a healthy node for cluster management operations.

  • Moving a datastore from one vSphere Storage DRS to another causes the system to delete instead of create a virtual machine
    If you move a datastore from one vSphere Storage DRS cluster to another vSphere Storage DRS cluster and the target cluster's automation level is not automatic, re-provisioning a created machine causes the system to delete the machine with the following error: StoragePlacement: datastore unspecified for disk in sdrs-disabled VM. This issue does not occur if the virtual machine is cloned.

    Workaround: Verify that the target cluster's automation level is set to automatic before you move a datastore from one vSphere Storage DRS cluster to another. Only single machine deployments are supported.
