VMware Live Site Recovery 9.0 | 19 MAR 2024 | Build 23464738 | Download

Live Site Recovery Configuration Import/Export Tool 9.0 | 19 MAR 2024 | Build 23464738 | Download

Check for additions and updates to these release notes.

Product Support Notice

With the release of VMware Live Recovery, VMware Site Recovery Manager is rebranded to VMware Live Site Recovery. The rebranding is an ongoing effort and certain parts of the user interface might still use the old Site Recovery branding.

What's New in VMware Live Site Recovery 9.0

  • VMware Live Recovery

    A new solution that provides ransomware protection and disaster recovery across the VMware Cloud in a single unified console. VMware Live Recovery (VLR) is designed to help organizations protect their VMware-based applications and data from a wide variety of threats, including ransomware attacks, infrastructure failure, human error, and more. For more information, see VMware Live Recovery.

  • Maximum number of virtual machines per protection group increased to 1500.

    VMware Live Site Recovery 9.0 increases the maximum number of virtual machines per protection group to 1500.

  • VMware Aria Automation Orchestrator Plug-in for VMware Live Site Recovery 9.0

  • VMware Aria Operations Management Pack for VMware Live Site Recovery 9.0

Localization

The new features and documentation of VMware Live Site Recovery 9.0 are available only in English.

Compatibility

VMware Live Site Recovery Compatibility Matrix

VMware Live Site Recovery 9.0 is compatible with vSphere 7.0 Update 3 and later, and supports ESXi versions 7.0 Update 3 and later.

VMware Live Site Recovery 9.0 requires a supported vCenter Server version on both the protected site and the recovery site.

For interoperability and product compatibility information, including support for guest operating system customization, see the Compatibility Matrices for VMware Live Site Recovery 9.0.

Compatible Storage Arrays and Storage Replication Adapters

For the current list of supported compatible storage arrays and SRAs, see the Site Recovery Manager Storage Partner Compatibility Guide.

Compatible Virtual Volumes Partner VASA Providers

For the current list of compatible Virtual Volumes Partner VASA providers, see the VMware Compatibility Guide.

VMware vSAN Support

VMware Live Site Recovery 9.0 can protect virtual machines that reside on VMware vSAN and vSAN Express Storage Architecture by using vSphere Replication. vSAN does not require a Storage Replication Adapter (SRA) to work with VMware Live Site Recovery 9.0.

Installation and Upgrade

For information about installing and upgrading VMware Live Site Recovery, see How do I set up VMware Live Site Recovery.

For the supported upgrade paths for VMware Live Site Recovery, select Upgrade Path and VMware Live Site Recovery in the VMware Product Interoperability Matrices.

NOTES:

  • If the vCenter Server instances on the protected and recovery sites are in Enhanced Linked Mode, they must be direct replication partners. Otherwise, upgrade might fail.

Network Security

VMware Live Site Recovery requires a management network connection between paired sites. The VMware Live Site Recovery Server instances on the protected site and on the recovery site must be able to connect to each other. In addition, each VMware Live Site Recovery instance requires a network connection to the Platform Services Controller and the vCenter Server instances that VMware Live Site Recovery extends at the remote site. Use a restricted, private network that is not accessible from the Internet for all network traffic between VMware Live Site Recovery sites. By limiting network connectivity, you limit the potential for certain types of attacks.

For the list of network ports that VMware Live Site Recovery requires to be open on both sites, see Network Ports for VMware Live Site Recovery.

Operational Limits for VMware Live Site Recovery 9.0

For the operational limits of VMware Live Site Recovery 9.0, see Operational Limits of VMware Live Site Recovery.

Open Source Components

The copyright statements and licenses applicable to the open source software components distributed in VMware Live Site Recovery 9.0 are available at VMware Live Site Recovery Downloads. You can also download the source files for any GPL, LGPL, or similar licenses that require the source code or modifications to the source code to be made available for the most recent generally available release of VMware Live Site Recovery.

Caveats and Limitations

  • Activating VMware Live Recovery is not supported if you have replications within a single vCenter Server (ROBO replications).

  • VMware Live Site Recovery 9.0 does not support the protection of virtual machines using persistent memory (PMem) devices.

  • In a federated environment with linked vCenter Server instances, when you log in to the local site through the REST API gateway, you are automatically logged in to the remote site as well. You do not have to make a POST /remote-session request. You cannot log in to the remote site with a different user name.

  • The protection and recovery of encrypted virtual machines with vSphere Replication requires VMware vSphere 7.0 Update 2c or later.

  • When a linked clone virtual machine is created, some of its disks continue to use the base virtual machine disks. If you use vVols replication, you must replicate the linked clone virtual machine in the same replication group as the base virtual machine; otherwise, you get the following error message: "Virtual machine '{vmName}' is replicated by multiple replication groups." If you have to replicate the base virtual machine in a different replication group than the linked clone virtual machines, or the base virtual machine cannot be replicated at all, the linked clone virtual machines must be converted to full clones.

  • VMware Live Site Recovery 9.0 does not support Virtual Volumes replication of unattached disks which are present only in a snapshot.

  • VMware Live Site Recovery does not currently support NetApp Cloud Volumes Service for Google Cloud VMware Engine, either as a source or as a target for a replication.

  • VMware Live Site Recovery 9.0 does not currently support AVS ANF for NetApp ONTAP NFS storage, either as a source or as a target for a replication.

  • The VMware Live Site Recovery 9.0 Configuration Import/Export Tool attempts to import the recovery settings of protected virtual machines only once, regardless of whether the protected virtual machines are part of one or many recovery plans.

  • vSphere Flash Read Cache is disabled on virtual machines after recovery and the reservation is set to zero. Before performing a recovery on a virtual machine that is configured to use vSphere Flash Read Cache, take note of the virtual machine's cache reservation in the vSphere Web Client. You can reconfigure vSphere Flash Read Cache on the virtual machine after the recovery.

  • VMware Live Site Recovery 9.0 supports the protection of virtual machines with uni-processor vSphere FT, but deactivates uni-processor vSphere FT on the virtual machines on the recovery site after a recovery.

    • If you use uni-processor vSphere FT on virtual machines, you must configure the virtual machines on the protected site so that VMware Live Site Recovery can deactivate vSphere FT after a recovery. For information about how to configure virtual machines for uni-processor vSphere FT on the protected site, see https://kb.vmware.com/kb/2109813.

  • VMware Live Site Recovery 9.0 supports vSphere Replication 9.0 with vSphere Virtual Volumes with the following limitations.

    • You cannot use vSphere Replication point-in-time snapshots with virtual machines where the replication target is a Virtual Volumes datastore.

    • When using vSphere Virtual Volumes storage as a replication target, all disks belonging to the virtual machine must be replicated to a single vSphere Virtual Volumes datastore.

    • When a replicated virtual machine is located on vSphere Virtual Volumes storage, all disks belonging to that virtual machine must be located on a single vSphere Virtual Volumes datastore.

  • VMware Live Site Recovery 9.0 does not support NFSv4.1 datastores for array-based replication. You can use VMware Live Site Recovery 9.0 with NFSv4.1 datastores for vSphere Replication.

  • To use two-factor authentication with RSA SecurID or Smart Card (Common Access Card) authentication, your environment must meet the following requirements:

    1. Use the administrator credentials of your vCenter Server to install VMware Live Site Recovery 9.0 and to pair your VMware Live Site Recovery 9.0 sites.

    2. The vCenter Server instances on both VMware Live Site Recovery 9.0 sites must work in Enhanced Linked Mode. To prevent failures during upgrade of VMware Live Site Recovery from 9.0 to a newer version of VMware Live Site Recovery, the vCenter Server instances on both sites must be direct replication partners.

Known Issues

  • vSphere Replication issues for VMware Live Site Recovery degraded and suspended modes remain after returning to operational mode

    If you enter VMware Live Site Recovery degraded or suspended mode and then return to operational mode, the vSphere Replication issues remain. The problem occurs because of a delay in clearing the issues.

    Workaround: Wait one hour or restart the vSphere Replication Management server.

  • Disconnecting VMware Live Site Recovery from the cloud does not remove the entry from the Live Site Recovery UI

    When you disconnect VMware Live Site Recovery from the cloud, the Live Site Recovery user interface still shows an active cloud connection. Use the Live Site Recovery Appliance Management UI to check the connection status.

    Workaround: None.

  • The VMware Live Recovery agent is slow to establish connection to vSphere Replication

    If you deploy vSphere Replication to a vCenter Server after the Site Recovery Manager instances were paired and connected to VMware Live Recovery, VMware Live Recovery might not be applied to vSphere Replication immediately.

    Workaround: Wait for at least 7 minutes before activating VMware Live Recovery.

  • New - Uploading SRAs might fail with an unauthorized error on Google Chrome or Microsoft Edge browsers

    When uploading SRAs on Google Chrome or Microsoft Edge, the operation times out due to the following bug in Chromium: https://issues.chromium.org/issues/41093011.

    Workaround: Use a different browser when uploading SRAs.

Known Issues from Previous Releases

For additional information, see the Troubleshooting VMware Live Site Recovery chapter in the VMware Live Site Recovery documentation.

  • New - VMware Live Site Recovery alarms are not visible in the alarm definitions

    VMware Live Site Recovery events might not be visible when adding an alarm definition in vCenter Server 8.0 and vCenter Server 8.0 Update 1.

    Workaround: Upgrade your vCenter Server instance to vCenter Server 8.0 Update 2.

  • Planned Migration or Failover might fail with "A general system error occurred: Sandboxd call timed out" on VM Power ON operation

    In rare cases, for a large-scale Failover or Planned Migration, the workflows might fail with an error "A general system error occurred: Sandboxd call timed out" during the VM Power ON operation.

    Workaround: Re-run the failed recovery plan.

  • Planned Migration or Failover might fail with an error "Unable to write VMX .." during the VM power on step.

    For vSphere Replication protection groups, when you attempt to perform a planned migration or a failover at a scale of 4000 VMs, the operation might fail during the VM power on step with the following error: "Unable to write VMX file: /vmfs/volumes/...vmx".

    Workaround: Re-run the recovery plan.

  • During recovery of a large-scale environment the Site Recovery UI might throw an error

    When you attempt to run a recovery of a large-scale environment, during the operation the Site Recovery UI might throw the following error:

    "Unable to retrieve recovery steps data.Unable to connect to Site Recovery Manager Server at https://VCHostname/drserver/vcdr/vmomi/sdk. Reason: https://VCHostname/drserver/vcdr/vmomi/sdk invocation failed with "java.net.SocketTimeoutException: 30,000 milliseconds timeout on connection http-outgoing-238 [ACTIVE]"

    Workaround: Refresh the Site Recovery UI, switch between the tabs, or open it in a different browser tab.

  • Planned Migration for vSphere Replication PGs might fail with an error "VR synchronization failed for VRM group 'VM Name'. A general system error occurred: VM has no replication group"

    In rare cases, when the ESXi host temporarily loses connection to a target vSphere Replication NFS datastore, Planned Migration might fail with "VR synchronization failed for VRM group 'VM Name'. A general system error occurred: VM has no replication group". Even though this might be a sporadic issue, the replication might not return to an OK status automatically.

    Workaround: Reconfigure the replication to restore the state of all the failed virtual machines. Then re-run the Planned Migration.

  • After an upgrade to VMware Live Site Recovery and vSphere Replication 9.0 on a vCenter Server 8.0.x, the local VMware Live Site Recovery integration plug-in is not removed

    There are both local and remote VMware Live Site Recovery integration plug-ins in version 8.7. After the upgrade to version 9.0, the local plug-in is no longer needed, but it is not automatically removed.

    Workaround: To remove the local integration plug-in, restart the vSphere Client service from the vCenter Server terminal by using the following command: vmon-cli -r vsphere-ui.

  • A CD/DVD device is not connected after the recovery of a Virtual Volumes protected VM, when the device points to an image file on a datastore

    When a virtual machine with a CD/DVD device pointing to an image file on a datastore is recovered, you receive the following error "Connection control operation failed for disk 'sata0:0'." and the device is not connected.

    Workaround: Recreate the device pointing to the desired image file.

  • Reprotect fails with an error "Unable to reverse replication for the virtual machine ...The operation is not allowed in the current state"

    In large-scale environments with many datastore modification operations, reprotect might fail due to an overloaded OSFS module of vSAN.

    Workaround: Verify that there are no inaccessible virtual machines. See Virtual Machine Appears as Noncompliant, Inaccessible or Orphaned in vSAN.
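
    If you prefer to check this from the command line, the following PowerCLI sketch is an illustrative example only, not part of the documented workaround; it lists the virtual machines whose connection state is not "connected", for example inaccessible or orphaned ones:

    # List VMs that are not in the 'connected' state (for example inaccessible or orphaned)
    Get-VM | Where-Object { $_.ExtensionData.Runtime.ConnectionState -ne 'connected' } |
        Select-Object Name, @{N='ConnectionState'; E={ $_.ExtensionData.Runtime.ConnectionState }}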

  • When a virtual machine is on a vSAN datastore and is replicated by vSphere Replication with NSX-T present, it takes an additional 60 seconds to recover when migrating back to the protected site after being recovered on the recovery site

    NSX-T stores the port configuration inside the VM directory, which is not replicated by vSphere Replication. When the VM is migrated to the recovery site, the port configuration becomes invalid and is removed. Migrating the VM back to the protected site and resolving the removed port configuration causes a 60-second delay per VM when registering it in the vCenter Server inventory.

    Workaround: Fixed in ESXi version 8.0.2. Update all target recovery ESXi hosts to avoid the issue.

  • Planned migration after reprotect fails during vSphere Replication synchronization with an error

    During a large-scale reprotect, the VMware Crypto Manager module that manages the encryption keys on the hosts loses track of some of the encryption keys. As a result, the planned migration cannot complete successfully and fails with the following error.

    "An encryption key is required."

    Workaround 1: Use the following PowerCLI cmdlet to unlock all locked virtual machines in the vCenter Server instance.

    Get-VM | Where-Object {$_.ExtensionData.Runtime.CryptoState -eq 'locked'} | Unlock-VM

    Workaround 2: In the vSphere UI, navigate to the Summary tab of the virtual machine and click Issues and Alarms > Virtual Machine Locked Alarm > Actions > Unlock VM.

    Workaround 3: The issue is fixed in vCenter Server 8.0.2. Update the recovery site to vCenter Server 8.0.2 to avoid the issue.

  • Disaster recovery with stretched storage freezes during the 'Change recovery site storage to writable' step

    When using Pure Storage storage arrays with unified configuration for stretched storage, if VMware Live Site Recovery on the protected site loses connection but the vCenter Server remains accessible, disaster recovery freezes during the 'Change recovery site storage to writable' step. This is related to the way Pure Storage SRA commands operate.

    Workaround: Navigate to the protected site and power off the virtual machines you are trying to recover. This unblocks the disaster recovery operation and all virtual machines are successfully recovered.

  • Changing the value of the recovery.powerOnTimeout settings does not change the actual timeout

    When you attempt to change the value of the recovery.powerOnTimeout advanced setting, the changes do not take effect.

    Workaround:

    1. In the vSphere Client, click Site Recovery > Open Site Recovery.

    2. On the Site Recovery home tab, select a site pair, and click View Details.

    3. In the left pane, click Configure > Advanced Settings > Replication.

    4. Click Edit and set replication.archiveRecoverySettingsLifetime to 0.

    5. Repeat the steps on the other site.

    6. In the left pane, click Configure > Advanced Settings > Recovery.

    7. Click Edit and set recovery.powerOnTimeout to the required value.

    8. Repeat the step on the other site.

  • Disaster recovery fails with an error

    Disaster recovery for virtual machines residing on stretched storage fails with the following error: "A general system error occurred: Cannot allocate memory"

    Workaround: Re-run the disaster recovery operation.

  • Recovery plan in a Recovery incomplete state cannot be successfully completed

    If a failover with vMotion is interrupted during the vMotion step and the plan goes into the Recovery incomplete state, all subsequent re-runs of the plan might fail at the Change recovery site storage to writable step. The error at this step is incorrect, and the failover actually completes successfully. However, the plan stays in the Recovery incomplete state and cannot be switched back to the Ready state because of this.

    Workaround: To successfully fail back VMs to the primary site, recreate the Protection Groups and the Recovery Plan.

  • Reprotect fails with an error

    When you are replicating virtual machines at a large scale, reprotect might fail with the following error: "Unable to reverse replication for the virtual machine A generic error occurred in the vSphere Replication Management Server "java.net.SocketTimeoutException: Read timed out"

    Workaround: 

    1. Navigate to the /opt/vmware/hms/conf/hms-configuration.xml file.

    2. Increase the value of hms-default-vlsi-client-timeout to 15 minutes on both sites.

    3. Restart the HMS services.

  • Recovery plan with a vSphere Replication replicated virtual machine fails with an error

    If the vSphere Replication replicated virtual machine with MPITs has several replicated disks on several different datastores and you stop the replication of one disk and detach it from the protection group, the failover will fail with the following error "Invalid configuration for device '0'".

    Workaround: Do not stop the replication of one of the disks and do not detach the disk from the protection group.

  • Reprotect operation for a large number of VMs fails with an error

    When you try to perform a reprotect operation for a large number of VMs, the process might fail with one of the following errors:

    Unable to reverse replication for the virtual machine <VM_name>

    or

    A general system error occurred: Failed to open virtual disk

    These problems might be observed due to temporary storage overload or network issues.

    Workaround: Retry the reprotect operation for these VMs.

  • After performing disaster recovery and then powering on the down site, some virtual machines go into an orphaned state

    When powering on the down site after performing a Disaster Recovery in a Stretched Storage environment, some virtual machines might appear in an orphaned state. The problem is observed for virtual machine protection groups in a Stretched Storage environment.

    Workaround: Remove the entries for all orphaned virtual machines from the vCenter Server inventory before running other Site Recovery Manager workflows.
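
    As an illustration only, the following PowerCLI sketch finds the orphaned virtual machine entries and removes them from the vCenter Server inventory. Remove-VM without -DeletePermanently unregisters the VMs but does not delete their files; review the list before removing anything:

    # List the orphaned entries first, then remove them from the inventory only
    $orphaned = Get-VM | Where-Object { $_.ExtensionData.Runtime.ConnectionState -eq 'orphaned' }
    $orphaned | Select-Object Name
    $orphaned | Remove-VM -Confirm:$false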

  • When a replicated disk is on Virtual Volumes storage and is resized, the disk is recovered as Thin Provisioned regardless of original disk type

    The disk resize operation internally makes a copy of the disk, which, due to the specifics of Virtual Volumes storage, defaults to the Thin Provisioned disk type regardless of the base disk type. The disk resize completes, but the resulting resized disk has the Thin Provisioned type when recovered by vSphere Replication.

    Workaround: If required, you can change the disk type manually after recovery.

  • One or more replications go into Error (RPO violations) state after reprotect operation

    After you perform a reprotect operation, one or more of the replications go into error state with the following error:

    A problem occurred with the storage on datastore path '[<datastore-name>] <datastore-path>/hbrdisk.RDID-<disk-UUID>.vmdk'

    Workaround:

    1. Remove the replication.

    2. Configure the replication again, using seed disks.

  • Some of the recovered virtual machines throw the following alarm 'vSphere HA virtual machine failover failed'

    During a Site Recovery Manager workflow, after Test Recovery or Failover operations, some of the recovered virtual machines might throw the following alarm: vSphere HA virtual machine failover failed. From the Site Recovery Manager perspective, there is no functional impact as all virtual machines are recovered successfully.

    Workaround: None. You must acknowledge the alarm.

  • DNS servers are available in the network configuration of the Site Recovery Manager Appliance Management Interface, even if you selected static DNS without DNS servers

    When your network settings require no DNS servers but use automatic DHCP adapter configuration, setting static DNS together with DHCP in the adapter configuration results in DNS servers acquired from DHCP.

    Workaround: Use 127.0.0.1 or ::1 in the static DNS servers list, depending on the selected IP protocol.

  • Reprotect fails when using stretched storage on some storage arrays

    The command to reverse the replication on some devices is skipped intentionally when the devices are already in the expected state. As a result, the storage arrays do not get the required notifications, which causes the reprotect operation to fail.

    Workaround:

    1. Navigate to the vmware-dr.xml file and open it in a text editor.

    2. Set the configuration flag storage.forcePrepareAndReverseReplicationForNoopDevices to true.

      <storage>
      <forcePrepareAndReverseReplicationForNoopDevices>true</forcePrepareAndReverseReplicationForNoopDevices>
      </storage>

    3. Save the file and restart the Site Recovery Manager server service.

  • Devices and Datastores information is missing during the failover of a recovery plan with array-based replication protection groups

    When you run a recovery plan failover, depending on the SAN type and whether it detaches the datastore from the host during recovery, the information in the Devices and the Datastores tabs might disappear during the failover process.

    Workaround: None. The information in both tabs appears again after a successful reprotect.

  • Updated - Customization through IP subnet mapping rules is not fully supported for Linux VMs using multiple NICs which are named ethX

    Site Recovery Manager does not fully support IP rule-based customization for Linux virtual machines that have multiple NICs, if the NICs have mixed DHCP and static IP settings. Site Recovery Manager customizes only the NICs with static IP addresses for which it has a matching IP subnet mapping rule, and might clear some configuration settings for the other NICs configured with DHCP. A known issue related to this scenario was observed for Red Hat Enterprise Linux 6.x/7.x and CentOS 6.x/7.x, where Site Recovery Manager customization deletes the /etc/sysconfig/network-scripts/ifcfg-ethX files for the NICs configured with DHCP and successfully customizes the rest with static IP settings according to the matched IP subnet mapping rule. This issue also occurs when the VM's NICs are all configured with static IP addresses, but some of them have a matching IP subnet rule while others do not. Some configuration settings for the NICs without a matching IP subnet rule might be cleared after IP customization.

    Workaround: For correct IP customization of Linux VMs that use multiple NICs, where some NICs have a matching IP subnet mapping rule and others do not, use the manual IP customization option of Site Recovery Manager.

  • The Site Recovery UI becomes unusable, showing a constant stream of 403 - OK error messages

    The Site Recovery UI shows no data and an error 403 - OK.

    Workaround:

    1. Log out from Site Recovery UI and log in again.

    2. Disable the browser's 'Restore last session' checkbox. For Chrome disable the 'Continue where you left off' option.

  • Datastore cluster that consists of datastores that are not replicated or are from different consistency groups visible to Site Recovery Manager does not trigger an SRM warning

    You create a datastore cluster that consists of datastores that are not all in the same consistency group or are not replicated. A Site Recovery Manager warning should appear but does not.

    Workaround: None.

  • Your datastore might appear as inactive in the inventory of the original protected site after reprotect

    If you use stretched storage and run reprotect after a disaster recovery, you might receive the following warning:

    The requested object was not found or has already been deleted.

    After reprotect, the datastore in the inventory of the original protected site appears as inactive.

    Workaround: Refresh or rescan the storage adapters.

    1. Click the Configure tab and click Storage Adapters.

    2. Click the Refresh or Rescan icon to refresh or rescan all storage adapters.
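
    Alternatively, a minimal PowerCLI sketch that rescans the storage adapters on the affected host; 'esxi-hostname' is a placeholder for a host on the original protected site:

    # Rescan all HBAs and VMFS volumes on the host
    Get-VMHostStorage -VMHost (Get-VMHost 'esxi-hostname') -RescanAllHba -RescanVmfs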

  • Recovery of an encrypted VM might fail during the Power On step if the encryption key is not available on the recovery site

    If you recover an encrypted VM and the encryption key used on the protected site is not available on the recovery site during the recovery process, the recovery fails when Site Recovery Manager powers on the VM.

    Workaround: Complete the following steps.

    1. Remove the encrypted VM from the inventory of the recovery site.

    2. Ensure that the Key Management Server on the recovery site is available and that the encryption key used on the protected site is available on the recovery site.

    3. Register the encrypted VM to the inventory of the recovery site.

    4. In the Site Recovery Manager user interface, open the recovery settings of the encrypted VM and disable power on of the VM during recovery.

    5. Rerun recovery.
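
    For steps 1 and 3, the following PowerCLI sketch shows one possible way to unregister and re-register the VM; 'EncryptedVM', the .vmx path, and 'esxi-hostname' are placeholders, and steps 2, 4, and 5 still have to be performed as described above:

    # Step 1: remove the VM from the inventory only (the files remain on the datastore)
    Remove-VM -VM 'EncryptedVM' -Confirm:$false

    # Step 3: register the VM again once the encryption key is available on the recovery site
    New-VM -VMFilePath '[datastoreName] EncryptedVM/EncryptedVM.vmx' -VMHost (Get-VMHost 'esxi-hostname')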

  • Planned Migration might fail with an error for VMs protected on vSphere Virtual Volumes datastore

    If you have VMs protected on vSphere Virtual Volumes datastores, the planned migration of the VMs might fail with the following error on the Change recovery site storage to writable step.

    Error - Storage policy change failure: The vSphere Virtual Volumes target encountered a vendor specific error. Invalid virtual machine configuration. A specified parameter was not correct: path.

    Workaround: Rerun the recovery plan.

  • The IP customization or in-guest callout operations might fail with Error - Failed to authenticate with the guest operating system using the supplied credentials

    Workaround:

    When the recovery.autoDeployGuestAlias option in Advanced Settings is TRUE (the default):

    • Ensure that the time of the ESX host where the VM is recovered and running is synchronized with the vCenter Single Sign-On servers on the recovery site.

    • If the guest OS of the recovered VM is Linux and its time is ahead of the ESX host on which the recovered VM is running, update the configuration parameters of the VM by using the following procedure and rerun the failed recovery plan.

      1. Right-click the recovered VM.

      2. Click Edit Settings.

      3. In the Options tab, click General.

      4. Click Configuration to update the configuration parameters.

      5. Click Add Row and enter time.synchronize.tools.startup.backward in the Name text box and TRUE in the Value text box.

      6. Click OK to confirm.
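
      As an alternative to steps 1-6, the following PowerCLI sketch adds the same configuration parameter; 'RecoveredVM' is a placeholder for the name of the recovered VM:

      # Add the configuration parameter to the recovered VM
      New-AdvancedSetting -Entity (Get-VM 'RecoveredVM') -Name 'time.synchronize.tools.startup.backward' -Value 'TRUE' -Confirm:$false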

    When the recovery.autoDeployGuestAlias option in Advanced Settings is FALSE:

    • Ensure proper time synchronization between your guest OS on the protected VM and vCenter Single Sign-On servers on the recovery site.

    • Ensure that your protected VMs have correct guest aliases configured for the Solution User on the recovery site SRM server. For more information, see the description of the recovery.autoDeployGuestAlias option in Change Recovery Settings.

    For more information, see the related troubleshooting sections in the Site Recovery Manager 8.4 Administration guide.

  • Site Recovery Manager fails to track removal of non-critical virtual machines from the vCenter Server inventory, resulting in MONF errors in recovery, test recovery and test cleanup workflows.

    Site Recovery Manager loses connections to the vCenter Servers on the protected and recovery sites and cannot monitor removal of non-critical virtual machines.

    Workaround: Restart the Site Recovery Manager server.

  • Cleanup fails if attempted within 10 minutes after restarting recovery site ESXi hosts from maintenance mode.

    The cleanup operation attempts to swap placeholders and relies on the host resilience cache, which has a 10-minute refresh period. If you attempt a swap operation on ESXi hosts that have been restarted within the 10-minute window, Site Recovery Manager does not update the information in the Site Recovery Manager host resiliency cache, and the swap operation fails. The cleanup operation also fails.

    Workaround: Wait for 10 minutes and attempt cleanup again.

  • Recovery Fails to Progress After Connection to Protected Site Fails

    If the protection site becomes unreachable during a deactivate operation or during RemoteOnlineSync or RemotePostReprotectCleanup, both of which occur during reprotect, then the recovery plan might fail to progress. In such a case, the system waits for the virtual machines or groups that were part of the protection site to complete those interrupted tasks. If this issue occurs during a reprotect operation, you must reconnect the original protection site and restart the recovery plan. If this issue occurs during a recovery, it is sufficient to cancel and restart the recovery plan.

  • Temporary Loss of vCenter Server Connections Might Create Recovery Problems for Virtual Machines with Raw Disk Mappings

    If the connection to the vCenter Server is lost during a recovery, one of the following events might occur:

    • If the vCenter Server remains unavailable, the recovery fails. To resolve this issue, re-establish the connection with the vCenter Server and re-run the recovery.

    • In rare cases, the vCenter Server becomes available again and the virtual machine is recovered. In such a case, if the virtual machine has raw disk mappings (RDMs), the RDMs might not be mapped properly. As a result of the failure to properly map RDMs, it might not be possible to power on the virtual machine or errors related to the guest operating system or applications running on the guest operating system might occur.

      • If this is a test recovery, complete a cleanup operation and run the test again.

      • If this is an actual recovery, you must manually attach the correct RDM to the recovered virtual machine.

    Refer to the vSphere documentation about editing virtual machine settings for more information on adding raw disk mappings.

  • Error in recovery plan when shutting down protected virtual machines: Error - Operation timed out: 900 seconds during Shutdown VMs at Protected Site step.

    If you use Site Recovery Manager to protect datastores on arrays that support dynamic swap, for example Clariion, running a disaster recovery when the protected site is partially down or running a force recovery can lead to errors when rerunning the recovery plan to complete protected site operations. One such error occurs when the protected site comes back online, but Site Recovery Manager is unable to shut down the protected virtual machines. This error usually occurs when certain arrays make the protected LUNs read-only, making ESXi unable to complete I/O for powered on protected virtual machines.

    Workaround: Reboot ESXi hosts on the protected site that are affected by read-only LUNs.

  • Planned migration fails with Error: Unable to copy the configuration file...

    If there are two ESXi hosts in a cluster and one host loses connectivity to the storage, the other host can usually recover replicated virtual machines. In some cases the other host might not recover the virtual machines and recovery fails with the following error: Error: Unable to copy the configuration file...

    Workaround: Rerun recovery.

  • Test cleanup fails with a datastore unmounting error.

    Running cleanup after a test recovery can fail with the error Error - Cannot unmount datastore 'datastore_name' from host 'hostname'. The operation is not allowed in the current state. This problem occurs if the host has already unmounted the datastore before you run the cleanup operation.

    Workaround: Rerun the cleanup operation.
