Storage and network bandwidth requirements might increase when you use the trim/unmap guest OS commands with vSphere Replication. You might also observe RPO violations.
Incremental Sync After Using Guest OS Trim/Unmap Commands
Calling the trim/unmap commands might increase the storage consumption on the target site.
After you run the trim/unmap commands on the source site disk, the freed blocks are added to the set of changed blocks that vSphere Replication transfers to the target site during the next RPO cycle. As a result, the less full the source site disk is, the larger the volume of changed blocks that is transferred to the target site.
For example, if the source site disk is 10 TB and only 1 TB is allocated, calling the trim/unmap commands results in a transfer of at least 9 TB to the target site.
If the source site disk is 10 TB, 9 TB of which are allocated, and you delete 2 TB of data, calling the trim/unmap commands results in a transfer of at least 3 TB of data to the target site (the 2 TB that you deleted plus the 1 TB that was already free).
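The minimum transfer size in both examples follows directly from the free space on the disk after the commands run. The following Python sketch illustrates the arithmetic, under the assumption stated above that trim/unmap reports all free blocks on the disk as changed:

```python
def min_transfer_after_trim(disk_tb: float, allocated_tb: float) -> float:
    """Estimate the minimum data (in TB) that vSphere Replication
    transfers after trim/unmap, assuming the commands report all
    free blocks on the disk as changed."""
    return disk_tb - allocated_tb

# Example 1: 10 TB disk, 1 TB allocated -> at least 9 TB transferred.
print(min_transfer_after_trim(10, 1))      # 9.0

# Example 2: 10 TB disk, 9 TB allocated, then 2 TB deleted
# -> 3 TB free -> at least 3 TB transferred.
print(min_transfer_after_trim(10, 9 - 2))  # 3.0
```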
Because of the incremental sync, and depending on the RAID configuration that the VM storage policy defines at the target site, the storage consumption of the replicated VM can be more than twice the consumption of the source VM.
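A rough way to reason about the target-side consumption is to multiply the transferred data by the space overhead of the RAID configuration in the VM storage policy. The sketch below is illustrative only; the factor of 2 corresponds to vSAN RAID-1 mirroring with FTT=1, and other policies carry different overheads:

```python
# Illustrative only: approximate space overhead factors for common
# vSAN storage policies. RAID-1 with FTT=1 stores two full copies of
# the data; RAID-5 erasure coding with FTT=1 stores data plus parity.
RAID_FACTORS = {"RAID-1 FTT=1": 2.0, "RAID-5 FTT=1": 1.33}

transferred_tb = 9.0  # from the first example above
for policy, factor in RAID_FACTORS.items():
    print(f"{policy}: ~{transferred_tb * factor:.1f} TB consumed on the target")
```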
You cannot see the storage consumption of the replicated VM at the target site; you can see only the overall consumption of the entire vSAN datastore. As a result, you cannot track the reclaimed storage space at the VM disk level, but you can track it by monitoring the overall free space that remains on the vSAN datastore.
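Because the reclaimed space is visible only at the datastore level, one option is to sample the datastore's overall free space before and after a sync. The following pyVmomi sketch reads the capacity and free space of a datastore; the host name, credentials, and datastore name are placeholders, and your environment might require different SSL handling:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; disableSslCertValidation is for lab
# use only and requires a recent pyVmomi release.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", disableSslCertValidation=True)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.name == "vsanDatastore":  # placeholder datastore name
            free_tb = ds.summary.freeSpace / 1024**4
            cap_tb = ds.summary.capacity / 1024**4
            print(f"{ds.summary.name}: {free_tb:.2f} TB free "
                  f"of {cap_tb:.2f} TB")
    view.Destroy()
finally:
    Disconnect(si)
```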
Recovery Point Objective Violations After Using the Trim/Unmap Commands on the Source Virtual Machine
You can call the trim/unmap commands manually, or the guest OS can call them at regular intervals. In both cases, the synchronization after the commands run might take a significant amount of time.
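For reference, on a Linux guest the manual invocation is typically the fstrim utility, which the guest OS can also run on a schedule (for example, through the fstrim.timer systemd unit). A minimal sketch of a manual call from Python, assuming a Linux guest with root privileges:

```python
import subprocess

# Trim all mounted filesystems that support discard; requires root.
# Equivalent to running `fstrim --all --verbose` in the guest shell.
result = subprocess.run(["fstrim", "--all", "--verbose"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```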
Using the trim/unmap commands to reclaim unused space on the source virtual machine might generate a large number of changed disk blocks. Synchronizing these changes might take longer than the configured RPO, and vSphere Replication starts reporting RPO violations.
Because the replication is behind the RPO schedule, a new incremental sync begins as soon as the synchronization of the previous instance completes. These immediate back-to-back incremental syncs continue until vSphere Replication creates a replica instance that satisfies the RPO schedule and no longer reports an RPO violation. The replication status then becomes OK.
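You can reason about this catch-up behavior with a simple model: each sync transfers the blocks that changed since the previous one, and the violation clears once a sync completes within the RPO window. The following sketch simulates that process; the bandwidth, change rate, and backlog figures are assumptions chosen purely for illustration:

```python
# Illustrative model of back-to-back incremental syncs after trim/unmap.
# All figures are assumptions for the example, not measured values.
rpo_min = 15                 # configured RPO, in minutes
bandwidth_gb_min = 10.0      # replication throughput, GB per minute
change_rate_gb_min = 0.5     # new changes produced while a sync runs
backlog_gb = 9 * 1024        # 9 TB of blocks reported by trim/unmap

cycle = 0
while True:
    cycle += 1
    sync_min = backlog_gb / bandwidth_gb_min
    if sync_min <= rpo_min:
        print(f"Cycle {cycle}: sync takes {sync_min:.1f} min -> replica "
              f"satisfies the RPO, status returns to OK")
        break
    print(f"Cycle {cycle}: sync takes {sync_min:.1f} min -> RPO violation")
    # The next sync transfers only what changed while this one was running.
    backlog_gb = sync_min * change_rate_gb_min
```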