To clone VMware Blockchain nodes, you must create a snapshot of the secondary EBS storage volume of the Replica or Client nodes in the AWS console.

Prerequisites

Procedure

  1. Stop the Client node components.
    curl -X POST 127.0.0.1:8546/api/node/management?action=stop
    vmbc@localhost [ ~ ]# curl -X POST 127.0.0.1:8546/api/node/management?action=stop
    root@localhost [ ~ ]# sudo docker ps -a
    CONTAINER ID   IMAGE                                                   COMMAND                  CREATED        STATUS        PORTS                      NAMES
    218a1bdaddd6   vmwaresaas.jfrog.io/vmwblockchain/operator:1.7.0.0.55   "/operator/operator_…"   18 hours ago   Up 18 hours                              operator
    cd476a6b3d6c   vmwaresaas.jfrog.io/vmwblockchain/agent:1.7.0.0.55      "java -jar node-agen…"   18 hours ago   Up 18 hours   127.0.0.1:8546->8546/tcp   agent
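After issuing the stop request, only the infrastructure containers (agent and operator) should remain running, as the sample output above shows. A minimal sketch of that check; the `running` variable here stands in for live `sudo docker ps --format '{{.Names}}'` output so the logic is self-contained:

```shell
# Sketch: verify that only the agent and operator containers remain up.
# 'running' is a stand-in for the output of:
#   sudo docker ps --format '{{.Names}}'
running="operator
agent"

ok=true
for name in $running; do
  case "$name" in
    agent|operator) ;;                          # infrastructure containers keep running
    *) ok=false; echo "still running: $name" ;;
  esac
done
$ok && echo "client components stopped"
```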
  2. Pause all the Replica nodes at the same checkpoint from the operator container, and check the status periodically until every Replica node reports true.

    Any blockchain node that is in state transfer or down for other reasons causes the wedge status command to return false. The command returns true when state transfer completes and all Replica nodes are healthy, which allows all Replica nodes to stop at the same checkpoint successfully.

    The wedge command might take some time to complete. The metrics dashboards indicate which nodes have stopped processing blocks because they are wedged. If you notice a false report in the dashboard, contact VMware Blockchain support to diagnose the Replica nodes experiencing the problem.

    ./concop wedge stop
    # Stop all replicas at the next checkpoint; returns {'additional_data': 'set stop flag', 'succ': True} or {'succ': False}
     
    ./concop wedge status
    # Check the wedge status of each replica in the list
     
    Keep trying the status command periodically until all replicas return true.
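The periodic status check can be scripted as a simple polling loop. A sketch, assuming (as in the snippet above) that the status output marks unwedged replicas with `False`; the `wedge_status` function here is a stand-in for the real `./concop wedge status` command so the loop is self-contained:

```shell
# Sketch: poll wedge status until no replica reports False.
# 'wedge_status' is a stand-in for './concop wedge status'.
wedge_status() {
  printf 'replica1: True\nreplica2: True\nreplica3: True\nreplica4: True\n'
}

while wedge_status | grep -q 'False'; do
  echo "Not all replicas are wedged yet; retrying in 30 seconds..."
  sleep 30
done
echo "All replicas are wedged at the same checkpoint."
```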
  3. Check that all the Replica nodes are stopped in the same state.

    Verifying the LastReachableBlockID and LastBlockID sequence numbers of each stopped Replica node helps determine whether any nodes lag.

    If there is a lag when you power on the Replica network, some Replica nodes might have to catch up in state-transfer mode. Otherwise, consensus can fail, which requires restoring each Replica node from the latest copy.

    sudo docker run -it --rm --entrypoint="" --mount type=bind,source=/mnt/data/rocksdbdata,target=/concord/rocksdbdata <image_name> /concord/kv_blockchain_db_editor /concord/rocksdbdata getLastBlockID
    sudo docker run -it --rm --entrypoint="" --mount type=bind,source=/mnt/data/rocksdbdata,target=/concord/rocksdbdata <image_name> /concord/kv_blockchain_db_editor /concord/rocksdbdata getLastReachableBlockID

    Replace <image_name> with the concord-core image name used in the blockchain. For example:

    vmwaresaas.jfrog.io/vmwblockchain/concord-core:1.7.0.0.55
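After collecting the getLastBlockID and getLastReachableBlockID values from every Replica node, comparing them is straightforward. A sketch with hypothetical example values (not real tool output); substitute the numbers that kv_blockchain_db_editor returns on each node:

```shell
# Sketch: check whether every replica stopped at the same block.
# The IDs below are hypothetical examples; substitute the values
# returned by kv_blockchain_db_editor on each Replica node.
last_block_ids="4125 4125 4125 4125"

# Count distinct values; exactly one means all replicas are in sync.
unique=$(printf '%s\n' $last_block_ids | sort -u | wc -l)
if [ "$unique" -eq 1 ]; then
  echo "replicas stopped in the same state"
else
  echo "replica lag detected; let state transfer catch up before cloning"
fi
```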

  4. In the EC2 interface, select the VMware Blockchain node from the Amazon EC2 page and navigate to the Storage tab.
  5. Select the data volume ID, navigate to the EBS volumes, and select Actions > Create Snapshot.

    This step creates a snapshot of the EBS volume you can use for restoring your data.

  6. Save the snapshot ID.
  7. Set the PERFORM_CONCORD_METADATA_CLEANUP parameter to True in the infrastructure descriptor file for cloning.

    See the Advanced Features Parameters for a sample configuration.
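A sketch of the relevant descriptor fragment. Only the PERFORM_CONCORD_METADATA_CLEANUP key comes from this procedure; the surrounding structure is illustrative, so confirm the actual schema in the Advanced Features Parameters reference:

```json
{
  "advancedFeatures": {
    "PERFORM_CONCORD_METADATA_CLEANUP": "True"
  }
}
```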

  8. Add the snapshot ID in the deployment descriptor file for cloning.

    See the Clone Replica and Client Node Parameters for a sample configuration.
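A sketch of how the saved snapshot ID might appear in the deployment descriptor. The key names and the snapshot ID value here are hypothetical placeholders; confirm the actual schema in the Clone Replica and Client Node Parameters reference:

```json
{
  "replicas": [
    { "snapshotId": "snap-0123456789abcdef0" }
  ]
}
```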

  9. Run the VMware Blockchain Orchestrator cloning script.
    export ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors
    export INFRA_DESC_FILENAME=infrastructure_descriptor_clone.json
    export DEPLOY_DESC_FILENAME=deployment_descriptor_clone.json
    export ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output
    export ORCHESTRATOR_DEPLOYMENT_TYPE=CLONE
    docker-compose -f docker-compose-orchestrator.yml up
  10. Change the COMPONENT_NO_LAUNCH parameter in the /config/agent/config.json file to False on all the Replica and Client nodes.
    sudo sed -i 's/"COMPONENT_NO_LAUNCH": "True"/"COMPONENT_NO_LAUNCH": "False"/g' /config/agent/config.json
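To preview the effect of the sed substitution before touching the live configuration, you can rehearse it on a temporary copy. A minimal sketch with a one-line stand-in for /config/agent/config.json:

```shell
# Sketch: rehearse the COMPONENT_NO_LAUNCH substitution on a temp file
# before applying it to /config/agent/config.json on each node.
cfg=$(mktemp)
printf '{ "COMPONENT_NO_LAUNCH": "True" }\n' > "$cfg"

sed -i 's/"COMPONENT_NO_LAUNCH": "True"/"COMPONENT_NO_LAUNCH": "False"/g' "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```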
    
  11. Restart the agent.
    sudo docker restart agent