After deploying the newly scaled VMware Blockchain nodes, you must generate the configuration for the new and existing nodes, and bind these newly created nodes to the VMware Blockchain topology.

Note:

The new node reconfiguration is available for up to 45 minutes. You must finalize the reconfiguration within this time frame for the new deployment to succeed.

Prerequisites

  • Verify that all the newly deployed Replica and Client nodes are available and running.

    Note:

    If a Client node is unavailable before the scale-up process, that node can no longer participate in the new blockchain configuration. You must reconfigure the Client node to scale up. See Scale-Up Deployed VMware Blockchain Nodes on AWS.

  • If the Concord operator containers were deployed, verify that the Concord operator container is running. See Instantiate the Concord Operator Container for AWS.

Procedure

  1. Navigate to the /home/blockchain/descriptors directory.
  2. Create a reconfigure descriptor JSON file and set the parameter values in the descriptor directory.

    The reconfigure descriptor file contains combined details of the nodes from the output files of the original blockchain nodes and scaled nodes. The file semantics and structure must adhere to the rules of a reconfiguration descriptor.

    Note:

    You must add the operator specification keys that you used to the reconfiguration descriptor.

    Values such as zoneName, privateIp, and nodeId are available in the original VMware Blockchain Orchestrator appliance output directory, /home/blockchain/output.

    Sample reconfigure_deployment_descriptor.json file.

    {
        "populatedReplicas": [
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.233",
                "nodeId": "59cc46ce-0354-4a0b-920d-54def5afedc0",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.221",
                "nodeId": "11082f75-4186-438d-8d88-2a8a450c482e",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.120",
                "nodeId": "e9dcc7c8-7369-4843-812c-89bdcd628fde",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.141",
                "nodeId": "726b490c-90f3-49b5-bbd6-550ee135421a",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.163",
                "nodeId": "07a5cb19-e7f0-4257-baef-ee91897e4e7b",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.201",
                "nodeId": "0c4c466a-4be6-4aaf-885e-f0e6fbe12c74",
                "keyName": "vmbc_system_test"
            },
            {
                "zoneName": "zone-A",
                "privateIp": "172.147.12.175",
                "nodeId": "cf8cf261-be08-433b-92e8-2f61a547be21",
                "keyName": "vmbc_system_test"
            }
        ],
        "populatedClients": [
            {
                "tlsLedgerData": {
                    "crt": "...",
                    "pem": "...",
                    "cacrt": "...",
                    "clientAuth": "REQUIRE"
                },
                "keyName": "vmbc_system_test",
                "privateIp": "172.147.12.174",
                "clientGroupId": "94ef186a-3931-4158-9645-eb7791798fcb",
                "groupName": "Group0",
                "damlDbPassword": "dD_h6Tm6wmMY3-4",
                "zoneName": "zone-A",
                "nodeId": "7d1f8d34-8d48-4f37-8262-06071043891a"
            }
        ],
        "operatorPublicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZ\n-----END PUBLIC KEY-----\n",
        "tags": {
            "Name": "ScaleBlockchainTests"
        },
        "blockchain": {
            "consortiumName": "consortium",
            "blockchainType": "DAML",
            "blockchainId": "5be21cf3-a1c5-46a9-82bf-e3e85dea3a52"
        },
        "replicaNodeSpec": {
            "instanceType": "m4.2xlarge",
            "diskSizeGib": 64
        },
        "clientNodeSpec": {
            "instanceType": "m4.2xlarge",
            "diskSizeGib": 64
        }
    }
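    Before running the orchestrator, you can sanity-check the descriptor syntax. This is an optional sketch, not part of the product tooling; it assumes python3 is available on the appliance and uses the descriptor path from this procedure.

```shell
# Verify that the reconfigure descriptor parses as valid JSON before use.
# A missing comma or stray quote fails fast here instead of mid-deployment.
python3 -m json.tool /home/blockchain/descriptors/reconfigure_deployment_descriptor.json > /dev/null \
  && echo "descriptor OK" \
  || echo "descriptor has a JSON syntax error"
```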
  3. Encrypt and redirect the infrastructure and the deployment descriptor files for added security.
    1. Encrypt the infrastructure_descriptor.json file.
      $HOME/descriptors > ansible-vault encrypt infrastructure_descriptor.json
      New Vault password:
      Confirm New Vault password:
      Encryption successful
    2. Encrypt the deployment_descriptor.json file.
      $HOME/descriptors > ansible-vault encrypt deployment_descriptor.json
      New Vault password:
      Confirm New Vault password:
      Encryption successful
    3. Configure the two environment variable values.
      • ORCHESTRATOR_OUTPUT_DIR - The output directory where the output file is written.

      • ORCHESTRATOR_DEPLOYMENT_TYPE - Set deployment type to RECONFIGURE.

    4. Run the secure-orchestrator.sh script from the orchestrator_runtime directory.
      ORCHESTRATOR_OUTPUT_DIR=$HOME/output 
      ORCHESTRATOR_DEPLOYMENT_TYPE=RECONFIGURE 
      ./secure-orchestrator.sh

      The script creates temporary files.

      • /dev/shm/orchestrator-awsIGoa0JA/infra_descriptor

      • /dev/shm/orchestrator-awsIGoa0JA/deployment_descriptor

    5. Redirect the decrypted infrastructure_descriptor.json to the infrastructure_descriptor file location.

      Use the vault password used to encrypt the infrastructure_descriptor.json file.

      ansible-vault view $HOME/descriptors/infrastructure_descriptor.json > /dev/shm/orchestrator-awsIGoa0JA/infra_descriptor
    6. Redirect the decrypted deployment_descriptor.json to the deployment_descriptor file location.

      Use the vault password used to encrypt the deployment_descriptor.json file.

      ansible-vault view $HOME/descriptors/deployment_descriptor.json > /dev/shm/orchestrator-awsIGoa0JA/deployment_descriptor

      After the script completes running, the temporary files are deleted.

    7. (Optional) If the script fails or the secure-orchestrator.sh script is terminated, delete the temporary folder under the /dev/shm/orchestrator-* directory.
  4. Run the VMware Blockchain Orchestrator reconfigure script.
    ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors 
    ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output 
    INFRA_DESC_FILENAME=infrastructure_descriptor.json
    DEPLOY_DESC_FILENAME=reconfigure_deployment_descriptor.json  
    ORCHESTRATOR_DEPLOYMENT_TYPE=RECONFIGURE 
    docker-compose -f docker-compose-orchestrator.yml up

    The new node reconfiguration ID is available in the reconfiguration output file.

  5. Identify the reconfiguration ID in the <output-directory> that was created when you redeployed VMware Blockchain Orchestrator.

    Reconfiguration Id: 626fdcfc-9c9c-4c25-b210-9e71c471e3cb
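    If the output directory contains several files, a recursive grep locates the ID. This is an optional sketch using the output directory from this procedure; the exact output file name can differ between deployments.

```shell
# Find the reconfiguration ID in the orchestrator output directory.
grep -r "Reconfiguration Id" /home/blockchain/output/
```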

  6. Stop all applications that invoke connection requests to the Daml Ledger in the original deployment.
  7. Stop the original Client nodes in the Client group.
    curl -X POST 127.0.0.1:8546/api/node/management?action=stop
  8. Verify that all the containers except the agent, CRE, and the deployed Concord operator container are stopped on the selected Client node.
    docker ps -a

    If the docker ps -a command shows that some containers, with the exception of the agent and the deployed operator container, are still running, rerun the command or use the docker stop <container_name> command to stop them.

    docker start cre
    docker run -ti --network=blockchain-fabric --name=operator --entrypoint /operator/install_private_key.py --rm -v /config/daml-ledger-api/concord-operator:/operator/config-local -v /config/daml-ledger-api/concord-operator:/concord/config-public -v /config/clientservice/tls_certs:/config/clientservice/tls_certs -v /config/daml-ledger-api/config-public:/operator/config-public <OP_IMAGE_ID>
    
    curl -X POST 127.0.0.1:8546/api/node/start-operator
  9. Log in to the Client node running the Concord operator container.
    docker exec -it operator bash
    ./concop scale --clients execute --configuration d5c28fc4-4c58-4864-88bb-219c16f4609b  --tokens   "{ \"53a061d5-1109-45b8-9938-20259aaef8c4\" : \"U2FsdGVkX1/gvoD7/gjnjuU+peYd8ycSOiZRUYJYzDBtPbxBpwxmuYZ9NclBX7GrJgWghKqychJjGCol4telgg3pCOjDrM6eS1bt+L4/5aAupPKu2bDiKT9l/2lLiTU7\", \"5abcd2cb-da6e-4d41-a77a-6a7ab3d4ef9d\" : \"U2FsdGVkX1/u2ITU2Z2/G8WqbKRsYoIW5DC95VgS0XTztIu81J885LbrsrAKEmDQfQvCtnPuXElWsKdK7SeRAb0XwTY5O/X2IxK9kmXP8i9p3VqDYBg+ECGQn3IeOV64\", \"7cc9cddf-15a7-4514-89ec-6f387542dd31\" : \"U2FsdGVkX18A0C8V60bd6u+i8dCPiH6poyRPfEivEqLnTXMqyP8YmaVWS3N2P3AIHzlAfznSGAte57yah9YAOSzvBJvk9+dkZ+GdpkhmwWH3+eppS5RM3m5Q3GLIu775\",\" 92e75b0b-1f8e-4851-bc90-93a56a099b40\" : \"U2FsdGVkX18u+xDk+QvfHWJlgqfKjgv0UDpTrNG3hZYL5HRb3VNz1RtnYp96mwSQMaFpyLK/qvXCS8nKqYw9z1bJY6PqkIe3ixbMk4GSLX0SubERSkCxRiBlsZW5O+Cb\",\"fd052a82-f984-45b2-ba56-390076534cef\" : \"U2FsdGVkX1/up52xhqAeE5lnfqpm6lD96kiBa0wuTGB5guSA3QxlPWQ1i/1LTd8cjGPpPlNNpqgo7cQ1PsCQmnPDjVVhbJkxpkJtH0lYt32b/MqiKfxIMcw7QScetbDI\",\"7e6936f5-a955-4ed4-9ee8-af9ceb20d8a6\" : \"U2FsdGVkX1+u+XiUSWfctLntS/kpHpfqFk9DbQrYfUdsJjvu4SuT6cLOwtSizzJO5saaZiyNwnw3UIfv0uHP6H8oSEBEMZo8RewQWIJcHCcYonRufpgZFz95vM+yAKX+\",\"bfd7d591-1894-49ed-831e-670b5ce3ee50\" : \"U2FsdGVkX19vYTPf+3xf8apsl/x3nykvXZeFjFF+qVfmatwSmFlg6YA62mzgpH8KnJknyoI42lU04tT9ypR2v/gylOAeOzETpj89YttAO1N/Q5rHW7z4xYvPllxbGQID\",\"17f647a3-e7ab-4975-9b89-64cb2723d759\" : \"U2FsdGVkX19bQ70oCV/LzMqMD99+O+16wckyFNjyyJGI9nCrDUrwPU2SO0/UeeDPUK5zUZatuEB+SYS8anpadvdT7ppLlLIB5nbFYKvd0vqeNS8DGo5v17n1nCVdUbHe\",\"7b13682a-ea04-435c-86aa-3b0b8972806b\" : \"U2FsdGVkX18SNmXNTWSCsQdi1eLyZkF9cHVj4yeSWCtiVrYQ3Yqa7ZOmctkP6z+OGQruNX2drm1eK38//FF4IseWtTLsB+KqCM5yXTOLO+7Sg1kjBvRtZS9HREfNpZmq\"}"
    ./concop scale --replicas execute --configuration d5c28fc4-4c58-4864-88bb-219c16f4609b --tokens   "{ \"53a061d5-1109-45b8-9938-20259aaef8c4\" : \"U2FsdGVkX1/gvoD7/gjnjuU+peYd8ycSOiZRUYJYzDBtPbxBpwxmuYZ9NclBX7GrJgWghKqychJjGCol4telgg3pCOjDrM6eS1bt+L4/5aAupPKu2bDiKT9l/2lLiTU7\", \"5abcd2cb-da6e-4d41-a77a-6a7ab3d4ef9d\" : \"U2FsdGVkX1/u2ITU2Z2/G8WqbKRsYoIW5DC95VgS0XTztIu81J885LbrsrAKEmDQfQvCtnPuXElWsKdK7SeRAb0XwTY5O/X2IxK9kmXP8i9p3VqDYBg+ECGQn3IeOV64\", \"7cc9cddf-15a7-4514-89ec-6f387542dd31\" : \"U2FsdGVkX18A0C8V60bd6u+i8dCPiH6poyRPfEivEqLnTXMqyP8YmaVWS3N2P3AIHzlAfznSGAte57yah9YAOSzvBJvk9+dkZ+GdpkhmwWH3+eppS5RM3m5Q3GLIu775\",\" 92e75b0b-1f8e-4851-bc90-93a56a099b40\" : \"U2FsdGVkX18u+xDk+QvfHWJlgqfKjgv0UDpTrNG3hZYL5HRb3VNz1RtnYp96mwSQMaFpyLK/qvXCS8nKqYw9z1bJY6PqkIe3ixbMk4GSLX0SubERSkCxRiBlsZW5O+Cb\",\"fd052a82-f984-45b2-ba56-390076534cef\" : \"U2FsdGVkX1/up52xhqAeE5lnfqpm6lD96kiBa0wuTGB5guSA3QxlPWQ1i/1LTd8cjGPpPlNNpqgo7cQ1PsCQmnPDjVVhbJkxpkJtH0lYt32b/MqiKfxIMcw7QScetbDI\",\"7e6936f5-a955-4ed4-9ee8-af9ceb20d8a6\" : \"U2FsdGVkX1+u+XiUSWfctLntS/kpHpfqFk9DbQrYfUdsJjvu4SuT6cLOwtSizzJO5saaZiyNwnw3UIfv0uHP6H8oSEBEMZo8RewQWIJcHCcYonRufpgZFz95vM+yAKX+\",\"bfd7d591-1894-49ed-831e-670b5ce3ee50\" : \"U2FsdGVkX19vYTPf+3xf8apsl/x3nykvXZeFjFF+qVfmatwSmFlg6YA62mzgpH8KnJknyoI42lU04tT9ypR2v/gylOAeOzETpj89YttAO1N/Q5rHW7z4xYvPllxbGQID\",\"17f647a3-e7ab-4975-9b89-64cb2723d759\" : \"U2FsdGVkX19bQ70oCV/LzMqMD99+O+16wckyFNjyyJGI9nCrDUrwPU2SO0/UeeDPUK5zUZatuEB+SYS8anpadvdT7ppLlLIB5nbFYKvd0vqeNS8DGo5v17n1nCVdUbHe\",\"7b13682a-ea04-435c-86aa-3b0b8972806b\" : \"U2FsdGVkX18SNmXNTWSCsQdi1eLyZkF9cHVj4yeSWCtiVrYQ3Yqa7ZOmctkP6z+OGQruNX2drm1eK38//FF4IseWtTLsB+KqCM5yXTOLO+7Sg1kjBvRtZS9HREfNpZmq\"}" 

    The new reconfiguration ID is available in the reconfiguration output file.

  10. Verify the new configuration on the Replica nodes.
    docker exec operator sh -c './concop scale --replicas status'
    {"10.72.95.242":{"bft":false,"configuration":"2147b337-1edd-4857-be84-67e662415370","restart":false,"wedge_status":true},"10.72.95.243":{"bft":false,"configuration":"2147b337-1edd-4857-be84-67e662415370","restart":false,"wedge_status":true},"10.72.95.244":{"bft":false,"configuration":"2147b337-1edd-4857-be84-67e662415370","restart":false,"wedge_status":true},"10.72.95.245":{"bft":false,"configuration":"2147b337-1edd-4857-be84-67e662415370","restart":false,"wedge_status":true}}
    docker exec operator sh -c './concop scale --clients status'
    {"10.72.95.242":[{"UUID":"25b5b049-290f-406f-8c5a-0c43d1979505","configuration":"2147b337-1edd-4857-be84-67e662415370"}],"10.72.95.243":[{"UUID":"25b5b049-290f-406f-8c5a-0c43d1979505","configuration":"2147b337-1edd-4857-be84-67e662415370"}],"10.72.95.244":[{"UUID":"25b5b049-290f-406f-8c5a-0c43d1979505","configuration":"2147b337-1edd-4857-be84-67e662415370"}],"10.72.95.245":[{"UUID":"25b5b049-290f-406f-8c5a-0c43d1979505","configuration":"2147b337-1edd-4857-be84-67e662415370"}]}
  11. Verify that all the original Replica nodes are wedged at the same block.
    # sudo image=$(docker images --format "{{.Repository}}:{{.Tag}}" | grep "concord");docker run -it --rm --entrypoint="" --mount type=bind,source=/mnt/data/rocksdbdata,target=/concord/rocksdbdata $image /concord/kv_blockchain_db_editor /concord/rocksdbdata getLastBlockID
    { "lastBlockID": "47197" }
    # sudo image=$(docker images --format "{{.Repository}}:{{.Tag}}" | grep "concord");docker run -it --rm --entrypoint="" --mount type=bind,source=/mnt/data/rocksdbdata,target=/concord/rocksdbdata $image /concord/kv_blockchain_db_editor /concord/rocksdbdata getLastReachableBlockID
    { "lastReachableBlockID": "47197" }
  12. Back up the Client nodes data.
    1. On the Client node from the initial deployment, create a /mnt/data/db/ archive.
    2. Transfer the archive to the new Client nodes and extract the data.
      #on a client node from the original deployment:
      sudo tar cvzf </path/><bckp-name> /mnt/data/db
      sudo scp -r </path/><bckp-name> vmbc@<the-client-ip-to-be-added>:/config
       
      #on the newly created client node:
      sudo tar -zxvf </path/><bckp-name> -C /
  13. Back up and restore the RocksDB data from the original Replica node to the new Replica nodes.
    Note:

    Verify that the Concord container is stopped before you perform metadata cleanup on all the Replica nodes.

  14. Update the agent configuration for all the newly created Replica and Client nodes.
    sudo sed -i '/COMPONENT_NO_LAUNCH\|SKIP_CONFIG_RETRIEVAL/d' /config/agent/config.json
    sudo sed -i 's/inactive/<new-configuration-id>/g' /config/agent/config.json

    The new configuration ID is available in the reconfiguration output file.

  15. Confirm the new configuration in the /config/agent/config.json file.
    ....
    "configurationSession": {
        "id": "2147b337-1edd-4857-be84-67e662415370"
    },
    "outboundProxyInfo": {
    },
    "nodeId": "c78bd97a-17d7-49b4-8590-11a85df9dd2f",
    "properties": {
       "values": {
          "NEW_DATA_DISK": "True"
       }
    }
    ....
  16. Download the new configuration on the new Replica and Client nodes.
    curl -ik -X POST http://localhost:8546/api/node/reconfigure/<new-configuration-id>
  17. Restart original Replica nodes.
    # curl -ik -X POST -H "content-type: application/json" --data '{ "containerNames" : ["all"] }' http://localhost:8546/api/node/restart
    HTTP/1.1 201 Created
    Date: Tue, 15 Feb 2022 09:14:22 GMT
    X-Content-Type-Options: nosniff
    X-XSS-Protection: 1; mode=block
    Cache-Control: no-cache, no-store, max-age=0, must-revalidate
    Pragma: no-cache
    Expires: 0
    X-Frame-Options: DENY
    Content-Length: 0
  18. Start the containers on the scaled Replica nodes.
    #start scaled replicas
    curl -X POST 127.0.0.1:8546/api/node/management?action=start
  19. Remove the Concord operator container on the original Client node and the CRE container.
    #remove operator
    docker rm -f operator
    operator
    #stop cre
    docker stop cre
    cre

    If the Client node was stopped using the curl -X POST 127.0.0.1:8546/api/node/management?action=stop command, start the Client node using the curl -X POST 127.0.0.1:8546/api/node/management?action=start command.

  20. Restart the agent on all the blockchain nodes.
    docker restart agent
  21. Verify that all the existing and newly scaled nodes appear in Wavefront.
  22. Verify that the following metrics indicate that your blockchain network is operating properly.
    Metrics: Blocks per second
    Description: All the blockchain nodes must process blocks because new blocks are constantly being added. The blocks per second value must be a positive number for a node to be considered healthy.

    Metrics: FastPaths
    Description: All blockchain nodes must report in the fast path, and none in the slow path. When the Blocks per second metric indicates an unhealthy state, the wedge status remains false until all the nodes have stopped at the same checkpoint.