VMware Blockchain 1.7 | 08 SEP 2022 | Build 55

Check for additions and updates to these release notes.

What's New

VMware Blockchain is an enterprise-grade blockchain platform that meets the needs of business-critical multi-party workflows. The VMware Blockchain 1.7 release includes the following enhancements:

Security Improvements

Symmetric Key Protection

Private keys and sensitive configuration information on each VMware Blockchain node are encrypted with a symmetric key. This symmetric key can be stored on a blockchain node or on a software implementation of the Trusted Platform Module 2.0 (TPM 2.0) standard, known as a Virtual Trusted Platform Module (vTPM), supported by VMware vSphere. The enhanced option saves the symmetric key on a NIST FIPS 140-2 Level 3 compliant USB HSM token device.

Ledger API TLS Key Rotation

System administrators can adhere to cryptographic best practices by regularly rotating the TLS key that encrypts the connection between the Daml Ledger API and the Client node.

Performance Improvements

Storage Layer

Major performance improvements in the VMware Blockchain storage layer result in enhanced read and write capabilities for higher transaction throughput (TPS) and near-immediate pruning of large numbers of keys without system downtime.

Distributed Tracing Tool

The distributed tracing tool allows users to track the complete lifecycle of a request through the system, from the Daml Ledger API into the Replica nodes and back to the Daml Ledger API. By sampling multiple requests, users can troubleshoot performance issues and locate possible bottlenecks without degrading the performance of the VMware Blockchain platform.

CloudWatch Monitoring Metrics Filtering

Users deploying VMware Blockchain on AWS incurred high costs even when they did not use all the metrics available in the CloudWatch dashboard. With this release, users can filter the CloudWatch metrics to match their requirements and optimize costs based on their consumption.

Recoverability Enhancements

Replica Node Automatic Recovery

The enhanced Replica node recovery mechanism synchronizes the Replica nodes after they encounter downtime, improving the Recovery Time Objective (RTO) of Replica nodes after a failure. The improved RTO, especially with high-transaction-volume workloads, enables the system to quickly recover full fault tolerance without manual intervention.

Restoring a Blockchain from the Full Copy Client

In catastrophic failure scenarios, the VMware Blockchain nodes can be recovered to their latest state using the data safely stored in the ObjectStore attached to the Full Copy Client. The data stored in the ObjectStore comprises cryptographically signed checkpoints. These checkpoints provide proof of origination and tamper-detection capabilities, guaranteeing that the restored blockchain is an exact copy of the original. In addition, recovering data with this method incurs minimal data loss because the Full Copy Client stays synchronized with the Replica Network state for as long as the blockchain is live.

Component Versions

The supported domain versions include:

Domain                             Version
VMware Blockchain Platform         1.7
VMware Blockchain Orchestrator     1.7
DAML SDK                           2.2.1

Upgrade Considerations

Implement the clone-based upgrade process only when upgrading from VMware Blockchain 1.6 to 1.7. See the Perform Clone-Based Upgrade on vSphere or Perform Clone-Based Upgrade on AWS instructions in the Using and Managing VMware Blockchain Guide.

Resolved Issues

  • New - Daml index_db fails to start on some Client node VMs with less than 32GB of memory

    Under certain circumstances, when a blockchain deployment is scaled up to seven Replica nodes and restarted, the Client nodes are not operational if the VM has less than 32GB of memory. This problem occurs because the Daml index_db container does not have sufficient memory and fails to start.

  • New - Concord container fails after a few days of running because batch requests cannot reach pre-processing consensus

    In rare cases, if one of the batch requests has not reached pre-execution consensus, the entire batch is canceled. However, a batch request that is in the middle of pre-processing cannot be canceled, and the system must wait until the processing completes. This missing validation causes the Concord container process to fail.

  • New - State Transfer delay hinders Replica nodes from restarting and catching up with other Replica nodes

    The State Transfer process is slow due to a shortage of storage and CPU resources. As a result, a restarted Replica node cannot catch up with the other Replica nodes and fails.

  • New - Fluctuations in transactions per second (TPS) might destabilize some Client nodes

    In some cases, fluctuations in TPS might be observed on some Client nodes after the blockchain has run for several hours. After that, the load stabilizes and continues with slight drops in TPS.

  • New - Primary Concord container fails after a few hours or days of running

    In rare cases, the information required by the pre-processor thread gets modified by the consensus thread, which causes the Concord container process to fail.

  • New - Due to a RocksDB size calculation error, the oldest database checkpoints are removed even when adequate disk space is available

    A known calculation error in the database checkpoints causes over-estimation of the RocksDB size, resulting in the oldest database checkpoints being removed to ensure that RocksDB internal operations do not fail because of insufficient disk space. However, the database checkpoint removal is not required because adequate disk space is available.

  • New - Large state snapshot requests to create a checkpoint time out

    The Daml Ledger API sends state snapshot requests to the database Checkpoint manager to create a checkpoint. The checkpointing takes 18 seconds or more for large databases, and this delay causes a timeout.

  • New - Assertion fails on State Transfer source while sending messages to a State Transfer destination

    On a rare occasion, when two destination Replica nodes request blocks with overlapping ranges, prefetch capability is enabled on the source.

    For example, when a destination Replica-node-1 requests blocks between 500 and 750, the source prefetches blocks 751-800. When another destination, Replica-node-2, requests blocks between 751 and 900, the source prefetch is considered valid, and the assertion fails while sending blocks to destination Replica-node-2.

  • New - Client nodes cannot resynchronize data when the data size is greater than 5GB

    Client nodes cannot resynchronize data from the Replica nodes when the data size is greater than 5GB and the data folder has been removed due to data corruption. As a result, any .dar file upload causes the Client node Daml Ledger API to fail.

Known Issues

  • New - Replica node might fail due to a synchronization issue in the preprocessor

    In some instances, the pre-process request handling in the primary Replica node might be interrupted by a PreProcessReplyMsg message for the same request, causing invalid processing and leading to failure.

    Workaround: None

  • New - Concord container gets restarted due to excessive memory usage

    When a Concord container process reaches the maximum memory limit, the process is stopped, which causes the container to restart. The problem is likely to occur in high-load transaction scenarios running for several days.

    Workaround: None

  • New - When running VMware Blockchain with a maximum supported 50 transactions per second (TPS) for DAML with constrained compute resources for Replica node VMs, the system might exhibit fluctuations in the transaction processing rate

    Concord container request processing goes through multiple stages, and the fluctuations might be related to the pre-processing phase. In a computationally constrained environment, the raw memory pool component might add unexpected overhead to the overall execution time, resulting in TPS rate fluctuations.

    Workaround: To reduce the fluctuations in the transaction processing rate, complete the following tasks to deactivate the raw memory pool component (see the sketch after these steps).

    1. Log into each Replica node VM.
    2. Open the application.config file.
    3. Change the value of the enable_memory_pool_in_preprocessor parameter to false.
    4. Restart the Concord container process.
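
    The following is a minimal command-line sketch of these steps. The application.config file path, the configuration format, and the Concord container name shown here are assumptions and might differ in your deployment; adjust them to match your environment.

    # Sketch only: the file path, configuration format, and container name are assumptions.
    # Locate the parameter in application.config on the Replica node VM.
    grep -n "enable_memory_pool_in_preprocessor" /config/concord/config-local/application.config
    # Edit the file and set the parameter value to false, for example:
    #   enable_memory_pool_in_preprocessor: false
    # Restart the Concord container so that the change takes effect.
    sudo docker restart concord
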
  • New - Required Paramiko module is missing from the VMware Blockchain Orchestrator appliance

    The VMware Blockchain Orchestrator appliance uses the Paramiko module to generate blockchain node and Concord container logs to identify errors and debug problems. The missing module causes the support bundle collection to fail.

    Workaround: Install the required Paramiko module on the VMware Blockchain Orchestrator appliance before running the support bundle collection script. 

    sudo yum install -y paramiko
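
    As a quick check that the module is available to the collection script, an import test similar to the following can be run. This assumes Python 3 is the interpreter used on the appliance.

    # Verify that the Paramiko module can be imported (assumes Python 3 on the appliance).
    python3 -c "import paramiko; print(paramiko.__version__)"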

  • Small form factor for Client node groups in AWS deployment is not supported

    With the introduction of client services, a Client node requires a minimum memory allocation of 32GB. The small form factor uses the m4.xlarge instance type, which provides only 16GB of memory.

    Workaround: Update the clientNodeSpec parameter value to m4.2xlarge in the deployment descriptor for AWS deployments that require smaller provisioning, as illustrated below.
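
    As an illustration only, the parameter can be located and edited along these lines. The descriptor file name and the surrounding JSON structure are assumptions; follow the descriptor format documented for your deployment.

    # Locate the clientNodeSpec entries (the descriptor file name is an assumption).
    grep -n "clientNodeSpec" aws-deployment-descriptor.json
    # Intended value after editing (structure is an assumption):
    #   "clientNodeSpec": "m4.2xlarge"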

  • Wavefront metrics do not appear after the network interface is deactivated and re-enabled on Replica nodes

    This problem was observed while executing a test case that explicitly deactivated the network interface on Replica nodes; it rarely manifests in a production environment.

    Workaround: Restart the telegraf container by running the docker restart telegraf command so that the metrics appear in Wavefront.
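
    To confirm that the container is running again after the restart, a check such as the following can be used.

    # Restart the telegraf container and confirm that it is running.
    docker restart telegraf
    docker ps --filter "name=telegraf" --format "{{.Names}}: {{.Status}}"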
