VMware Blockchain 1.2.0.3 | 07 DEC 2021 | Build 129

Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Component Versions
- Known Issues

What's New
VMware Blockchain is an enterprise-grade blockchain platform that meets the needs of business-critical multi-party workflows. This patch release includes the following fixes:
Fixed a problem where Daml Ledger API commands failed during command interpretation due to contention on a Contract Key, with the error message Could not find a suitable ledger time after 0 retries
The fix enables the Daml Ledger API to automatically retry command interpretation multiple times when a contention error is detected before sending the transaction to Concord, which significantly reduces the likelihood of the error.
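For illustration only, the sketch below shows a generic retry-on-contention pattern of the kind described above; the function names, error type, retry budget, and backoff values are hypothetical and do not reflect the platform's internal implementation.

```python
import time

CONTENTION_ERROR = "Could not find a suitable ledger time"
MAX_RETRIES = 3          # hypothetical retry budget
BACKOFF_SECONDS = 0.1    # hypothetical delay between attempts


def interpret_with_retry(interpret_command, command):
    """Retry command interpretation when a Contract Key contention error is seen.

    `interpret_command` is a placeholder for the interpretation step that runs
    before the transaction is sent to Concord.
    """
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return interpret_command(command)
        except RuntimeError as err:
            # Re-raise immediately for non-contention errors or when retries are exhausted.
            if CONTENTION_ERROR not in str(err) or attempt == MAX_RETRIES:
                raise
            time.sleep(BACKOFF_SECONDS * attempt)
```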
Fixed a problem where the Concord application failed and caused service disruptions
The problem caused the Concord application to fail, leading to service disruptions such as failed transactions and throughput degradation. The problem occurred due to incorrect handling of network sockets in the Diagnostic server, which caused stack corruption. The implemented fix modifies the socket handling in the Diagnostic server to prevent any potential stack corruption.
Component Versions
The supported domain versions include:
Domain | Version |
---|---|
VMware Blockchain Platform | 1.2.0.3 |
VMware Blockchain Orchestrator | 1.2.0.1 Build 91 |
DAML SDK | 1.11.3 |
Known Issues
- Upgrade from VMware Blockchain version 1.1 to 1.2 requires special consideration
Upgrade from VMware Blockchain version 1.1 to 1.2 is not available.
Workaround: Contact VMware Blockchain support to discuss considerations for an upgrade.
- Concord containers crash after a long-running continuous load
In certain situations, during a continuous-running load, Concord containers might exit with the assertion failure Assert: expression '!isFetching()', causing the Blockchain nodes to go down and drop out of the quorum.
Workaround: Restart the crashed Concord containers, wait for the state transfer to complete, and wait for the Blockchain nodes to rejoin the quorum.
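A minimal sketch of the restart step, assuming shell access to the affected node and the standard docker CLI; the container name filter is taken from this issue, and the state-transfer completion must still be verified separately (for example, from the Concord logs).

```python
import subprocess

# List Concord containers that have exited, then start them again.
exited = subprocess.run(
    ["docker", "ps", "--filter", "name=concord", "--filter", "status=exited",
     "--format", "{{.Names}}"],
    capture_output=True, text=True, check=True,
).stdout.split()

for name in exited:
    subprocess.run(["docker", "start", name], check=True)
```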
- VMware Blockchain does not support a specific version of the ELK stack
Environment-specific configurations might be required for your default ELK stack settings to ensure that the collected metrics information is accurate.
Workaround: Contact your VMware Blockchain technical specialist for assistance with ELK stack configurations.
- Time across the Replica and Client nodes becomes inaccurate if the NTP service is down or not synchronized
If the NTP service is down or not synchronized, the time across Replica and Client nodes might become inaccurate, leading to data discrepancies or causing errors in the DAML Ledger API.
Workaround: To avoid any DAML Ledger API errors and data discrepancies, you must keep the NTP service up and synchronized to ensure that all the servers running VMware Blockchain reflect the accurate time.
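A minimal check you could run on each Replica and Client node, assuming a systemd-based host where timedatectl is available; the exact label in the output can vary between systemd versions, so adapt it to your environment.

```python
import subprocess

# "System clock synchronized: yes" indicates the host clock is NTP-synchronized.
status = subprocess.run(
    ["timedatectl", "status"], capture_output=True, text=True, check=True
).stdout
synchronized = "System clock synchronized: yes" in status
print("NTP synchronized" if synchronized else "WARNING: clock is not synchronized")
```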
- In rare instances, the Concord containers are unable to start after failing due to metadata inconsistency
Replica nodes might crash, leaving an inconsistent state between blocks and metadata. Replica nodes recover when the mismatch affects only a single block. When the mismatch spans multiple blocks and block accumulation is disabled, the Replica node fails to start with the following error message in the logs: Detected more than one block needs to be deleted from the blockchain - unsupported.
Workaround: Complete the following steps:
- Log in to the affected Replica node.
- Run docker start concord to fix the mismatched state.
- Repeat the start process until all the mismatched blocks are fixed.
On every start attempt, the Replica node fixes one mismatched block and exits with an error message. After removing the last mismatched block, the Replica node starts normally and catches up with the other nodes using state transfer.
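The repeated start attempts can be scripted; the sketch below is an illustration that keeps starting the concord container until it stays running, using only the standard docker CLI. The 30-second health window is a hypothetical value you should adjust for your environment.

```python
import subprocess
import time

HEALTH_WINDOW_SECONDS = 30  # hypothetical wait before declaring the start successful


def container_running(name):
    # Query the container state with the standard docker inspect template syntax.
    out = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out == "true"


# Each start attempt removes one mismatched block; repeat until concord stays up.
while True:
    subprocess.run(["docker", "start", "concord"], check=True)
    time.sleep(HEALTH_WINDOW_SECONDS)
    if container_running("concord"):
        break
```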
- Replica nodes crash during pruning of large data
In certain circumstances, when there are many thin replica clients, high activity on the TRS gRPC channel results in a memory consumption problem that causes the Replica nodes to crash during the pruning operation.
Workaround: Complete the following steps:
- Stop the daml_ledger_api and daml_index_db containers on all the Client nodes.
- Wait until the pruning operation is complete.
- Start the daml_ledger_api and daml_index_db containers on all the Client nodes.
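A sketch of the stop/start sequence, assuming SSH access from an operator host to each Client node and the standard docker CLI; the CLIENT_NODES hostnames and the manual pruning-completion check are assumptions you must adapt to your deployment.

```python
import subprocess

CLIENT_NODES = ["client-0.example.com", "client-1.example.com"]  # hypothetical hostnames
CONTAINERS = ["daml_ledger_api", "daml_index_db"]


def docker_over_ssh(host, action, containers):
    # Runs `docker stop ...` or `docker start ...` on the remote Client node.
    subprocess.run(["ssh", host, "docker", action, *containers], check=True)


# Step 1: stop the containers on every Client node.
for host in CLIENT_NODES:
    docker_over_ssh(host, "stop", CONTAINERS)

# Step 2: wait until the pruning operation is complete (verify on the Replica nodes).
input("Press Enter once the pruning operation has completed...")

# Step 3: start the containers again on every Client node.
for host in CLIENT_NODES:
    docker_over_ssh(host, "start", CONTAINERS)
```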
- When using only Splunk as a metrics endpoint, the Jaeger agent fails with an error message
When only Splunk is used as a metrics endpoint, the Jaeger agent attempts to connect to a non-existent wavefront-proxy container and fails with a wavefront-proxy connection error message.
Workaround: You can safely ignore the error message and log entries.