VMware Blockchain 1.2.0.2 | 17 JUN 2021 | Build 106

Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
- What's New
- Component Versions
- Known Issues

What's New
VMware Blockchain is an enterprise-grade blockchain platform that meets the needs of business-critical multi-party workflows. This patch release includes two fixes:
Fixed a problem that misreported DuplicateKey errors as InconsistentKey errors
This problem occurred when a transaction attempted to create a contract with a contract key that was already used by another contract instead of a unique contract key. When the transaction was rejected, the error message incorrectly reported an InconsistentKey error instead of a DuplicateKey error.
Fixed a problem that caused a Client node failure and a subsequent restart loop
This problem occurred after a party initially witnessed a contract create event with a contract key for which it was not a stakeholder. This scenario is valid in certain situations; see the DAML Ledger Model. When the party later received another contract with the same contract key, the duplicate contract key conflict triggered a Client node failure and a restart loop, and the node stopped processing transactions from the Concord container.
Component Versions
The supported domain versions include:
Domain | Version |
---|---|
VMware Blockchain Platform | 1.2.0.2.106 |
VMware Blockchain Orchestrator | 1.2.0.1.91 |
DAML SDK | 1.11.2 |
Known Issues
- Upgrade from VMware Blockchain version 1.1 to 1.2 requires special consideration
An upgrade from VMware Blockchain version 1.1 to 1.2 is not available.
Workaround: Contact VMware Blockchain support to discuss considerations for an upgrade.
- Concord containers crash after a long-running continuous load
In certain situations under a long-running continuous load, Concord containers might exit with the assertion failure Assert: expression '!isFetching()', causing the affected Blockchain nodes to be down and out of the quorum.
Workaround: Restart the crashed Concord containers, wait for the state transfer to complete, and wait for the Blockchain nodes to rejoin the quorum, as shown in the sketch below.
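For reference, a minimal shell sketch of this workaround, run on each affected Replica node. The container name concord is an assumption for illustration; confirm the actual name with docker ps -a in your deployment.

```
# List Concord containers and their status; the name "concord" is an
# assumption for illustration.
docker ps -a --filter "name=concord" --format "{{.Names}}: {{.Status}}"

# Restart the crashed container, then follow the logs while state
# transfer completes and the node rejoins the quorum.
docker start concord
docker logs -f concord
```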
- VMware Blockchain does not support a specific version of the ELK stack
Environment-specific configurations might be required for your default ELK stack settings to ensure that the collected metrics information is accurate.
Workaround: Contact your VMware Blockchain technical specialist for assistance with ELK stack configurations.
- Time across the Replica and Client nodes becomes inaccurate if the NTP service is down or not synchronized
If the NTP service is down or not synchronized, the time across the Replica and Client nodes might become inaccurate, leading to data discrepancies or errors in the DAML Ledger API.
Workaround: To avoid DAML Ledger API errors and data discrepancies, keep the NTP service up and synchronized so that all the servers running VMware Blockchain reflect the accurate time. A quick synchronization check is sketched below.
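As a quick check, one of the following commands can confirm that a node's clock is synchronized. Which utility is available depends on the operating system image on your nodes, so treat these as illustrative.

```
# Report whether the system clock is synchronized (systemd-based nodes).
timedatectl status    # look for "System clock synchronized: yes"

# Equivalent checks, depending on which NTP client the node runs.
chronyc tracking      # chrony
ntpq -p               # ntpd
```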
- In rare instances, the Concord containers are unable to start after failing due to metadata inconsistency
Replica nodes might crash, leaving an inconsistent state between blocks and metadata. A Replica node recovers on its own when the mismatch affects only a single block. When the mismatch spans multiple blocks and block accumulation is disabled, the Replica node fails to start, and the following error message appears in the logs: "Detected more than one block needs to be deleted from the blockchain - unsupported"
Workaround: Complete the following steps:
- Log in to the affected Replica node.
- Run the docker start concord command to fix the mismatched state.
- Repeat the start process until all the mismatched blocks are fixed.
On every start attempt, the Replica node fixes one mismatched block and exits with an error message. After the last mismatched block is removed, the Replica node starts normally and catches up with the other nodes using state transfer. A scripted version of this loop is sketched below.
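A hedged sketch of that restart loop, assuming the Concord container is named concord and that a 30-second pause is enough for a start attempt to either settle or exit; adjust both for your deployment.

```
# Keep starting the Concord container until it stays up. Each failed
# attempt removes one mismatched block before the container exits.
while true; do
  docker start concord
  sleep 30   # assumed settle time; tune for your environment
  if [ "$(docker inspect -f '{{.State.Running}}' concord)" = "true" ]; then
    echo "concord is running; state transfer will catch the node up"
    break
  fi
  echo "concord exited again; retrying"
done
```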
- Replica nodes crash during pruning of large data
In certain circumstances, when there are many thin replica clients, high activity on the TRS gRPC channel results in a memory consumption problem that causes the Replica nodes to crash during the pruning operation.
Workaround: Complete the following steps (a command sketch follows the list):
- Stop the daml_ledger_api and daml_index_db containers on all the Client nodes.
- Wait until the pruning operation is complete.
- Start the daml_ledger_api and daml_index_db containers on all the Client nodes.
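A minimal sketch of these steps, run on each Client node. The container names come from the workaround above; restarting daml_index_db before daml_ledger_api is an assumption about the dependency order between the two containers.

```
# Before the pruning operation starts, stop both containers:
docker stop daml_ledger_api daml_index_db

# ...wait for the pruning operation to complete, then restart,
# index database first (assumed dependency order):
docker start daml_index_db
docker start daml_ledger_api
```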
- When using only Splunk as a metrics endpoint, the Jaeger agent fails with an error message
When using only Splunk as a metrics endpoint, the Jaeger agent attempts to connect to a nonexistent wavefront-proxy container and fails with a wavefront-proxy connection error message.
Workaround: You can safely ignore the error message and log entries.