The release notes cover the following topics:
VMware Data Services Manager enables Enterprise organizations to host their own Database-as-a-Service offering on VMware-based infrastructure.
This minor 2.0.3 patch release fixes issues observed in previous 2.0.x releases of VMware Data Services Manager.
Upgrade to DSM 2.0.3
When you upgrade the DSM appliance from an earlier release to 2.0.3, you must perform additional steps and manually run a script to complete the upgrade. This script creates a standby vCenter service account that supports service account password rotation.
After a successful upgrade, perform these additional steps to run the script:
SSH into the DSM appliance.
Run the script with the following command:
python3 /opt/vmware/tdm-provider/bin/create_svc_account.py
This script prompts for vCenter SSO credentials to create a standby service account.
Observe the script execution until you see the message:
Create alternating vcenter service accounts completed successfully.
This message indicates that the script has completed successfully.
Resolved Issues
VMware Data Services Manager 2.0.3 resolves the following issues:
Typically, VMware Data Services Manager automatically rotates the vCenter service account password before it expires. However, the database workloads continued to use the old password, causing the account to be locked. When the service account is locked, plugin authentication and other related vCenter operations fail.
VMware Data Services Manager 2.0.3 addresses this issue. As a result, the workloads properly reconcile the change in the service account password.
In previous releases of VMware Data Services Manager, when database clusters were deleted, the underlying storage claims were not cleaned up. Starting from the 2.0.3 release, this issue has been resolved.
In previous releases, when you were creating infrastructure policies, port groups did not appear on the Network Port Groups pane if more than 100 port groups were configured in vCenter. This problem has been resolved.
In previous releases, the permitted operations on DB clusters with disabled release versions were too limited. Starting with 2.0.3, the only restriction is creating new clusters. This applies to all supported data services, including PostgreSQL, MySQL, and AlloyDB.
Known Issues
Edit operations fail on AlloyDB after disabling DSR during an upgrade
If the AlloyDB DSR was disabled during the upgrade from 2.0.2 to 2.0.3, edit operations on AlloyDB fail.
Workaround: Re-enable the AlloyDB DSR.
If you do not run the additional steps required when upgrading the DSM appliance from an earlier release to 2.0.3, system disruptions might occur
The same problems occur when the service account created in 2.0.2 has already expired.
You can observe the following symptoms:
vCenter DSM plugin is not working.
In the DSM console, the Create Database page shows this error: At least one infrastructure policy must be enabled and in a ready state before creating a database.
In the DSM console, the Infrastructure Policy > Summary page shows the Password of the user logging on is expired error.
The DSM System Audit page shows the following:
Component as SERVICE_ACCOUNT
EventType as PASSWORD_ROTATION
EventDetails as Failed to rotate active service account
Workaround:
If you didn't run the script after the DSM appliance upgrade to 2.0.3, perform the following steps.
If the service account created in 2.0.2 expired, upgrade to DSM 2.0.3 and perform these steps.
SSH into the DSM appliance.
Run the following command to create the standby service account:
python3 /opt/vmware/tdm-provider/bin/create_svc_account.py
The script prompts for vCenter SSO credentials to create a standby service account.
Observe the script execution until you see the message:
Create alternating vcenter service accounts completed successfully
This indicates that the script has completed successfully.
Wait for the next rotation attempt by the system, which is scheduled for 00:00 UTC.
After the rotation, the clusters and the DSM appliance will start using the new account, and the problems should be resolved.
VMware Data Services Manager 2.0.2 introduces the following enhancements:
For updates and changes in documentation, see Updated Information.
Upgrade Considerations
Resolved Issues
VMware Data Services Manager 2.0.2 resolves the following issues:
Enabling a Data Service fails if DSM detects that the configured Infrastructure Policies use port groups that do not have unique names within the Datacenter. This issue typically occurs in NSX environments.
The 2.0.2 release introduces a solution that helps you avoid this issue. When you use APIs to create an Infrastructure Policy in an environment where NSX manages port groups with the same names, use port group MOIDs. If you create Infrastructure Policies through the UI, DSM references port groups by their MOIDs.
Domain name that ends with .local is not resolved by DSM Provider.
Note: This problem has been resolved in 2.0.2. However, if you configure external/database-backup storage and the FQDN ends with .local, the node DNS config cannot resolve it. See Known Issues.
You cannot use duplicate $ characters in the Provider VM root password during deployment or when modifying the Provider VM root password through the DSM UI.
Regex validator issue for email on the Create Permission DSM UI page: a valid email address is shown as invalid in the create permission flow.
The admin is now prevented from configuring a Provider S3 repository with insufficient permissions.
Problems with the Database Options section of the DSM UI page.
Plugin authentication failures on vCenter servers configured with an SSO domain other than vsphere.local.
Plugin registration and telegraf service failures caused by a system-generated service account password containing special characters.
VMware Data Services Manager 2.0.1 introduces the following enhancements:
Resolved Issues
If you configure the Provider Repo with an endpoint that has a CA-signed certificate, enabling Data Services fails.
Enablement of the Data Services remains In Progress when a vCenter standard switch with a VM Network port group does not exist.
VMware Data Services Manager version 2.0 introduces the following new features and enhancements:
Modified architecture - In 2.0, VMware Data Services Manager has been re-architected to provide tighter integration with vSphere and vSAN. In addition, VMware Data Services Manager now offers declarative Kubernetes APIs for the vSphere administrator, the DSM administrator, and the DSM users.
DSM installation as a vSphere plugin - VMware Data Services Manager version 2.0 removes the infrastructure management component from the DSM console and creates a vSphere DSM UI plugin. With the DSM plugin, the vSphere administrator can use the vSphere Client to configure, manage, and monitor the database infrastructure.
Self-service layer - Developers can self-serve databases from tooling they are familiar with, starting with Aria Automation and Kubernetes.
Tenancy has been moved out of VMware Data Services Manager to the self-service layer - Customers with Aria Automation can integrate with DSM and leverage projects. In addition, customers who are leveraging Kubernetes for their developers can install a consumption operator that enables a K8s experience for the developers while providing management via DSM.
New declarative API - The infrastructure and database management components are now available through a K8s declarative API making it a familiar standard experience for anyone preferring to use the API.
Removed agents - Agents are no longer needed. Once installed in vCenter, DSM can create databases in any of the vSphere clusters without installing any additional components.
Infrastructure policies - VMware Data Services Manager version 2.0 introduces a concept of infrastructure policies. These policies are configured by a vSphere administrator, who selects clusters, resource pools, storage policies that map to any of the underlying storage available on that cluster, and other components of the infrastructure policies. Infrastructure policies allow the infrastructure team to proactively determine where databases will be created and then monitor those environments as they are created.
VM classes - VMware Data Services Manager version 2.0 offers VM classes that specify the compute and memory resources allotted to a provisioned database VM. Default VM classes are available, but you can also configure custom VM classes.
User management through the vSphere Client - A vSphere administrator can create and manage DSM admin and DSM user roles through the DSM plugin in the vSphere Client.
IP pools - Internal DNS has been replaced with static IP addresses. Each database VM is now assigned IP addresses from an IP pool defined within the infrastructure policy. During actions that require recreating the VM, rolling upgrades are performed. As a result, the pool must contain at least one more IP address than the number actively in use.
Removed the DSM controlled DNS server - To improve resilience of the databases, the DSM managed DNS server has been removed. If DNS is still required, the database is provisioned with an IP address that can be used inside a customer DNS server.
The following table identifies the supported component versions for VMware Data Services Manager version 2.0.x:
Component | Supported Versions |
---|---|
vCenter | 7.0.3i and later |
ESXi | 7.0 and later |
VMFS | 5 and 6 |
PostgreSQL | 12.17, 13.13, 14.10, and 15.5 |
MySQL | 8.0.29, 8.0.31, 8.0.32, and 8.0.34 |
The DSM 2.0.x release includes the following software component versions:
Component | Version |
---|---|
alpinedb-instance | v2.2.1-dev.163.g1c184ae1 |
alpinedb-operator | v2.2.1-dev.163.g1c184ae1 |
antrea | v1.11.2_vmware.1 |
cert-manager-cainjector | v1.12.2_vmware.1 |
cert-manager-controller | v1.12.2_vmware.1 |
cert-manager-webhook | v1.12.2_vmware.1 |
cloud-provider-vsphere | v1.26.2_vmware.1 |
coredns | v1.10.1_vmware.7 |
csi-attacher | v4.3.0_vmware.2 |
csi-livenessprobe | v2.10.0_vmware.2 |
csi-node-driver-registrar | v2.8.0_vmware.2 |
csi-provisioner | v3.5.0_vmware.2 |
csi-snapshotter | v6.2.2_vmware.2 |
etcd | v3.5.7_vmware.6 |
fluent-bit | v2.1.6_vmware.1 |
kapp | v0.48.2_vmware.1 |
kube-apiserver | v1.26.8_vmware.1 |
kube-controller-manager | v1.26.8_vmware.1 |
kube-proxy | v1.26.8_vmware.1 |
kube-scheduler | v1.26.8_vmware.1 |
kube-vip | v0.5.7_vmware.2 |
kubernetes-csi_external-resizer | v1.8.0_vmware.2 |
Kubernetes node OS | ubuntu 20.04 |
mysql-operator | 1.11.0-rc.1-42-g3ed396f8 |
pause | 3.9 |
postgres-operator | v2.2.1-dev.162.gda548a37 |
telegraf | 1.29.0 |
telegraf-chart | 1.1.1 |
volume-metadata-syncer | v3.1.2_vmware.1 |
vsphere-block-csi-driver | v3.1.2_vmware.1 |
VMware Data Services Manager version 2.0.x has the following limitations:
VMware Data Services Manager does not support OpenLDAP.
VMware Data Services Manager does not support any form of multi-vCenter deployment, such as Enhanced Linked Mode (ELM), Hybrid Linked Mode (HLM), or cloud linked mode.
Topology limitations:
MySQL is single server only.
Postgres can have a multi-node topology; however, that topology is currently limited to a single vSphere cluster.
Scaling limitations:
You cannot scale down vertically because a VM class cannot be changed to a smaller one and disk space cannot be reduced.
You cannot scale down horizontally because you cannot change from a higher to a lower node topology.
Certificate limitations:
VMware Data Services Manager 2.0.x has the following known issues:
New If you configure external/database-backup storage and the FQDN ends with .local, node DNS config can't resolve it
As a result, database clusters might be stuck in an In Progress state.
Workaround: Avoid using FQDNs ending with .local when configuring the storage repo.
Domain name that ends with .local is not resolved by DSM Provider (Resolved in 2.0.2)
DSM Provider's resolver, which is systemd-resolved, doesn't resolve domain names that end with .local using the configured DNS server.
Workaround:
Add the DNS record to /etc/hosts.
Run the following command:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
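The two steps above can be sketched together in the shell. This is a minimal sketch, not the definitive procedure: the IP address and FQDN are placeholder values, and the entry is staged in a scratch file first so you can verify it before touching /etc/hosts as root.

```shell
# Placeholder values - substitute your storage endpoint's IP and FQDN.
entry="10.0.0.5 backup-repo.corp.local"
scratch=$(mktemp)
printf '%s\n' "$entry" >> "$scratch"
# Verify the staged entry before applying it.
grep -q 'backup-repo.corp.local' "$scratch" && echo "entry staged"
# As root on the Provider VM, apply the entry and bypass the stub resolver:
#   cat "$scratch" >> /etc/hosts
#   ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
```

The symlink step makes /etc/resolv.conf point at systemd-resolved's full upstream resolver configuration instead of the local stub, which is what skips the special-cased .local handling.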
Selecting the Automatically power on deployed VM checkbox during the DSM installation causes failures of the installation process
This problem is caused by a known issue in vCenter.
Workaround: When installing the DSM plugin, leave the Automatically power on deployed VM checkbox unchecked. For more information, see Deploying the VMware Data Services Manager Plugin in vSphere Client.
AlloyDB Columnar Engine doesn't work properly
You can observe failures in database logs when data is added to the columnar engine following the AlloyDB documentation.
The AlloyDB Tech Preview deployment in DSM 2.0 isn't enabled by default, and the database instance configuration doesn't allocate the required shared memory.
Workaround: To use the AlloyDB Columnar Engine in DSM 2.0, add these two properties to the custom database configuration:
google_columnar_engine.enabled = 'on'
dynamic_shared_memory_type = 'mmap'
You should take into account the potential negative performance impact of dynamic_shared_memory_type = 'mmap' described in the Postgres documentation.
If the admin username is "postgres", Postgres DB creation remains in the "InProgress" state
When you create a Postgres DB and choose "postgres", the default and current name for the Postgres "initial" user, as the admin username, DB creation remains indefinitely in the "InProgress" state. The same problem is likely to occur with the built-in roles listed in Predefined Roles.
Workaround: When creating a Postgres database, avoid choosing "postgres" or any name in Predefined Roles as the admin username.
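The restriction above can be checked up front before submitting a database creation request. A minimal sketch, assuming a partial, illustrative list of Postgres predefined roles (consult the Predefined Roles documentation for the complete list):

```shell
# Reject admin usernames that collide with the Postgres initial user or with
# predefined roles. The list below is partial and illustrative only.
reserved="postgres pg_monitor pg_read_all_data pg_write_all_data pg_signal_backend"
is_reserved() {
  case " $reserved " in
    *" $1 "*) return 0 ;;   # name found in the reserved list
    *) return 1 ;;
  esac
}
is_reserved "postgres" && echo "postgres: reserved, choose another admin username"
is_reserved "appadmin" || echo "appadmin: ok"
```

A pre-flight check like this avoids the stuck "InProgress" state entirely rather than diagnosing it afterwards.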
If an invalid custom database option is applied, performing further changes to the Postgres cluster might cause problems
You can observe the problems when you follow these steps: create a Postgres cluster without any custom database option configured and wait for the cluster to become Ready; then edit the database options by adding or changing to an invalid option. The Postgres cluster status is updated to reflect that an invalid option is applied, and a WARNING alert is triggered. However, if you perform another operation, such as a scale-up, the cluster might remain in the In Progress state indefinitely.
Workaround: You must first fix the invalid database option before applying any further changes to the Postgres cluster.
Data services releases are not visible in the Version & Upgrade tab after successful release processing
This problem can occur when the data services are not fully downloaded.
Workaround:
Download system logs from the UI.
Search for the error Source and destination checksums for data plane bundle are not matching.
If this error is found, redeploy the provider appliance.
Trigger release processing and verify that the data services releases are available in the Version & Upgrade tab after release processing.
If the problem persists, contact the support team for further resolution.
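The log search in the steps above can be scripted. A sketch that assumes the downloaded system logs were extracted to ./dsm-logs; the directory name and the sample log file are illustrative, created here only so the commands run end to end:

```shell
# Illustrative setup: a sample log line quoting the documented error text.
mkdir -p ./dsm-logs
echo "ERROR Source and destination checksums for data plane bundle are not matching" \
  > ./dsm-logs/provider.log
# The actual check: list any log files containing the checksum-mismatch error.
if grep -rl "checksums for data plane bundle are not matching" ./dsm-logs; then
  echo "checksum mismatch found: redeploy the provider appliance"
fi
```

If grep prints no matching files, the checksum error is not present and a different root cause should be investigated.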
Operations/tasks remain stuck In Progress when a new desired state is set
In this 2.0 release, there is no synchronization or compatibility validation for cluster-mutating operations.
Workaround: Wait until cluster modification operations complete before initiating new operations.
After a PostgreSQL cluster backup location change, the status of lastSuccessfulBackup and Wal archival is not immediately updated and might show stale information
The backup location change triggers a new initial backup in the new location, and it might take time until the operation succeeds and the backup status is refreshed. The proper behavior would be to reset the status fields until the new status is confirmed.
Workaround: After changing the backup location, wait for the Postgres cluster to display the Ready state, and check its lastSuccessfulBackup property.
If this property does not change in a timely manner, investigate the root cause. Note that the time to create a new backup in the new location depends on the size of the database.
To investigate, search for ConfigurePgbackrestTask in the logs contained within the log bundle, in the nodename podlogs.tgz archive.
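The investigation above can be done from a shell. A sketch that fabricates a tiny stand-in podlogs archive so the commands are runnable end to end; the real archive names, layout, and log content come from your log bundle:

```shell
# Illustrative setup: build a stand-in podlogs archive with a sample task line.
mkdir -p bundle/node1
echo "INFO ConfigurePgbackrestTask: reconfiguring backup repository" \
  > bundle/node1/postgres-operator.log
tar -czf node1-podlogs.tgz -C bundle node1
# The actual investigation: extract the archive and search for the task.
mkdir -p extracted
tar -xzf node1-podlogs.tgz -C extracted
grep -r "ConfigurePgbackrestTask" extracted
```

Matching lines show when the backup reconfiguration task ran and whether it reported errors, which narrows down why lastSuccessfulBackup is not advancing.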
Database cluster provisioning does not work on a datacenter different from the datacenter used in the first configured infrastructure policy
If the first infrastructure policy created by the vSphere admin uses one of the vCenter's datacenters, but another infrastructure policy is then created for a different datacenter (from the vCenter's inventory tree), the second infrastructure policy cannot be used for provisioning database workloads.
Workaround: The vSphere admins must configure all infrastructure policies to use the same datacenter object from the vCenter's inventory tree.
After the successful creation of the database, a single occurrence of the error message VFS: Can't find ext4 filesystem is displayed on the console of the DB VM
While creating the database cluster, Kubernetes CSI attempts to mount the database filesystem before it becomes ready, resulting in the displayed error message. This behavior is anticipated and does not impact the database cluster.
Workaround: None.
When using kubectl get -w against the DSM Kubernetes API, the process might fail to start or might not reflect resource events properly
Due to an internal configuration, kubectl watch events are not being propagated correctly.
Workaround: None.
Modified When vCenter resources are included in a folder, problems with infrastructure policies or resource pools might occur
DSM 2.0 does not support a nested folder structure for vCenter resources. When vCenter resources are located under a folder, you can observe various problems. For example, infrastructure policies associated with these resources are shown as invalid. When you attempt to create infrastructure policies, you cannot see resource pools of host clusters that belong to a folder.
Workaround: Avoid using folders for the vCenter resources. If you have a host cluster in a folder, place it directly under the datacenter.