VMware Cloud Director Object Storage Extension 2.2.3.1 | 27 JUN 2024 | Build 24036531

Check for additions and updates to these release notes.
This release resolves CVE-2024-22276. For more information on this vulnerability and its impact on VMware by Broadcom products, see VMSA-2024-0015.
You can upgrade directly to VMware Cloud Director Object Storage Extension 2.2.3.1 from versions 2.X. See Upgrading VMware Cloud Director Object Storage Extension.
Starting with VMware Cloud Director Object Storage Extension 2.2.3, some operating systems, such as CentOS 7, are no longer supported. Verify that your operating system is supported before you install or upgrade to VMware Cloud Director Object Storage Extension 2.2.3.
If you plan to install VMware Cloud Director Object Storage Extension on a new operating system, you can migrate your existing configuration by using the ose config export and ose config import commands.
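The migration path above can be sketched as follows. The file path and the redirection/argument syntax are assumptions for illustration, not documented defaults; consult the product documentation for the exact ose command syntax.

```shell
# On the current VM: export the existing configuration
# (redirecting the output to a file is an assumption about how export emits its data)
ose config export > /tmp/ose-config.bak

# Copy /tmp/ose-config.bak to the new VM, then import it there
ose config import /tmp/ose-config.bak
```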
When you access a tenant organization as a cloud provider, you can see only local resources
When you access a tenant organization, you can see only the local resources of that organization. When you open the VMware Cloud Director Object Storage Extension Dashboard or Buckets page, you can see and select only the local organizations.
S3 API requests authenticated with application credentials do not support the following use cases:
Accessing a shared bucket if another user grants you permissions for the bucket.
Deleting multiple objects simultaneously with a single API request.
Copying objects from buckets that you own.
If you are using ECS storage, you cannot remove object tags.
When you try to remove an object tag, the operation fails with an error.
VMware Cloud Director and the underlying storage systems have different limitations on user names. To use VMware Cloud Director Object Storage Extension, user names must comply with both the requirements of VMware Cloud Director and the underlying storage system. A best practice is to use short user names (under 50 bytes) and to use alphanumeric characters.
If you are using Cloudian storage, the maximum length of user IDs is 255 bytes.
If you are using Dell ECS 3.4 or earlier, the maximum length of user IDs is 91 bytes.
If you are using Dell ECS 3.6, the maximum length of user IDs is 64 bytes.
Bucket synchronization supports up to 10 million objects per synchronization job
When the cloud provider enables bucket synchronization for a tenant in the provider portal, the synchronization supports up to 10 million objects for the tenant. VMware Cloud Director Object Storage Extension 2.1 does not support synchronizing more than 10 million objects in a single bucket synchronization job.
If you are using ECS storage, you cannot use the S3 API or the Find a Bucket feature to visit a bucket that belongs to a different tenant organization in the ECS platform.
All documentation is available on the VMware Cloud Director Object Storage Extension Documentation page.
Deleting an object from an existing bucket after upgrading to VMware Cloud Director Object Storage Extension version 2.2.3 fails with an error
If you upgrade to VMware Cloud Director Object Storage Extension version 2.2.3, then try to delete an object from an existing bucket, the process fails with the following error:
Failed to exchange user info between Cloud Director and storage platform.
The issue is observed if the tenant user who attempts the operation has a user name that contains special characters.
Workaround:
Navigate to the PostgreSQL database that VMware Cloud Director Object Storage Extension uses.
In the bucket_info table, add the encoded tenant user name to the storage_user_id column for the affected buckets.
You can find the encoded user name in the platform_user_mapping table by selecting the platform_user_id that corresponds to the user_name.
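The database steps above can be sketched with psql. The database name, the connecting user, the tenant user name, and the WHERE clause for the affected buckets are all placeholders you must adapt to your deployment; only the table and column names come from the workaround itself.

```shell
# 1. Find the encoded user name for the affected tenant user
#    (database name "ose" and user "postgres" are placeholders)
psql -U postgres -d ose -c \
  "SELECT platform_user_id FROM platform_user_mapping WHERE user_name = '<tenant-user>';"

# 2. Write the encoded name into the storage_user_id column of the affected buckets
psql -U postgres -d ose -c \
  "UPDATE bucket_info SET storage_user_id = '<encoded-user-name>' WHERE <condition for the affected buckets>;"
```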
Backing up an entire cluster fails
When you try to back up a Kubernetes cluster where a pod contains persistent volumes in the primary node, the process enters a partially failed status.
Workaround: Activate pod scheduling on the Kubernetes control plane primary nodes by running the following commands:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
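After removing the taints, you can confirm the change with a standard kubectl check:

```shell
# Each node should no longer list the master/control-plane NoSchedule taints
kubectl describe nodes | grep -i taints
```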
The Kubernetes cluster protection status remains as Restoring
After you perform a restore task in the target Kubernetes cluster, the cluster protection status remains as Restoring. The problem might occur when VMware Cloud Director Object Storage Extension continues to monitor the restore task and the task remains in an InProgress state.
Workaround: Manually delete the restore task.
Get the name of the InProgress restore task by running the following command:
velero -n velero-09ad8e66-1841-4933-ad50-162170ed0ae7 restore describe
Delete the restore task by name by running the following command:
velero -n velero-09ad8e66-1841-4933-ad50-162170ed0ae7 restore delete {restoreName}
It takes a few moments for the cluster protection status to return to its normal state.
The S3 service of VMware Cloud Director Object Storage Extension is unavailable
When you start or view VMware Cloud Director Object Storage Extension, the VMware Cloud Director Object Storage Extension service is active, but the S3 service is unavailable, with the following error message in the log file:
S3_TOKEN_AUTH_ERROR
The issue is observed if the time gap between the S3 client and the VMware Cloud Director Object Storage Extension VM is over 20 seconds.
Workaround 1: Reduce the time gap between the S3 client and the VMware Cloud Director Object Storage Extension VM to less than 20 seconds, for example, by configuring NTP for the VMware Cloud Director Object Storage Extension VM.
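One way to keep the VM clock synchronized, assuming a systemd-based guest operating system (an assumption about your deployment), is:

```shell
# Enable NTP time synchronization on the VM
timedatectl set-ntp true

# Verify that the system clock reports as synchronized
timedatectl status
```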
Workaround 2: Set the oss.s3.request-expire-time=3600 configuration property and restart the VMware Cloud Director Object Storage Extension service.
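The property in Workaround 2 is a configuration setting rather than a shell command; where it lives is deployment-specific. The file path and service restart step below are assumptions for illustration only.

```shell
# Append the property to the Object Storage Extension configuration file
# (the path is an assumption; adjust for your deployment)
echo "oss.s3.request-expire-time=3600" >> /opt/vmware/objectstorage/conf/application.properties

# Restart the service so the new request-expiry window takes effect
# (the service name is an assumption; use your deployment's restart mechanism)
systemctl restart ose
```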
Region metrics on the provider portal's tenant onboarding page do not distinguish region-specific metrics data
With a multi-region deployment, when multiple regions are activated for a tenant organization, the active region cards show global consumption metrics, not region-specific data. The problem occurs because region-specific metrics are not yet supported.
Workaround: None.