VMware Cloud Director 10.3.1 | 14 OCT 2021 | Build 18738799 (installed build 18738571)
Check for additions and updates to these release notes.
What's in this Document
- What's New
- System Requirements and Installation
- Documentation
- Previous Releases of VMware Cloud Director 10.3.x
- Resolved Issues
- Known Issues
What's New
- NSX-T Data Center Support
- Support for L2 VPN in the VMware Cloud Director user interface. See L2 VPN for NSX-T Data Center Edge Gateways.
- Support for certificate-based IPsec L3 VPNs. This includes a new user experience for certificate management and allocation.
- DHCP Relay. Tenants in VMware Cloud Director deployments that are backed by NSX-T Data Center can add a DHCP Relay to an edge gateway. When a guest OS transmits a DHCP request soliciting a lease for an IP address and related metadata, the NSX-T edge gateway receives the request and relays it to a designated DHCP server as a unicast flow. When the response is received by the relay, it is forwarded to the requesting guest OS.
- DHCP Binding provides a mechanism to persist DHCP lease information for one or more guest OS so that the IP address and related metadata is fixed until the administrator explicitly releases the lease or changes the setting back to dynamic.
- Simplified management of the VMware Cloud Director system backup and restore procedure by introducing a new appliance management UI and API that allows the automation of the restore process. See Backup and Restore of VMware Cloud Director Appliance.
Note: The VMware Cloud Director 10.3 and earlier backups are incompatible with VMware Cloud Director 10.3.1 and later. If you do not expect to restore a system to version 10.3 or earlier, you can delete these backups and their directory. The directory is located at /opt/vmware/vcloud-director/data/transfer/pgdb-backup.
- Providers and tenants can generate API tokens for use in automation in VMware Cloud Director. This allows users who previously authenticated using their respective security best practices, including two-factor authentication, to grant access for building automation against VMware Cloud Director. See Generate an Access Token.
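As a sketch of how an API token might be consumed from automation: the long-lived token is exchanged for a short-lived access token through an OAuth refresh_token grant, and the access token is then sent as a bearer credential on subsequent API calls. The host name, organization name, and token values below are placeholders, and the exchange URL is an assumption to verify against the VMware Cloud Director API documentation for your version.

```python
# Sketch (not an official client): build the token-exchange request and
# the bearer header used for later VMware Cloud Director API calls.
# "vcd.example.com", "acme", and "my-api-token" are placeholder values.

def token_exchange_request(host, org, api_token):
    """Build the POST request that trades an API token for an access token."""
    # Assumed endpoint shape for the OAuth refresh_token flow; confirm
    # against your VMware Cloud Director API documentation.
    url = f"https://{host}/oauth/tenant/{org}/token"
    body = {"grant_type": "refresh_token", "refresh_token": api_token}
    return url, body

def bearer_header(access_token):
    """Authorization header for subsequent API calls."""
    return {"Authorization": f"Bearer {access_token}"}

url, body = token_exchange_request("vcd.example.com", "acme", "my-api-token")
print(url)  # https://vcd.example.com/oauth/tenant/acme/token
```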
- The Kubernetes Container Clusters plug-in is updated to version 3.1.0, which includes support for Tanzu Kubernetes Grid. For more information on VMware Cloud Director Container Service Extension support of VMware Tanzu Kubernetes Grid, see https://vmware.github.io/container-service-extension/cse3_1/RELEASE_NOTES.html.
- Role-Based Access Control Enhancements
- Tenant roles now include the View CPU and Memory Reservation right. If deactivated, the user does not see Organization VDC level CPU or Memory reservation in the VDC tile view. The reservation numbers are also not available through the VMware Cloud Director API. This allows cloud providers to choose not to show tenants the resource reservation configured for the organization VDCs. By default, the View CPU and Memory Reservation right is enabled to preserve existing behavior after upgrade to this release. See Control Tenant Access to Resource Reservation Information.
- The Administrator Control right no longer manages user creation and management. This release introduces the Manage users and groups right so that tenant roles with VM and vApp management access can also create and manage users.
- Serviceability
- You can use the /metrics API endpoint to monitor the health of the VMware Cloud Director system. The output data is in the Prometheus format that many common tools use. This helps providers with visibility into the health of the VMware Cloud Director system.
- Transfer Server Storage Monitoring. VMware Cloud Director integrates monitoring of the shared transfer storage used by the VMware Cloud Director cells. The system alerts system administrators when the transfer share reaches its allocated capacity. You can configure notifications to appear when the available storage space reaches a certain threshold. You can use the /metrics API endpoint to retrieve information about the transfer server storage. See Enable and Configure Transfer Server Storage Monitoring.
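Because the /metrics output uses the Prometheus exposition format, it can be scraped with standard tooling or parsed directly. A minimal sketch in Python, using an illustrative payload rather than real VMware Cloud Director output (the metric names below are hypothetical):

```python
# Sketch: parse simple Prometheus exposition-format text, such as the
# body returned by the VMware Cloud Director /metrics endpoint.
# The sample payload and metric names are illustrative only.

def parse_prometheus(text):
    """Return {sample_name: float_value} for each non-comment sample line."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name_part, _, value = line.rpartition(" ")
        metrics[name_part] = float(value)
    return metrics

sample = """\
# HELP transfer_storage_bytes_free Hypothetical free-space gauge
# TYPE transfer_storage_bytes_free gauge
transfer_storage_bytes_free 1.2e+10
jvm_threads_current 87
"""
m = parse_prometheus(sample)
print(m["jvm_threads_current"])  # 87.0
```

In practice you would point Prometheus, or any compatible scraper, at the endpoint instead of hand-parsing, but the format above is what either approach consumes.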
System Requirements and Installation
For information about system requirements and installation instructions, see VMware Cloud Director 10.3 Release Notes.
Supported LDAP Servers
Note: VMware Cloud Director 10.3 and later supports Windows Server 2019 as a platform for the LDAP Service.
You can import users and groups to VMware Cloud Director from the following LDAP services.
Platform | LDAP Service | Authentication Methods |
---|---|---|
Windows Server 2012 | Active Directory | Simple, Simple SSL |
Windows Server 2016 | Active Directory | Simple, Simple SSL |
Windows Server 2019 | Active Directory | Simple, Simple SSL |
Linux | OpenLDAP | Simple, Simple SSL |
Documentation
To access the full set of product documentation, go to VMware Cloud Director Documentation.
Previous Releases of VMware Cloud Director 10.3.x
VMware Cloud Director 10.3 Release Notes
Resolved Issues
- New Updating the CIDR for a stretched network fails with a Cannot update network with new subnet because it does not overlap all allocated ips error message
In the VMware Cloud Director tenant portal, when you attempt to update the CIDR for a stretched network, the operation fails with an error message.
Cannot update network with new subnet because it does not overlap all allocated ips
This happens because the Edit modal incorrectly allows the network CIDR field to be edited.
- New Spikes in VMware Cloud Director CPU consumption cause a system slowdown
After opening and closing a large number of VM consoles over a period of time, the CPU consumption spikes and causes slow cell performance.
- New The VMware Cloud Director UI displays the Power On option as greyed out for an empty vApp
If you create an empty vApp, when you attempt to power on the vApp, the UI displays the Power On option as greyed out and you cannot start the vApp.
- vApps containing powered off VMs and one or more VMs in Failed creation status incorrectly appear as Powered off
If a vApp includes powered off VMs and one or more VMs that have the Failed creation status, the status of the vApp incorrectly appears as Powered off instead of Unresolved.
- Auto scale rule stops working
24 hours after you configure an auto scaling rule, the auto scaling service loses connection to VMware Cloud Director and the rule that triggers the growing or shrinking of scale groups stops working.
- When using the VMware Cloud Director Service Provider Admin Portal with Firefox, you cannot load the tenant networking screens
If you are using the VMware Cloud Director Service Provider Admin Portal with Firefox, the tenant networking screens, for example, the Manage Firewall screen for an organization virtual data center, might fail to load. This issue happens if your Firefox browser is configured to block Third-Party cookies.
- In the VMware Cloud Director tenant portal, increasing the vCPU of a VM does not update the CPU shares
If a VDC has an allocation pool set as the allocation model, increasing the vCPU of a VM does not update the CPU shares.
- When turning Alpha features on or off, the VMware Cloud Director UI displays a message that tenants are not exposed to the Alpha features.
When you activate or deactivate the VMware Cloud Director Alpha features, on the confirmation window, the UI displays a message that
Alpha features are not exposed to Tenant users.
However, when Alpha features are active, all users experience the API login changes and all users with the necessary rights can deploy TKGs clusters.
- If you try to use the VMware Cloud Director API to move a vApp across vCenter Server instances when the target datastore is vSAN based, the MoveVApp API fails with an internal server error
When using the /vdc/action/moveVApp API, if the destination is in a different vCenter Server instance and the target datastore is vSAN based, the move fails with an internal server error.
- After upgrading to vCenter Server 7.0 Update 2a or Update 2b, you cannot create Tanzu Kubernetes Grid clusters
If the underlying vCenter Server version is 7.0 Update 2a or Update 2b, when you try to create a Tanzu Kubernetes Grid cluster by using the Kubernetes Container Clusters plug-in, the task fails.
- Users with the General Administrator View right but without the Access All Organization VDCs right cannot view any VMs in the tenant organization
If you grant a user the General Administrator View right but not the Access All Organization VDCs right, the user cannot view any VMs in the tenant organization.
- Creating a distributed firewall rule configured with a stretched network as the source fails with an error message
When attempting to create a distributed firewall rule, if you configure a stretched organization VDC network as the source, the create operation fails with an error message.
Distributed Firewall rule <Firewall-name> has an invalid specification.
- Importing a VM from vCenter Server as a vApp results in an error message
In an organization with three VDCs (VDC-1, VDC-2, VDC-3) backed by two vCenter Server instances (VC-A and VC-B), where VC-A backs VDC-1, and VC-B backs VDC-2 and VDC-3, if you configure two organization networks with the same name in VDC-1 and VDC-2, and VDC-2 shares its organization network with VDC-3, an attempt to import a VM from VC-A as a vApp into VDC-3 results in a Failed status. Under Task Details you see the following error messages.
Unable to start vApp
Could not find backing for network
- Powering on a VM fails with a No compatible host has sufficient resources to satisfy the reservation error message
In a provider virtual data center that has more than one resource pool, when you attempt to power on a VM, the operation fails with an error message.
No compatible host has sufficient resources to satisfy the reservation.
This happens because the placement engine places the VM on a resource pool with insufficient memory resources.
- Creating a vApp from a vApp template deploys the new vApp with an incorrect name
After deploying a vApp from a vApp template, VMware Cloud Director assigns the new vApp the template name instead of the name you provide during the creation.
- Adding a VM to a vApp fails with a Requested disk iops 0 for virtual machine exceeds maximum allowed iops error message
In a vApp, if you assign the virtual disk of a VM to a vCenter Server storage policy with an enabled IOPS setting, attempting to add to the same vApp a new VM that is assigned to a VMware Cloud Director storage policy with an enabled IOPS setting fails with an error message.
Requested disk iops 0 for virtual machine exceeds maximum allowed iops
- You cannot sort vApps by lease date
In the VMware Cloud Director tenant portal, when you attempt to sort vApps by the vApp lease date, the resulting table does not sort the vApps correctly.
- Migrating a VM between vApps across VDCs backed by different vSphere data centers fails with an error message
If two vApps are backed by different vSphere data centers, when you attempt to move a VM from one of them to the other, the operation fails with a
Failed to migrate the VM
error message.
The log file displays the following error.
The operation is not supported on the object
- Adding a NIC to a VM fails with an Operation failed because no suitable resource was found error message
If a shared named disk that you attach to a VM resides on a datastore cluster, attempting to add a NIC to this VM fails with an error message.
Operation failed because no suitable resource was found
- Some VM and vApp operations fail with a <DomainName> should not be provided when using Org settings error message
If you enable a VM to join a domain, updating this VM and using it to deploy a new VM or a vApp fails with an error message.
<DomainName> should not be provided when using Org settings
- An attempt to view the scale groups results in a Cannot read property 'name' of undefined error message
If you delete a VM template that is in a scale group, an attempt to view the scale groups fails with an error message.
Cannot read property 'name' of undefined
- Reauthentication to VMware Cloud Director by using a SAML user fails with a Single sign-on failed for this organization error message
If you log in to VMware Cloud Director by using a SAML user configured to time out after more than 2 hours, when the VMware Cloud Director session expires and you try to reauthenticate by using the same SAML session, the operation fails with an error message.
VMware Cloud Director SSO Failure. Single sign-on failed for this organization.
- The HTML5 UI does not display the computing resources graphics for flex and pay-as-you-go organization VDCs
On the Virtual Data Center dashboard screen, the data center cards do not display the graphics for the compute resources consumption.
- Renaming a vApp fails with a The Operation is denied error message
When you attempt to rename a vApp as a user with the Edit vApp Properties right, the operation fails with an error message.
The Operation is denied
This happens because VMware Cloud Director requires additional Edit VM Network rights.
- Creating or updating an organization VDC network fails with a validation error on field 'name' error message
When creating or updating an organization VDC network, if you configure the name for the network to begin or end with an empty space, the operation fails with an error message.
validation error on field 'name': string value has invalid format
- You cannot exclude the Description column from the vApp templates grid view
The vApp Templates grid editor does not include the Description element and you cannot exclude it from the grid view.
- The Edit Hard Disks dialog box does not display more than 10 storage policies
The Edit Hard Disks dialog box displays only up to 10 storage policies and you cannot see the full list of storage policies available for this virtual machine.
- In a multisite environment, the New External Network wizard displays the port groups only from the local site
In a multisite environment, if you attempt to add an external network backed by vSphere resources on a remote site, the New External Network wizard displays the port groups only from the local site and you cannot select the port groups from the remote site.
- VMware Cloud Director does not apply the existing distributed firewall rules to a VM residing on a newly added resource pool
In an organization VDC with an enabled distributed firewall capability, if a system administrator adds a new resource pool to the provider VDC, and you create a new VM on this resource pool, VMware Cloud Director does not apply the existing distributed firewall rules to the VM.
- vCenter Server instance becomes disconnected and unavailable for operations and VMware Cloud Director is unable to reconnect to the vCenter Server instance
After upgrading to VMware Cloud Director 10.3, the vCenter Server instance becomes disconnected and VMware Cloud Director is continuously unable to reconnect to the vCenter Server instance. You can see the following error message in the log file:
could not load an entity
Known Issues
- New VMware Cloud Director UI and tasks are slow to load and complete
The Artemis message bus communication is not working and when you trigger operations from the UI, they can take up to 5 minutes to complete or might time out. The performance issues can affect operations such as powering on VMs and vApps, provider VDC creation, vApp deployment, and so on.
The log files might contain an error message, such as:
a) Connection failure to <VCD Cell IP Address> has been detected: AMQ229014: Did not receive data from <something> within the 60,000ms
b) Connection failure to /<VCD Cell IP Address>:61616 has been detected: AMQ219014: Timed out after waiting 30,000 ms
c) Bridge is stopping, will not retry
d) Local Member is not set at on ClusterConnection ClusterConnectionImp
Workaround:
For a) and b):
1. Verify that the VMware Cloud Director cells have network connectivity and can communicate with each other.
2. Restart the VMware Cloud Director cell that contains the error message.
For c) and d), restart the VMware Cloud Director cell that contains the error message.
- New The VMware Cloud Director appliance database disk resize script might fail if the backing SCSI disk identifier changes
The database disk resize script runs successfully only if the backing database SCSI disk ID remains the same. If the ID changes for any reason, the script might appear to run successfully but fails. The /opt/vmware/var/log/vcd/db_diskresize.log file shows that the script fails with a No such file or directory error.
Workaround:
1. Log in directly or by using an SSH client to the primary cell as root.
2. Run the lsblk --output NAME,FSTYPE,HCTL command.
3. In the output, find the disk containing the database_vg-vpostgres partition and make note of its ID. The ID is under the HCTL column and has the following sample format: 2:0:3:0.
4. In the db_diskresize.sh script, modify the partition ID with the ID from Step 3. For example, if the ID is 2:0:3:0, in the line echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan, change the ID to 2:0:3:0: echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan
5. After saving the changes, manually re-invoke the resize script or reboot the appliance.
- New Configuring the IP mode for a VM NIC to Static - Manual with IPv4 address results in an error message
For a new or existing VM, if you attempt to configure the IP mode to Static - Manual with an IPv4 address, the validation fails with an error message.
<IP-address> is not a valid IPv6 address.
Workaround: Set the VM's IP mode to Static - IP Pool, assign the IPv4 address, and then switch the IP mode back to Manual.
- New Publishing a vRealize Orchestrator workflow to the VMware Cloud Director service library fails with an error message
When you attempt to publish a vRealize Orchestrator workflow, the operation fails with a 500 Server Error message.
This happens because the API returns a large number of links for each individual tenant to which the workflow is published and causes an overflow in the HTTP headers.
Workaround: To publish the workflow, use cURL or Postman to run an API request with an increased HTTP header size limit.
- New When you use the VMware Cloud Director UI to create a new VM with a placement policy, all virtual machines that are part of the VM group defined in the used placement policy might disappear
When you use the VMware Cloud Director UI to create a new VM that uses a certain placement policy, all virtual machines listed in the VM group that's defined in the used placement policy might disappear from the VM group.
Workaround: When the VMs get deleted from the group, they become non-compliant with the placement policy that you used to create the new VM. To restore the VMs to the group, manually make each of them compliant with the used placement policy.
- New VMware Cloud Director operations, such as powering a VM on and off, take longer to complete
VMware Cloud Director operations, such as powering a VM on or off, take longer to complete. The task displays a Starting virtual machine status and nothing happens.
The jms-expired-messages.logs log file displays an error.
RELIABLE:LargeServerMessage & expiration=
Workaround: None
- New Migrating a VM that is connected to a vSphere-backed external network between resource pools fails
If a VM is connected to an external network which is backed by multiple vSphere networks, and you attempt to migrate the VM between resource pools, the operation fails if the source and destination resource pools are backed by different host clusters and if the destination resource pool does not have access to the external network to which the VM was originally connected.
Workaround: None.
- New You cannot create VMware Cloud Director VDC templates in VMware Cloud Director service environments
VMware Cloud Director service does not support Virtual Data Center (VDC) templates. You can use VDC templates on environments with provider VDCs with an NSX network provider type or an NSX Data Center for vSphere provider type. You cannot use VDC templates on VMware Cloud Director service environments because the provider VDCs have the VMC network provider type.
Workaround: None.
- New Migrating VMs between organization VDCs might fail with an insufficient resource error
If VMware Cloud Director is running with vCenter Server 7.0 Update 3h or earlier, when relocating a VM to a different organization VDC, the VM migration might fail with an insufficient resource error even if the resources are available in the target organization VDC.
Workaround: Upgrade vCenter Server to version 7.0 Update 3i or later.
- New Suspending a VM through the VMware Cloud Director UI results in a partially suspended state of the VM
In the VMware Cloud Director Tenant Portal, when you suspend a VM, VMware Cloud Director does not undeploy the VM, and the VM becomes Partially Suspended instead of Suspended.
Workaround: None.
- New Role name and description are localized in the VMware Cloud Director UI and can cause duplication of role names
The problem occurs because the UI translation does not affect the back end and API. You might create roles with the same names as the translated names which results in perceived duplicate roles in the UI and conflicts with the API usage of role names when creating service accounts.
Workaround: None.
- New The Customer Experience Improvement Program (CEIP) status is Enabled even after deactivating it during the installation of VMware Cloud Director
During the installation of VMware Cloud Director, if you deactivate the option to join the CEIP, after the installation completes, the CEIP status is active.
Workaround: Deactivate the CEIP by following the steps in the Join or Leave the VMware Customer Experience Improvement Program procedure.
- New When you use VMware Cloud Director API version 35.2 or earlier to access a powered off and deployed VM, or a suspended and deployed VM, the power states of the VMs appear as PARTIALLY_POWERED_OFF and PARTIALLY_SUSPENDED, respectively
When you use VMware Cloud Director API version 35.2 or earlier to access a VM that is powered off and deployed or a VM that is suspended and deployed, the power states of the VMs appear as PARTIALLY_POWERED_OFF and PARTIALLY_SUSPENDED, respectively. This happens because of a backward incompatible change in VMware Cloud Director API version 36.0 which introduced these new power states. As a result, API calls from versions 35.2 and earlier that attempt to process these states fail.
Workaround: None.
- New VMware Cloud Director appliance upgrade fails with an invalid version error when FIPS mode is enabled
For VMware Cloud Director versions 10.3.x and later, when FIPS mode is enabled, VMware Cloud Director appliance upgrade fails with the following error.
Failure: Installation failed abnormally (program aborted), the current version may be invalid.
Workaround:
1. Before you upgrade the VMware Cloud Director appliance, deactivate FIPS Mode on the cells in the server group and the VMware Cloud Director appliance. See Activate or Deactivate FIPS Mode on the VMware Cloud Director Appliance.
2. Verify that the /etc/vmware/system_fips file does not exist on any appliance.
3. Upgrade the VMware Cloud Director appliance.
4. Enable FIPS mode again.
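The pre-upgrade verification of the FIPS marker file can be scripted. A minimal sketch, assuming it runs as root on each appliance; the messages are illustrative:

```shell
# Check that the FIPS marker file is gone before starting the upgrade.
if [ -e /etc/vmware/system_fips ]; then
    echo "FIPS marker still present: deactivate FIPS mode before upgrading"
else
    echo "OK: /etc/vmware/system_fips is absent, proceed with the upgrade"
fi
```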
- New When performing a database upgrade for VMware Cloud Director, the upgrade fails with an insert or update on table error
The issue occurs due to stale information in tables associated with a foreign key constraint. Missing data in one of the tables causes a conflict with the foreign key constraint.
Workaround: See VMware knowledge base article 88010.
- New Refreshing the LDAP page in your browser does not take you back to the same page
In the Service Provider Admin Portal, refreshing the LDAP page in your browser takes you to the provider page instead of back to the LDAP page.
Workaround: None.
- New Mounting an NFS datastore from a NetApp storage array fails with an error message during the initial VMware Cloud Director appliance configuration
During the initial VMware Cloud Director appliance configuration, if you configure an NFS datastore from a NetApp storage array, the operation fails with an error message.
Backend validation of NFS failed with: <nfs-file-path> is owned by an unknown user
Workaround: Configure the VMware Cloud Director appliance by using the VMware Cloud Director Appliance API.
- New If you migrate a VM, vApp, or independent disk to a vCenter Server instance that uses well-signed certificates, the migration fails
When trying to migrate a VM, vApp, or independent disk from one vCenter Server instance to another that uses well-signed certificates, the migration fails. The problem occurs when using the VMware Cloud Director UI and API requests like recompose, migrateVms, moveVApp, and so on.
Workaround: None.
- New VMs become non-compliant after converting a reservation pool VDC into a flex organization VDC
In an organization VDC with a reservation pool allocation model, if some of the VMs have nonzero reservation for CPU and Memory, non-unlimited configuration for CPU and Memory, or both, after converting into a flex organization VDC, these VMs become non-compliant. If you attempt to make the VMs compliant again, the system applies an incorrect policy for the reservation and limit and sets the CPU and Memory reservations to zero and the limits to Unlimited.
Workaround:
- A system administrator must create a VM sizing policy with the correct configuration.
- A system administrator must publish the new VM sizing policy to the converted flex organization VDC.
- The tenants can use the VMware Cloud Director API or the VMware Cloud Director Tenant Portal to assign the VM sizing policy to the existing virtual machines in the flex organization VDC.
- New When you enable FIPS mode, the vRealize Orchestrator integration fails with an error related to invalid parameters.
When you enable FIPS mode, the integration between VMware Cloud Director and vRealize Orchestrator does not work. The VMware Cloud Director UI returns an Invalid VRO request params error. The API calls return the following error:
Caused by: java.lang.IllegalArgumentException: 'param' arg cannot be null at org.bouncycastle.jcajce.provider.ProvJKS$JKSKeyStoreSpi.engineLoad(Unknown Source) at java.base/java.security.KeyStore.load(KeyStore.java:1513) at com.vmware.vim.install.impl.CertificateGetter.createKeyStore(CertificateGetter.java:128) at com.vmware.vim.install.impl.AdminServiceAccess.(AdminServiceAccess.java:157) at com.vmware.vim.install.impl.AdminServiceAccess.createDiscover(AdminServiceAccess.java:238) at com.vmware.vim.install.impl.RegistrationProviderImpl.(RegistrationProviderImpl.java:56) at com.vmware.vim.install.RegistrationProviderFactory.getRegistrationProvider(RegistrationProviderFactory.java:143) at com.vmware.vcloud.vro.client.connection.STSClient.getRegistrationProvider(STSClient.java:126) ... 136 more
Workaround: None.
- If you use vRealize Orchestrator 8.x, hidden input parameters in workflows are not populated automatically in the VMware Cloud Director UI
If you use vRealize Orchestrator 8.x, when you attempt to run a workflow through the VMware Cloud Director UI, hidden input parameters are not populated automatically in the VMware Cloud Director UI.
Workaround:
To access the values of the workflow input parameters, you must create a vRealize Orchestrator action that has the same input parameter values as the workflow that you want to run.
1. Log in to the vRealize Orchestrator Client and navigate to Library>Workflows.
2. Select the Input Form tab and click Values on the right-hand side.
3. From the Value options drop-down menu, select External source, enter the Action inputs and click Save.
4. Run the workflow in the VMware Cloud Director UI.
- The vpostgres process in a standby appliance fails to start
The vpostgres process in a standby appliance fails to start and the PostgreSQL log shows an error similar to the following.
FATAL: hot standby is not possible because max_worker_processes = 8 is a lower setting than on the master server (its value was 16).
This happens because PostgreSQL requires standby nodes to have the same max_worker_processes setting as the primary node. VMware Cloud Director automatically configures the max_worker_processes setting based on the number of vCPUs assigned to each appliance VM. If the standby appliance has fewer vCPUs than the primary appliance, this results in an error.
Workaround: Deploy the primary and standby appliances with the same number of vCPUs.
- VMware Cloud Director API calls to retrieve vCenter Server information return a URL instead of a UUID
The issue occurs with vCenter Server instances that failed the initial registration with VMware Cloud Director version 10.2.1 and earlier. For those vCenter Server instances, when you make API calls to retrieve the vCenter Server information, the VMware Cloud Director API incorrectly returns a URL instead of the expected UUID.
Workaround: Reconnect the vCenter Server instance to VMware Cloud Director.
- Upgrading from VMware Cloud Director 10.2.x to VMware Cloud Director 10.3 results in a Connection to sfcbd lost error message
If you upgrade from VMware Cloud Director 10.2.x to VMware Cloud Director 10.3, the upgrade operation reports an error message.
Connection to sfcbd lost. Attempting to reconnect
Workaround: You can ignore the error message and continue with the upgrade.
- After Add and Remove a VDC from a VDC group operations, the status of an edge gateway that is shared across all data centers in the VDC group is displayed as Busy
If a VDC is configured with a provider VDC Kubernetes policy, when you add or remove the VDC from a VDC group, on the Edge Gateway page, the status of the edge gateway that is shared across all data centers in the VDC group is displayed as Busy and you cannot edit this edge gateway.
Workaround:
To add the VDC to the VDC group, you must delete the VDC from the VDC group and add it again.
To remove the VDC from the VDC group, you must add the deleted VDC to the VDC group and delete it again.
- When using FIPS mode, trying to upload OpenSSL-generated PKCS8 files fails with an error
OpenSSL cannot generate FIPS-compliant private keys. When VMware Cloud Director is in FIPS mode and you try to upload PKCS8 files generated using OpenSSL, the upload fails with a Bad request: org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: ... not available: No such algorithm: ... error or a salt must be at least 128 bits error.
Workaround: Deactivate the FIPS mode to upload the PKCS8 files.
- Creation of Tanzu Kubernetes cluster by using the Kubernetes Container Clusters plug-in fails
When you create a Tanzu Kubernetes cluster by using the Kubernetes Container Clusters plug-in, you must select a Kubernetes version. Some of the versions in the drop-down menu are not compatible with the backing vSphere infrastructure. When you select an incompatible version, the cluster creation fails.
Workaround: Delete the failed cluster record and retry with a compatible Tanzu Kubernetes version. For information on the incompatibilities between Tanzu Kubernetes and vSphere, see Updating the vSphere with Tanzu Environment.
- If you have any subscribed catalogs in your organization, when you upgrade VMware Cloud Director, the catalog synchronization fails
After upgrade, if you have subscribed catalogs in your organization, VMware Cloud Director does not trust the published endpoint certificates automatically. Without trusting the certificates, the content library fails to synchronize.
Workaround: Manually trust the certificates for each catalog subscription. When you edit the catalog subscription settings, a trust on first use (TOFU) dialog prompts you to trust the remote catalog certificate.
If you do not have the necessary rights to trust the certificate, contact your organization administrator.
- After upgrading VMware Cloud Director and enabling the Tanzu Kubernetes cluster creation, no automatically generated policy is available and you cannot create or publish a policy
When you upgrade VMware Cloud Director to version 10.3.1 and vCenter Server to version 7.0.0d or later, and you create a provider VDC backed by a Supervisor Cluster, VMware Cloud Director displays a Kubernetes icon next to the VDC. However, there is no automatically generated Kubernetes policy in the new provider VDC. When you try to create or publish a Kubernetes policy to an organization VDC, no machine classes are available.
Workaround: Manually trust the corresponding Kubernetes endpoint certificates. See VMware knowledge base article 83583.
- Entering a Kubernetes cluster name with non-Latin characters deactivates the Next button in the Create New Cluster wizard
The Kubernetes Container Clusters plug-in supports only Latin characters. If you enter non-Latin characters, the following error appears: Name must start with a letter and only contain alphanumeric or hyphen (-) characters. (Max 128 characters).
Workaround: None.
- NFS downtime can cause VMware Cloud Director appliance cluster functionalities to malfunction
If the NFS is unavailable, for example because the NFS share is full or becomes read-only, appliance cluster functionalities can malfunction. The HTML5 UI is unresponsive while the NFS is down or cannot be reached. Other functionalities that might be affected are the fencing out of a failed primary cell, switchover, promoting a standby cell, and so on. For more information about correctly setting up the NFS shared storage, see Preparing the Transfer Server Storage for the VMware Cloud Director Appliance.
Workaround:
- Fix the NFS state so that it is not read-only.
- Clean up the NFS share if it is full.
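As a preventive measure, the transfer share export should grant every cell read-write access with root squashing disabled. A hypothetical /etc/exports entry for a two-cell deployment might look like the following; the path and cell addresses are placeholders, and the exact requirements are in Preparing the Transfer Server Storage for the VMware Cloud Director Appliance.

```
/nfs/vcd-transfer  10.0.0.11(rw,sync,no_subtree_check,no_root_squash)
/nfs/vcd-transfer  10.0.0.12(rw,sync,no_subtree_check,no_root_squash)
```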
- Trying to encrypt named disks in vCenter Server version 6.5 or earlier fails with an error
For vCenter Server instances version 6.5 or earlier, if you try to associate new or existing named disks with an encryption-enabled policy, the operation fails with a Named disk encryption is not supported in this version of vCenter Server. error.
Workaround: None.
- A fast-provisioned virtual machine created on a VMware vSphere Storage APIs Array Integration (VAAI) enabled NFS array, or vSphere Virtual Volumes (VVols) cannot be consolidated
In-place consolidation of a fast-provisioned virtual machine is not supported when a native snapshot is used. Native snapshots are always used by VAAI-enabled datastores and by VVols. When a fast-provisioned virtual machine is deployed to one of these storage containers, that virtual machine cannot be consolidated.
Workaround: Do not enable fast provisioning for an organization VDC that uses VAAI-enabled NFS or VVols. To consolidate a virtual machine with a snapshot on a VAAI or a VVol datastore, relocate the virtual machine to a different storage container.
- If you add an IPv6 NIC to a VM and then you add an IPv4 NIC to the same VM, the IPv4 north-south traffic breaks
Using the HTML5 UI, if you add an IPv6 NIC first or configure an IPv6 NIC as the primary NIC in a VM, and then you add an IPv4 NIC to the same VM, the IPv4 north-south communication breaks.
Workaround: Add the IPv4 NIC to the VM first, and then add the IPv6 NIC.
- When you use the VMware Cloud Director API to create a VM from a template without specifying a storage policy, the new VM might use the storage policy of the source template
When you use the VMware Cloud Director API to create a VM from a template and you don't specify a default storage policy, if there is no default storage policy set for the template, the newly created VM attempts to use the storage policy of the source template itself instead of using the storage policy of the organization VDC in which you are deploying it.
Workaround: None.
- Auto scaling fails with an Operation denied error message
An auto scaling operation fails with an Operation denied error message. This happens because the API token that the auto scaling service uses to authenticate against the VMware Cloud Director API expires, and the rule that triggers the growing or shrinking of scale groups stops working.
Workaround: Restart all VMware Cloud Director cells.
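Client automation that authenticates with API tokens (see Generate an Access Token) faces the same expiry behavior: the long-lived API token must be exchanged for a short-lived access token, and the exchange must be re-run when requests start failing. A minimal standard-library sketch of building that exchange request follows; the host and organization name are placeholders, and the tenant endpoint path is per the API token documentation (provider accounts use /oauth/provider/token instead).

```python
# Hedged sketch: build the POST request that exchanges a VMware Cloud
# Director API token for a short-lived JWT access token via the OAuth
# refresh_token grant. Host, org, and token values are placeholders.
import urllib.parse
import urllib.request

def build_token_request(base_url: str, org: str, api_token: str) -> urllib.request.Request:
    """Return a ready-to-send token-exchange request for a tenant org."""
    url = f"{base_url}/oauth/tenant/{org}/token"
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": api_token,
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_token_request("https://vcd.example.com", "acme", "<api-token>")
print(req.full_url)  # https://vcd.example.com/oauth/tenant/acme/token
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose access token is then passed as a Bearer header on subsequent API calls; when those calls begin returning 401, the automation should repeat the exchange rather than fail.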
- New: A VM with IP mode set to DHCP might not be able to connect to an external network
If a VM with IP mode set to DHCP is connected to a vApp network that uses port forwarding, the VM cannot connect to an external network. This happens because in NSX-backed organization VDCs, enabling IP masquerading for a vApp network does not create a corresponding SNAT rule on the vApp edge in NSX to allow outbound access for a VM without a static IP.
Workaround: Add to the vApp network a second VM with a static IP, and create an explicit DNAT rule that allows access from the external network to the vApp network.