VMware Cloud Director 10.3.3.2 | 07 JUL 2022 | Build 20030439 (installed build 20027910)
Check for additions and updates to these release notes.
The VMware Cloud Director 10.3.3.2 release provides bug fixes and updates the VMware Cloud Director appliance base OS and the VMware Cloud Director open-source components.
For information about system requirements and installation instructions, see VMware Cloud Director 10.3 Release Notes.
To access the full set of product documentation, go to VMware Cloud Director Documentation.
Using the VMware Cloud Director API 36.2 or later to power off or discard the suspended state of a VM also results in undeploying the VM
In VMware Cloud Director API 36.2 and later versions, the following API requests undeploy the VM in addition to powering it off or discarding its suspended state.
POST /vApp/{vm-id}/action/powerOff
POST /vApp/{vm-id}/action/discardSuspendedState
This change creates a backward incompatibility with API versions 36.1 and 36.0, in which these API calls result only in powering off or discarding the suspended state of the VM, respectively. This issue is resolved in this release - if you are using an API client version 36.1 or 36.0, the API request results only in powering off or discarding the suspended state of the VM, respectively.
When you use VMware Cloud Director API version 35.2 or earlier to access a powered off and deployed VM, or a suspended and deployed VM, the power states of the VMs appear as PARTIALLY_POWERED_OFF and PARTIALLY_SUSPENDED, respectively
When you use a version of the VMware Cloud Director API earlier than 36.0 to access a VM that is powered off and deployed or a VM that is suspended and deployed, the power states of the VMs appear as PARTIALLY_POWERED_OFF and PARTIALLY_SUSPENDED, respectively. This happens because of a backward-incompatible change in VMware Cloud Director API version 36.0, which introduced these new power states. As a result, API calls from versions 35.2 and earlier that attempt to process these states fail. This issue is resolved in this release. If you are using an API client version earlier than 36.0, the states of the VMs appear as POWERED_OFF and SUSPENDED, respectively.
VMware Cloud Director UI does not display information about an organization change event in the list of all events
After you modify an organization name and description, VMware Cloud Director does not report this event. As a result, the Events screen does not display the organization modify event in the list of all events.
Upgrading from VMware Cloud Director 10.1.x and 10.2.x to version 10.3.3 or 10.3.3.1 fails with an error message
If you upgrade from VMware Cloud Director 10.1.x and 10.2.x to version 10.3.3 or 10.3.3.1, the process fails with an error message.
<JAVA_HOME>/lib/ext exists, extensions mechanism no longer supported; Use -classpath instead..Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. ERROR: there was a problem converting the existing keystores in
VMware Cloud Director does not display the actual number of VMs in a scale group
If you edit an existing rule or you add a new one while a rule is triggering the growing or shrinking of a scale group, the auto scaling algorithm does not work as configured or completely stops working. As a result, the number of listed VMs in the scale group is inconsistent with the actual number of VMs that reside in the VDC.
The Create PVDC Kubernetes Policy wizard does not display the full list of available storage policies
The Storage Policy page of the Create PVDC Kubernetes Policy wizard displays only the first page of available storage policies, and you cannot navigate to the next pages.
Subscribing to an external catalog fails with an Unable to establish connection: Connection timed out error message
If a cell connects to the Internet by using a proxy, when you attempt to subscribe to an external catalog, the operation fails with an error message.
Unable to establish connection: Connection timed out
Powering on a vApp in an elastic organization VDC fails with a The operation could not be performed, because there are insufficient memory resources error message
In an elastic organization VDC, powering on a vApp fails with a The operation could not be performed, because there are insufficient memory resources error message even if the organization VDC has sufficient resources.
This happens because the cluster on which the vApp resides does not have resources to power on the vApp.
An attempt to update an existing firewall rule on an edge gateway results in an error message
If you configure an average of 1000 or more application port profiles for an NSX-T Data Center edge gateway, an attempt to update the firewall rules for this edge gateway fails with an error message.
Cannot update firewall for Edge Gateway as it contains Application Port Profiles that are not present or invalid.
An attempt to display the CSE cluster list fails with an error message
As a non-administrator user, when you attempt to display the CSE cluster list by running the vcd cse cluster list command, the operation fails with an error message.
/opt/vmware/cse/python/lib/python3.7/site-packages/dataclasses_json/core.py:171: RuntimeWarning: `NoneType` object value of non-optional type description detected when decoding DefEntityType. warnings.warn(f"`NoneType` object {warning}.", RuntimeWarning)
Selecting an organization from the list of VMware Cloud Director quick search results leads to an Entity xxxxx doesn't exist error
When using the VMware Cloud Director quick search to find an organization, on the list of results, clicking on the organization name results in an error message.
Entity xxxxx doesn't exist
Regenerating a MAC address for a Multisite stretched network fails with an error message
When you attempt to regenerate a MAC address for a Multisite stretched network by using the cell management tool, the operation fails with an error message.
Error executing command: Use of @OneToMany or @ManyToMany targeting an unmapped class:
Adding a provider VDC to or removing one from an existing VM placement policy fails with a java.lang.NullPointerException error message
If an existing VM placement policy is configured with multiple provider VDCs, attempting to add a new provider VDC or removing an existing one results in an error message.
java.lang.NullPointerException
New - VMware Cloud Director UI and tasks are slow to load and complete
The Artemis message bus communication is not working and when you trigger operations from the UI, they can take up to 5 minutes to complete or might time out. The performance issues can affect operations such as powering on VMs and vApps, provider VDC creation, vApp deployment, and so on.
The log files might contain an error message, such as:
a) Connection failure to <VCD Cell IP Address> has been detected: AMQ229014: Did not receive data from <something> within the 60,000ms
b) Connection failure to /<VCD Cell IP Address>:61616 has been detected: AMQ219014: Timed out after waiting 30,000 ms
c) Bridge is stopping, will not retry
d) Local Member is not set at on ClusterConnection ClusterConnectionImp
Workaround:
For a) and b):
Verify that the VMware Cloud Director cells have network connectivity and can communicate with each other.
Restart the VMware Cloud Director cell that contains the error message.
For c) and d), restart the VMware Cloud Director cell that contains the error message.
New - The VMware Cloud Director appliance database disk resize script might fail if the backing SCSI disk identifier changes
The database disk resize script runs successfully only if the backing database SCSI disk ID remains the same. If the ID changes for any reason, the script might appear to run successfully but fails. The /opt/vmware/var/log/vcd/db_diskresize.log shows that the script fails with a No such file or directory error.
Workaround:
Log in directly or by using an SSH client to the primary cell as root.
Run the lsblk --output NAME,FSTYPE,HCTL command.
In the output, find the disk containing the database_vg-vpostgres partition and make note of its ID. The ID is under the HCTL column and has the following sample format: 2:0:3:0.
In the db_diskresize.sh script, replace the partition ID with the ID from Step 3. For example, if the ID is 2:0:3:0, in the line
echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan
you must change the ID to 2:0:3:0.
echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan
After saving the changes, manually re-invoke the resize script or reboot the appliance.
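The edit in Step 4 can be sketched as a single substitution. The snippet below is a hedged illustration only: it performs the swap on a temporary copy of the rescan line rather than on the real db_diskresize.sh script, and the old and new HCTL IDs (2:0:2:0 and 2:0:3:0) are the sample values from the procedure.

```shell
# Sketch of Step 4, shown against a temporary copy of the rescan line.
# On the appliance, apply the same sed substitution to db_diskresize.sh.
# Assumed IDs: old HCTL 2:0:2:0, new HCTL 2:0:3:0 (from lsblk --output NAME,FSTYPE,HCTL).
script=$(mktemp)
printf '%s\n' 'echo 1 > /sys/class/scsi_device/2\:0\:2\:0/device/rescan' > "$script"
# Swap the escaped HCTL ID inside the rescan line:
sed -i 's/2\\:0\\:2\\:0/2\\:0\\:3\\:0/' "$script"
cat "$script"
```

After the substitution, the file contains the rescan line with the new ID, matching the example above.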
New - Publishing a vRealize Orchestrator workflow to the VMware Cloud Director service library fails with an error message
When you attempt to publish a vRealize Orchestrator workflow, the operation fails with a 500 Server Error error message.
This happens because the API returns a large number of links for each individual tenant to which the workflow is published and causes an overflow in the HTTP headers.
Workaround: To publish the workflow, use cURL or Postman to run an API request with an increased HTTP header size limit.
New - When you use the VMware Cloud Director UI to create a new VM with a placement policy, all virtual machines that are part of the VM group defined in the used placement policy might disappear
When you use the VMware Cloud Director UI to create a new VM that uses a certain placement policy, all virtual machines listed in the VM group that's defined in the used placement policy might disappear from the VM group.
Workaround: When the VMs get deleted from the group, they become non-compliant with the placement policy that you used to create the new VM. To restore the VMs to the group, manually make each of them compliant with the used placement policy.
New - VMware Cloud Director operations, such as powering a VM on and off, take longer to complete
VMware Cloud Director operations, such as powering a VM on or off, take longer to complete. The task displays a Starting virtual machine status and nothing happens.
The jms-expired-messages.logs log file displays an error.
RELIABLE:LargeServerMessage & expiration=
Workaround: None.
New - NSX context profiles disappear after synchronization between VMware Cloud Director and NSX
If a large number of NSX context profiles are created for an edge gateway in VMware Cloud Director, sometimes the NSX context profiles don't sync correctly between VMware Cloud Director and NSX. As a result, some context profiles might get deleted. For information on NSX context profiles in VMware Cloud Director, see Create Custom Application Port Profiles.
New - Attempting to create a new application port profile fails with a duplicate error message
If two organization VDCs that are part of the same organization are backed by different NSX Manager instances, attempting to create an application port profile with the same name on both edge gateways might fail with the error message Application Port Profile already exists in Organization.
New - A VM with IP mode set to DHCP might not be able to connect to an external network
If a VM with IP mode set to DHCP is connected to a vApp network that uses port forwarding, the VM cannot connect to an external network. This happens because in NSX-backed organization VDCs, enabling IP masquerading for a vApp network does not create a corresponding SNAT rule on the vApp edge in NSX to allow outbound access for a VM without a static IP.
Workaround: Add to the vApp network a second vApp with a static IP and an explicit DNAT rule that allows access to the external network.
New - When you move a vApp to another organization VDC, the vApp description is lost
When you move a vApp from one organization VDC to another, the description for the vApp is not preserved.
New - Migrating a VM that is connected to a vSphere-backed external network between resource pools fails
If a VM is connected to an external network which is backed by multiple vSphere networks, and you attempt to migrate the VM between resource pools, the operation fails if the source and destination resource pools are backed by different host clusters and if the destination resource pool does not have access to the external network to which the VM was originally connected.
Workaround: None.
Guest OS customizations like hostname and network do not work for the AlmaLinux OS
If you deploy an AlmaLinux template, VMware Cloud Director ignores the hostname and network configurations even when you force the guest customizations.
Workaround: Edit the /etc/redhat-release file to replace AlmaLinux with CentOS Linux.
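The workaround amounts to a one-line substitution. As a hedged sketch, the command below performs it on a temporary copy of the file with an assumed sample release string; on the appliance VM itself, run the same sed command against /etc/redhat-release.

```shell
# Perform the documented substitution on a copy of the file.
# On a real AlmaLinux VM, target /etc/redhat-release instead.
release=$(mktemp)
echo 'AlmaLinux release 8.5 (Arctic Sphynx)' > "$release"  # sample content (assumed format)
sed -i 's/AlmaLinux/CentOS Linux/' "$release"
cat "$release"  # -> CentOS Linux release 8.5 (Arctic Sphynx)
```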
New - Viewing the load balancer pools for an NSX edge gateway in the VMware Cloud Director UI might fail with a duplicate key error
Viewing the load balancer pools for an NSX edge gateway that uses NSX Advanced Load Balancing fails with a Duplicate key error. This happens because of a failed automated creation of load balancing server pools with pool_backing_id set to null that are not used for any virtual services.
Workaround: Manually remove the server pools with pool_backing_id set to null from the VMware Cloud Director database.
New - Active Directory users cannot use the cell management tool on the VMware Cloud Director appliance
Active Directory users cannot run the cell management tool on the VMware Cloud Director appliance. The cell-management-tool.log file contains the following exception.
Unable to connect to the cell: Invalid credentials. Exiting. | java.lang.SecurityException: Invalid credentials at com.vmware.vcloud.common.jmx.VCloudJMXAuthenticator.authenticate
Workaround: None.
New - Moving a VM to a different provider VDC fails with an Internal Server Error message
If two provider VDCs are backed by different vCenter Server instances and you configure different names for their storage profiles, moving a VM between the provider VDCs fails with the following error.
Internal Server Error
Workaround: None.
New - Running a custom workflow with external validation in the vRO Workflow Execution UI plug-in fails with an Error performing external validation error message
When running a custom workflow with external validation through the vRO Workflow Execution UI plug-in, the process fails with an Error performing external validation error message. The issue occurs because vRealize Orchestrator does not perform validation on the inputs in the custom form in VMware Cloud Foundation.
Workaround: None.
New - The VMware Cloud Director dashboards for flex organization VDCs show incorrect CPU use
For flex organization VDCs, if the Make Allocation pool Org VDCs elastic option is activated, the flex organization VDC dashboard displays incorrect information about the vCPU use. For example, in a flex organization VDC with a default vCPU speed of 1 GHz where the sizing policy defines the vCPU speed as 2 GHz, if you create a VM, the dashboard incorrectly shows the vCPU use as 1 GHz. In the flex allocation model, the VM compute resource allocation depends on the VM sizing policies, and the real vCPU speed is 2 GHz. When the Make Allocation pool Org VDCs elastic option is deactivated, the metrics appear correctly.
Workaround: None.
New - You cannot create VMware Cloud Director VDC templates in VMware Cloud Director service environments
VMware Cloud Director service does not support Virtual Data Center (VDC) templates. You can use VDC templates in environments where the provider VDCs have an NSX network provider type or an NSX Data Center for vSphere network provider type. You cannot use VDC templates in VMware Cloud Director service environments because the provider VDCs have the VMC network provider type.
Workaround: None.
New - VMs become non-compliant after converting a reservation pool VDC into a flex organization VDC
In an organization VDC with a reservation pool allocation model, if some of the VMs have a nonzero reservation for CPU and memory, a non-unlimited configuration for CPU and memory, or both, these VMs become non-compliant after you convert the VDC into a flex organization VDC. If you attempt to make the VMs compliant again, the system applies an incorrect policy for the reservation and limit, and sets the CPU and memory reservations to zero and the limits to Unlimited.
Workaround:
A system administrator must create a VM sizing policy with the correct configuration.
A system administrator must publish the new VM sizing policy to the converted flex organization VDC.
The tenants can use the VMware Cloud Director API or the VMware Cloud Director Tenant Portal to assign the VM sizing policy to the existing virtual machines in the flex organization VDC.
New - Migrating VMs between organization VDCs might fail with an insufficient resource error
If VMware Cloud Director is running with vCenter Server 7.0 Update 3h or earlier, when relocating a VM to a different organization VDC, the VM migration might fail with an insufficient resource error even if the resources are available in the target organization VDC.
Workaround: Upgrade vCenter Server to version 7.0 Update 3i or later.
New - Suspending a VM through the VMware Cloud Director UI results in a partially suspended state of the VM
In the VMware Cloud Director Tenant Portal, when you suspend a VM, VMware Cloud Director does not undeploy the VM, and the VM becomes Partially Suspended instead of Suspended.
Workaround: None.
New - Role name and description are localized in the VMware Cloud Director UI and can cause duplication of role names
The problem occurs because the UI translation does not affect the back end and API. You might create roles with the same names as the translated names, which results in perceived duplicate roles in the UI and conflicts with the API usage of role names when creating service accounts.
Workaround: None.
New - The Customer Experience Improvement Program (CEIP) status is Enabled even after deactivating it during the installation of VMware Cloud Director
During the installation of VMware Cloud Director, if you deactivate the option to join the CEIP, after the installation completes, the CEIP status remains active.
Workaround: Deactivate the CEIP by following the steps in the Join or Leave the VMware Customer Experience Improvement Program procedure.
New - When starting the VMware Cloud Director appliance, the message [FAILED] Failed to start Wait for Network to be Configured. See 'systemctl status systemd-networkd-wait-online.service' for details appears
The message appears incorrectly and does not indicate an actual problem with the network. You can disregard the message and continue to use the VMware Cloud Director appliance as usual.
Workaround: None.
New - After VMware Cloud Director upgrade from version 10.2.2.x, adding a new node to the cluster fails with a file could not be found error
Upgrading VMware Cloud Director changes the location of the certificates.ks file, which causes adding new nodes to fail with the following error.
Error: file could not be found: /opt/vmware/vcloud-director/user.http.pem
Error: invalid input: No valid HTTP SSL certificate provided
Workaround: Update the /opt/vmware/appliance/bin/setupvcd.sh script with the following commands.
#$VCLOUD_HOME/bin/configure --unattended-installation --primary-ip $ip --console-proxy-ip $ip --console-proxy-port-https 8443 -r $responsefile
$VCLOUD_HOME/bin/configure --unattended-installation --primary-ip $ip --console-proxy-ip $ip --console-proxy-port-https 8443 --cert /opt/vmware/vcloud-director/etc/user.http.pem --key /opt/vmware/vcloud-director/etc/user.http.key --consoleproxy-cert /opt/vmware/vcloud-director/etc/user.consoleproxy.pem --consoleproxy-key /opt/vmware/vcloud-director/etc/user.consoleproxy.key -r $responsefile
VMware Cloud Director appliance upgrade fails with an invalid version error when FIPS mode is enabled
For VMware Cloud Director versions 10.3.x and later, when FIPS mode is enabled, VMware Cloud Director appliance upgrade fails with the following error.
Failure: Installation failed abnormally (program aborted), the current version may be invalid.
Workaround:
Before you upgrade the VMware Cloud Director appliance, deactivate FIPS Mode on the cells in the server group and the VMware Cloud Director appliance. See Activate or Deactivate FIPS Mode on the VMware Cloud Director Appliance.
Verify that the /etc/vmware/system_fips file does not exist on any appliance.
Upgrade the VMware Cloud Director appliance.
Enable FIPS mode again.
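The verification step above can be scripted. In the sketch below, check_fips_marker is a hypothetical helper (not a VMware tool); the path it inspects is the marker file named in the step.

```shell
# Hypothetical helper: report whether the FIPS marker file is absent.
# The appliance upgrade should only proceed when the marker does not exist.
check_fips_marker() {
  if [ -e "$1" ]; then
    echo "marker present - deactivate FIPS mode first"
  else
    echo "marker absent - safe to upgrade"
  fi
}
# Run on each appliance before upgrading:
check_fips_marker /etc/vmware/system_fips
```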
After upgrade, configuring an additional cell in the existing server group fails with a validation error
When you upgrade to version 10.3.3.1 and later, VMware Cloud Director modifies the file permissions for the NFS mount directory to 770. The directory permission must be 750, and as a result, the deployment of new cells fails with the following error.
Backend validation of NFS mount failed with: Unexpected ownership and/or permissions on provided NFS share. Expected: vcloud:vcloud with mode: 750. Found: vcloud:vcloud with mode 770
Workaround: After you upgrade all cells, log in to any cell, and change the file permission for the mounted NFS folder to 750 using the following command.
# chmod 750 /opt/vmware/vcloud-director/data/transfer
See Preparing the Transfer Server Storage for the VMware Cloud Director Appliance.
If you upgrade an appliance after changing the permission, you must change the permission to 750 again.
Refreshing the LDAP page in your browser does not take you back to the same page
In the Service Provider Admin Portal, refreshing the LDAP page in your browser takes you to the provider page instead of back to the LDAP page.
Workaround: None.
An attempt to migrate tenant storage fails with an Internal Server Error error message
In the HTML5 UI, using the Migrate Tenant Storage option to migrate all the items stored on a datastore to other datastores in an SDRS cluster fails to migrate the VMs with an error message.
Internal Server Error
Caused by: java.lang.RuntimeException: The operation failed because no suitable resource was found. Out of x candidate hubs: x hubs eliminated because: No valid storage containers found for VirtualMachine "{vm-uuid}". All x available storage containers were filtered out as being invalid.
Workaround: See VMware knowledge base article 88703 (https://kb.vmware.com/s/article/88703?lang=en_US).
Mounting an NFS datastore from a NetApp storage array fails with an error message during the initial VMware Cloud Director appliance configuration
During the initial VMware Cloud Director appliance configuration, if you configure an NFS datastore from a NetApp storage array, the operation fails with an error message.
Backend validation of NFS failed with: is owned by an unknown user
Workaround: Configure the VMware Cloud Director appliance by using the VMware Cloud Director Appliance API.
The synchronization of a subscribed catalog times out while synchronizing large vApp templates
If an external catalog contains large vApp templates, synchronizing the subscribed catalog with the external catalog times out. This happens when the timeout setting is set to its default value of five minutes.
Workaround: Using the manage-config subcommand of the cell management tool, update the timeout configuration setting.
./cell-management-tool manage-config -n transfer.endpoint.socket.timeout -v [timeout-value]
After upgrade to VMware Cloud Director 10.3.2a, opening the list of external networks results in a warning message
When trying to open the list of external networks, the VMware Cloud Director UI displays a warning message.
One or more external networks or T0 Gateways have been disconnected from its IP address data.
This happens because the external network gets disconnected from the Classless Inter-Domain Routing (CIDR) configuration before the upgrade to VMware Cloud Director 10.3.2a.
Workaround: Contact VMware Global Support Services (GSS) for assistance with the workaround for this issue.
In an IP prefix list, configuring any as the Network value results in an error message
When creating an IP prefix list, if you want to deny or accept any route and you configure the Network value as any, the dialog box displays an error message.
"any" is not a valid CIDR notation. A valid CIDR is a valid IP address followed by a slash and a number between 0 and 32 or 64, depending on the IP version.
Workaround: Leave the Network text box blank.
If you use vRealize Orchestrator 8.x, hidden input parameters in workflows are not populated automatically in the VMware Cloud Director UI
If you use vRealize Orchestrator 8.x, when you attempt to run a workflow through the VMware Cloud Director UI, hidden input parameters are not populated automatically in the VMware Cloud Director UI.
Workaround: To access the values of the workflow input parameters, you must create a vRealize Orchestrator action that has the same input parameter values as the workflow that you want to run.
Log in to the vRealize Orchestrator Client and navigate to Library > Workflows.
Select the Input Form tab and click Values on the right-hand side.
From the Value options drop-down menu, select External source, enter the Action inputs, and click Save.
Run the workflow in the VMware Cloud Director UI.
The vpostgres process in a standby appliance fails to start
The vpostgres process in a standby appliance fails to start, and the PostgreSQL log shows an error similar to the following.
FATAL: hot standby is not possible because max_worker_processes = 8 is a lower setting than on the master server (its value was 16).
This happens because PostgreSQL requires standby nodes to have the same max_worker_processes setting as the primary node. VMware Cloud Director automatically configures the max_worker_processes setting based on the number of vCPUs assigned to each appliance VM. If the standby appliance has fewer vCPUs than the primary appliance, this results in an error.
Workaround: Deploy the primary and standby appliances with the same number of vCPUs.
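The constraint behind the failure can be illustrated with a small check. In this hedged sketch, check_worker_processes is a hypothetical helper (not a VMware or PostgreSQL tool), and the values 16 and 8 mirror the sample numbers in the log message above.

```shell
# Hypothetical check mirroring PostgreSQL's hot-standby rule: a standby
# cannot start with a lower max_worker_processes than the primary.
check_worker_processes() {  # usage: check_worker_processes <primary> <standby>
  if [ "$2" -lt "$1" ]; then
    echo "mismatch: standby=$2 < primary=$1"
  else
    echo "ok"
  fi
}
check_worker_processes 16 8  # -> mismatch: standby=8 < primary=16
```

Because VMware Cloud Director derives the setting from the appliance vCPU count, matching the vCPU counts of the primary and standby appliances keeps the two values equal.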
VMware Cloud Director API calls to retrieve vCenter Server information return a URL instead of a UUID
The issue occurs with vCenter Server instances that failed the initial registration with VMware Cloud Director version 10.2.1 and earlier. For those vCenter Server instances, when you make API calls to retrieve the vCenter Server information, the VMware Cloud Director API incorrectly returns a URL instead of the expected UUID.
Workaround: Reconnect the vCenter Server instance to VMware Cloud Director.
Upgrading from VMware Cloud Director 10.2.x to VMware Cloud Director 10.3.x results in a Connection to sfcbd lost error message
If you upgrade from VMware Cloud Director 10.2.x to VMware Cloud Director 10.3, the upgrade operation reports an error message.
Connection to sfcbd lost. Attempting to reconnect
Workaround: You can ignore the error message and continue with the upgrade.
When using FIPS mode, trying to upload OpenSSL-generated PKCS8 files fails with an error
OpenSSL cannot generate FIPS-compliant private keys. When VMware Cloud Director is in FIPS mode and you try to upload PKCS8 files generated using OpenSSL, the upload fails with a Bad request: org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: ... not available: No such algorithm: ... error or a salt must be at least 128 bits error.
Workaround: Deactivate the FIPS mode to upload the PKCS8 files.
Creation of Tanzu Kubernetes cluster by using the Kubernetes Container Clusters plug-in fails
When you create a Tanzu Kubernetes cluster by using the Kubernetes Container Clusters plug-in, you must select a Kubernetes version. Some of the versions in the drop-down menu are not compatible with the backing vSphere infrastructure. When you select an incompatible version, the cluster creation fails.
Workaround: Delete the failed cluster record and retry with a compatible Tanzu Kubernetes version. For information on the incompatibilities between Tanzu Kubernetes and vSphere, see Updating the vSphere with Tanzu Environment.
If you have any subscribed catalogs in your organization, when you upgrade VMware Cloud Director, the catalog synchronization fails
After upgrade, if you have subscribed catalogs in your organization, VMware Cloud Director does not trust the published endpoint certificates automatically. Without trusting the certificates, the content library fails to synchronize.
Workaround: Manually trust the certificates for each catalog subscription. When you edit the catalog subscription settings, a trust on first use (TOFU) dialog prompts you to trust the remote catalog certificate.
If you do not have the necessary rights to trust the certificate, contact your organization administrator.
After upgrading VMware Cloud Director and enabling the Tanzu Kubernetes cluster creation, no automatically generated policy is available and you cannot create or publish a policy
When you upgrade VMware Cloud Director to version 10.3.1 and vCenter Server to version 7.0.0d or later, and you create a provider VDC backed by a Supervisor Cluster, VMware Cloud Director displays a Kubernetes icon next to the VDC. However, there is no automatically generated Kubernetes policy in the new provider VDC. When you try to create or publish a Kubernetes policy to an organization VDC, no machine classes are available.
Workaround: Manually trust the corresponding Kubernetes endpoint certificates. See VMware knowledge base article 83583.
Entering a Kubernetes cluster name with non-Latin characters deactivates the Next button in the Create New Cluster wizard
The Kubernetes Container Clusters plug-in supports only Latin characters. If you enter non-Latin characters, the following error appears.
Name must start with a letter and only contain alphanumeric or hyphen (-) characters. (Max 128 characters).
Workaround: None.
NFS downtime can cause VMware Cloud Director appliance cluster functionalities to malfunction
If the NFS share is unavailable because it is full, has become read-only, and so on, appliance cluster functionalities can malfunction. The HTML5 UI is unresponsive while the NFS is down or unreachable. Other functionalities that might be affected include fencing out a failed primary cell, switchover, promoting a standby cell, and so on. For more information about correctly setting up the NFS shared storage, see Preparing the Transfer Server Storage for the VMware Cloud Director Appliance.
Workaround:
Fix the NFS state so that it is not read-only.
Clean up the NFS share if it is full.
Trying to encrypt named disks in vCenter Server version 6.5 or earlier fails with an error
For vCenter Server instances version 6.5 or earlier, if you try to associate new or existing named disks with an encryption enabled policy, the operation fails with a Named disk encryption is not supported in this version of vCenter Server error.
Workaround: None.
A fast-provisioned virtual machine created on a VMware vSphere Storage APIs Array Integration (VAAI) enabled NFS array, or vSphere Virtual Volumes (VVols) cannot be consolidated
In-place consolidation of a fast-provisioned virtual machine is not supported when a native snapshot is used. Native snapshots are always used by VAAI-enabled datastores, as well as by VVols. When a fast-provisioned virtual machine is deployed to one of these storage containers, that virtual machine cannot be consolidated.
Workaround: Do not enable fast provisioning for an organization VDC that uses VAAI-enabled NFS or VVols. To consolidate a virtual machine with a snapshot on a VAAI or a VVol datastore, relocate the virtual machine to a different storage container.
If you add an IPv6 NIC to a VM and then you add an IPv4 NIC to the same VM, the IPv4 north-south traffic breaks
Using the HTML5 UI, if you add an IPv6 NIC first or configure an IPv6 NIC as the primary NIC in a VM, and then you add an IPv4 NIC to the same VM, the IPv4 north-south communication breaks.
Workaround: First you must add the IPv4 NIC to the VM and then the IPv6 NIC.