VMware Cloud Director Container Service Extension 4.0.1 | 19 JAN 2023

Check for additions and updates to these release notes.
New - VMware Cloud Director Container Service Extension Server 4.0.1
Service providers can now upgrade the VMware Cloud Director Container Service Extension Server from 4.0.0 to 4.0.1 through the CSE Management tab in Kubernetes Container Clusters UI plug-in of VMware Cloud Director.
For instructions on how to upgrade the VMware Cloud Director Container Service Extension Server from 4.0.0 to 4.0.1, see Update the VMware Cloud Director Container Service Extension Server.
You can download VMware Cloud Director Container Service Extension Server 4.0.1 from the VMware Cloud Director Container Service Extension Downloads page.
New - Kubernetes Container Clusters UI Plug-in 4.0.1 for VMware Cloud Director
A new version of Kubernetes Container Clusters UI plug-in is now available to use with VMware Cloud Director.
You can upgrade the Kubernetes Container Clusters UI plug-in before or after you upgrade the VMware Cloud Director Container Service Extension server.
The following steps outline how to upgrade the Kubernetes Container Clusters UI plug-in from 4.0.0 to 4.0.1:
Download the Kubernetes Container Clusters UI plug-in 4.0.1 from the VMware Cloud Director Container Service Extension Downloads page.
In the VMware Cloud Director Portal, from the top navigation bar, select More > Customize Portal.
Select the check box next to Kubernetes Container Clusters UI plug-in 4.0, and click Disable.
Click Upload > Select plugin file, and upload the Kubernetes Container Clusters UI plug-in 4.0.1 file.
Refresh the browser to start using the new plug-in.
For more information, refer to Managing Plug-Ins.
VMware Cloud Director Container Service Extension 4.0.1 additional interoperability
To view the interoperability of VMware Cloud Director Container Service Extension 4.0.1 and previous versions with VMware Cloud Director, and additional product interoperability, refer to the Product Interoperability Matrix.
The following table displays the interoperability between VMware Cloud Director Container Service Extension 4.0.1 and Kubernetes resources.
| Kubernetes Resources | Supported Versions | Documentation |
| --- | --- | --- |
| Kubernetes External Cloud Provider for VMware Cloud Director | 1.3.0, 1.2.0 | |
| Container Storage Interface (CSI) driver for VMware Cloud Director Named Independent Disks | 1.3.0 and patch versions | |
| Kubernetes Cluster API Provider Cloud Director | 1.0.0 | https://github.com/vmware/cluster-api-provider-cloud-director |
Service providers can update Kubernetes resources through the following workflow:
In VMware Cloud Director UI, from the top navigation bar, select More > Kubernetes Container Clusters.
In Kubernetes Container Clusters UI plug-in 4.0.0/4.0.1, select CSE Management > Server Details > Update Server.
In the Update CSE Server window, in the Current CSE Server Components section, update the Kubernetes resources configuration.
Click Submit Changes.
For more information, see Update the VMware Cloud Director Container Service Extension Server.
When two clusters in the same organization have the same name and you attempt to delete one of them, the vApp of the other cluster with the same name is deleted.
Resolution: This bug is fixed in VMware Cloud Director Container Service Extension 4.0.1.
VMware Cloud Director Container Service Extension 4.0 cluster deployment fails when a manual IP is entered for the Control Plane IP.
If a user enters a value for Control Plane IP or Virtual IP Subnet and then deletes the value from the input field, cluster creation fails because the UI sends an empty string instead of a null value.
Resolution: This bug is fixed in VMware Cloud Director Container Service Extension 4.0.1.
Update default NO_PROXY to use cluster.local
The current default settings for NO_PROXY contain k8s.test. The default should instead be cluster.local, to match what Tanzu Kubernetes Grid clusters use.
Resolution: In Kubernetes Container Clusters 4.0.1 UI plug-in, the default value list now includes cluster.local, instead of k8s.test.
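As a purely illustrative sketch of the change, the following shows a no-proxy list that excludes the cluster-internal domain from proxying; the other entries are placeholders rather than the exact defaults shipped by the plug-in:

```sh
# Illustrative only: cluster.local (the Tanzu Kubernetes Grid cluster domain)
# replaces the previous k8s.test entry in the default no-proxy list.
export NO_PROXY="localhost,127.0.0.1,cluster.local"
```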
In the Kubernetes Container Clusters UI, the cluster information page shows Fetching Upgrades while the cluster is in a pending state.
If you submit a Tanzu Kubernetes Grid cluster creation request using the Kubernetes Container Clusters UI plug-in and the VMware Cloud Director Container Service Extension server has not yet started creating the cluster, the cluster's status is Pending. However, the cluster information page in the Kubernetes Container Clusters UI plug-in shows a spinner and Fetching Upgrades for the upgrade availability value. This is incorrect; the upgrade availability value should be a hyphen.
Resolution: In the Kubernetes Container Clusters UI plug-in 4.0.1, this is fixed so that when the cluster's status is pending, the upgrade availability value is a hyphen.
In the Tanzu Kubernetes Grid cluster creation window of the Kubernetes Container Cluster UI plug-in, the input validation help message does not appear when the number of control plane nodes is 0.
In the Tanzu Kubernetes Grid cluster creation wizard, if a user enters 0 for the number of control plane or worker nodes, the input validation help message does not appear to indicate that 0 is an invalid value.
Resolution: The validation help message now appears in Kubernetes Container Cluster UI plug-in 4.0.1.
In the Kubernetes Container Clusters UI plug-in, the CSE Management workflow in a multi-site VMware Cloud Director setup only allows for a single server config entity. This results in CSE Management workflows failing in multi-site environments.
Resolution: The CSE Management workflow in the Kubernetes Container Clusters UI plug-in now fetches only the server config entity that belongs to the site of the currently logged-in user. Each site in a multi-site environment can now create and maintain its own server config entity.
VMware Cloud Director Container Service Extension 4.0 fails to create a Tanzu Kubernetes Grid cluster when a proxy server is configured for downloading the binaries and images from external repositories.
Error message logged: Could not reach archive.ubuntu.com.
Resolution: Proxy settings for Ubuntu package updates are now applied when a proxy is configured.
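As an illustration of what this enables on Ubuntu-based cluster nodes, the following is a minimal sketch of an APT proxy configuration; the file path and proxy address are assumptions for illustration, not values documented for VMware Cloud Director Container Service Extension:

```sh
# Hypothetical example: route apt traffic through the configured proxy so that
# package updates can reach archive.ubuntu.com.
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/90-proxy
Acquire::http::Proxy  "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";
EOF
```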
Misleading and false error log statements in VMware Cloud Director Container Service Extension 4.0.
Resolution: These misleading log statements have been corrected to present more relevant log messages.
Users may encounter authorization errors when executing cluster operations in Kubernetes Container Clusters UI plug-in if a Legacy Rights Bundle exists for their organization.
After you upgrade VMware Cloud Director from version 9.1 or earlier, the system may create a Legacy Rights Bundle for each organization. This Legacy Rights Bundle includes the rights that are available in the associated organization at the time of the upgrade and is published only to this organization. To begin using the rights bundles model for an existing organization, you must delete the corresponding Legacy Rights Bundle. For more information, see Managing Rights and Roles.
In the Administration tab of the service provider portal, you can delete Legacy Rights Bundles. For more information, see Delete a Rights Bundle.

The CSE Management server setup process in the Kubernetes Container Clusters UI plug-in automatically creates and publishes the Kubernetes Clusters Rights Bundle to all tenants. The rights bundle contains all rights that are involved in Kubernetes cluster management in VMware Cloud Director Container Service Extension 4.0.
Updated - Resizing or upgrading Tanzu Kubernetes Grid cluster using kubectl.
After a cluster has been created in the Kubernetes Container Clusters UI plug-in, you can use kubectl to manage workloads on Tanzu Kubernetes Grid clusters.
If you also want to manage the cluster lifecycle, including resizing and upgrading, through kubectl instead of the Kubernetes Container Clusters UI plug-in, complete the following steps (a sketch of the RDE update appears after these steps):
Delete the RDE-Projector operator from the cluster: kubectl delete deployment -n rdeprojector-system rdeprojector-controller-manager
Detach the Tanzu Kubernetes Grid cluster from Kubernetes Container Clusters UI plug-in.
In the VMware Cloud Director UI, in the Cluster Overview page, retrieve the cluster ID of the cluster.
Update the RDE by setting entity.spec.vcdKe.isVCDKECluster to false:
Get the payload of the cluster: GET https://<vcd>/cloudapi/1.0.0/entities/<Cluster ID>
In the payload, update the JSON path entity.spec.vcdKe.isVCDKECluster to false.
Send PUT https://<vcd>/cloudapi/1.0.0/entities/<Cluster ID> with the modified payload. It is necessary to include the entire payload as the body of the PUT operation.
At this point, the cluster is detached from VMware Cloud Director Container Service Extension 4.0.0 and 4.0.1, and it is no longer possible to manage the cluster through VMware Cloud Director Container Service Extension 4.0.0 or 4.0.1. You can now use kubectl to manage, resize, or upgrade the cluster by applying the CAPI YAML, the Cluster API specification, directly.
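For reference, the following is a minimal sketch of the detach workflow using curl and jq. The bearer token, API version header, and use of jq are assumptions for illustration and are not part of the documented procedure; adjust them to your environment.

```sh
# Hypothetical sketch: detach a cluster by setting entity.spec.vcdKe.isVCDKECluster to false.
VCD="vcd.example.com"        # placeholder VMware Cloud Director host
CLUSTER_ID="<Cluster ID>"    # cluster ID from the Cluster Overview page
TOKEN="<bearer token>"       # assumed: an API token with rights on the entity

# Remove the RDE-Projector operator from the cluster.
kubectl delete deployment -n rdeprojector-system rdeprojector-controller-manager

# Get the full payload of the cluster entity.
curl -sk -H "Accept: application/json;version=37.0" \
     -H "Authorization: Bearer ${TOKEN}" \
     "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}" > cluster.json

# Set entity.spec.vcdKe.isVCDKECluster to false in the payload.
jq '.entity.spec.vcdKe.isVCDKECluster = false' cluster.json > cluster-updated.json

# PUT the entire modified payload back to the same endpoint.
curl -sk -X PUT \
     -H "Accept: application/json;version=37.0" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer ${TOKEN}" \
     --data-binary @cluster-updated.json \
     "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}"
```

Depending on your VMware Cloud Director version, the PUT request may also require an If-Match header carrying the ETag returned by the GET call.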