
VMware Cloud Director Container Service Extension 4.0.4 | 5 October 2023 | Build: 22547654

Check for additions and updates to these release notes.

What's New in October 2023

  • The Auto Repair on Errors toggle in the Tanzu Kubernetes Grid cluster creation workflow is deactivated by default in VMware Cloud Director Container Service Extension 4.0.4.

  • After a successful cluster creation, VMware Cloud Director Container Service Extension server deactivates the Auto Repair on Errors toggle automatically.

Documentation

To access the full set of product documentation, go to VMware Cloud Director Container Service Extension.

Upgrade

  • New - VMware Cloud Director Container Service Extension Server 4.0.4.

    Service providers can now upgrade the VMware Cloud Director Container Service Extension Server from 4.0.3 to 4.0.4 through the CSE Management tab in Kubernetes Container Clusters UI plug-in of VMware Cloud Director.

    To upgrade the VMware Cloud Director Container Service Extension server to 4.0.4, service providers must update the Kubernetes Cluster API Provider for VMware Cloud Director version to 1.0.1 in the Patch Version Upgrade workflow.

    Note:

    It is necessary to use Kubernetes Cluster API Provider for VMware Cloud Director 1.0.2 to attach Tanzu Kubernetes Grid 1.6.1 clusters to Tanzu Mission Control. For more information, see Attach Tanzu Kubernetes Grid 1.6.1 clusters to Tanzu Mission Control.

    For instructions on how to upgrade the VMware Cloud Director Container Service Extension Server from 4.0.3 to 4.0.4, see Patch Version Upgrade.

    You can download VMware Cloud Director Container Service Extension Server 4.0.4 from the VMware Cloud Director Container Service Extension Downloads page.

  • New - Kubernetes Container Clusters UI Plug-in 4.0.4 for VMware Cloud Director

    A new version of Kubernetes Container Clusters UI plug-in is now available to use with VMware Cloud Director.

    You can upgrade the Kubernetes Container Clusters UI plug-in before or after you upgrade the VMware Cloud Director Container Service Extension server.

    The following steps outline how to upgrade the Kubernetes Container Clusters UI plug-in from 4.0.3 to 4.0.4:

    1. Download the Kubernetes Container Clusters UI plug-in 4.0.4 from the VMware Cloud Director Container Service Extension Downloads page.

    2. In the VMware Cloud Director Portal, from the top navigation bar, select More > Customize Portal.

    3. Select the check box next to Kubernetes Container Clusters UI plug-in 4.0.3, and click Disable.

    4. Click Upload > Select plugin file, and upload the Kubernetes Container Clusters UI plug-in 4.0.4 file.

    5. Refresh the browser to start using the new plug-in.

    For more information, refer to Managing Plug-Ins.

Compatibility Updates

  • VMware Cloud Director Container Service Extension 4.0.4 supports Tanzu Kubernetes Grid 1.6.1.

  • Upgrade the Kubernetes Cluster API Provider for VMware Cloud Director version from 1.0.0 to 1.0.1 for clusters deployed in VMware Cloud Director Container Service Extension 4.0, 4.0.1 and 4.0.2.

    For VMware Cloud Director Container Service Extension 4.0.4 to support Tanzu Kubernetes Grid 1.6.1, it is necessary to use Kubernetes Cluster API Provider for VMware Cloud Director version 1.0.1.

    For new installations of VMware Cloud Director Container Service Extension 4.0.4, the default Kubernetes Cluster API Provider Cloud Director version is 1.0.1.

    Note:

    It is necessary to use Kubernetes Cluster API Provider for VMware Cloud Director 1.0.2 to attach Tanzu Kubernetes Grid 1.6.1 clusters to Tanzu Mission Control. For more information, see Attach Tanzu Kubernetes Grid 1.6.1 clusters to Tanzu Mission Control.

    For clusters that were deployed using VMware Cloud Director Container Service Extension 4.0, 4.0.1, and 4.0.2, the Kubernetes Cluster API Provider for VMware Cloud Director version in use is 1.0.0. For these clusters to be upgradeable to Tanzu Kubernetes Grid 1.6.1, you must upgrade the Kubernetes Cluster API Provider for VMware Cloud Director version to 1.0.1. Perform this upgrade workflow for every cluster that uses Kubernetes Cluster API Provider for VMware Cloud Director 1.0.0.

    Complete the following steps to perform the upgrade from Kubernetes Cluster API Provider for VMware Cloud Director 1.0.0 to 1.0.1:

    1. Download the cluster kube config:

      1. Log in to VMware Cloud Director, and from the top navigation bar, select More > Kubernetes Container Clusters.

      2. Select a cluster, and in the cluster information page, click Download Kube Config.

        For more information on Kube Config file, refer to the Kubernetes website.

    2. Use kubectl, and enter the following command:

    kubectl --kubeconfig=<path of kubeconfig> patch deployment -n capvcd-system capvcd-controller-manager -p '{"spec": {"template": {"spec": {"containers": [ {"name": "manager", "image": "projects.registry.vmware.com/vmware-cloud-director/cluster-api-provider-cloud-director:1.0.1"} ]}}}}'

    After you run the above command, a new Kubernetes Cluster API Provider for VMware Cloud Director pod is created with the newer image. Once the pod is running, the cluster uses Kubernetes Cluster API Provider for VMware Cloud Director version 1.0.1.
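
    Optionally, you can verify the result by reusing the namespace, deployment, and container names from the patch command above:

    # Print the image of the manager container; expect the 1.0.1 tag.
    kubectl --kubeconfig=<path of kubeconfig> get deployment -n capvcd-system capvcd-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name=="manager")].image}'

    # Confirm that the replacement pod reaches the Running state.
    kubectl --kubeconfig=<path of kubeconfig> get pods -n capvcd-system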

  • If it is necessary to attach Tanzu Kubernetes Grid 1.6.1 clusters to Tanzu Mission Control, use Kubernetes Cluster API Provider for VMware Cloud Director 1.0.2 for new installations of VMware Cloud Director Container Service Extension 4.0.4.

    If clusters must be attached to Tanzu Mission Control, service providers must upgrade the Kubernetes Cluster API Provider for VMware Cloud Director version from 1.0.0 or 1.0.1 to 1.0.2. To complete this upgrade, follow the workflow in Upgrade the Kubernetes Cluster API Provider for VMware Cloud Director version.

    To attach VMware Cloud Director Container Service Extension clusters that were not created using Kubernetes Cluster API Provider for VMware Cloud Director 1.0.2 to Tanzu Mission Control, follow the workaround in Known Issues.

  • Important - In the Kubernetes Container Clusters 4.0.4 UI plug-in, the default Container Storage Interface version is 1.3.2.

    From April 3, 2023, the k8s.gcr.io registry is frozen, which results in Container Storage Interface failures in Kubernetes clusters. Such failures can occur when you attempt to scale a cluster or create a new cluster. Therefore, a new Container Storage Interface patch is released.

    For new installations of VMware Cloud Director Container Service Extension 4.0.2, 4.0.3, and 4.0.4, the default Container Storage Interface version is 1.3.2.

    For upgrades to VMware Cloud Director Container Service Extension 4.0.2, 4.0.3, and 4.0.4, service providers must update the Container Storage Interface image version to avoid failures when creating new clusters. Service providers can perform this task in the Update Configuration workflow in the Server Details tab of the Kubernetes Container Clusters UI plug-in. For more information, see Update the VMware Cloud Director Container Service Extension Server. For more information on the k8s.gcr.io registry freeze, see https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/.
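
    To check which Container Storage Interface image a cluster is currently running, you can use the cluster kube config. This is an illustrative check; exact pod names vary by deployment:

    # List CSI-related container images in the kube-system namespace.
    kubectl --kubeconfig=<path of kubeconfig> describe pods -n kube-system | grep 'Image:.*csi'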

  • VMware Cloud Director Container Service Extension 4.0.4 interoperability updates

    To view the interoperability of VMware Cloud Director Container Service Extension 4.0.4 and previous versions with VMware Cloud Director, and additional product interoperability, refer to the Product Interoperability Matrix.

    The following table displays the interoperability between VMware Cloud Director Container Service Extension 4.0.4 and Kubernetes resources.

    Kubernetes Resource: Kubernetes External Cloud Provider for VMware Cloud Director

    Supported Versions: 1.3.0, 1.2.0

    Documentation: https://github.com/vmware/cloud-provider-for-cloud-director#kubernetes-external-cloud-provider-for-vmware-cloud-director

    Kubernetes Resource: Container Storage Interface driver for VMware Cloud Director Named Independent Disks

    Supported Versions: 1.3.2

    Note: From April 3, 2023, 1.3.2 is the only supported version of the Container Storage Interface for new Tanzu Kubernetes Grid clusters due to the freezing of the k8s.gcr.io registry.

    Documentation: https://github.com/vmware/cloud-director-named-disk-csi-driver#container-storage-interface-csi-driver-for-vmware-cloud-director-named-independent-disks

    Kubernetes Resource: Kubernetes Cluster API Provider Cloud Director

    Supported Versions: 1.0.2, 1.0.1, 1.0.0

    Note: It is recommended to use version 1.0.1 with VMware Cloud Director Container Service Extension 4.0.4. If you want to attach Tanzu Kubernetes Grid clusters to Tanzu Mission Control, use version 1.0.2.

    Documentation: https://github.com/vmware/cluster-api-provider-cloud-director

    Service providers can manually update Kubernetes resources through the following workflow:

    1. In VMware Cloud Director UI, from the top navigation bar, select More > Kubernetes Container Clusters.

    2. In Kubernetes Container Clusters UI plug-in 4.0.4, select CSE Management > Server Details > Update Server > Update Configuration > Next.

    3. In the Current CSE Server Components section, update the Kubernetes resources configuration.

    4. Click Submit Changes.

    For more information, see Update the VMware Cloud Director Container Service Extension Server.

Resolved Issues

  • New - In some systems, a ScriptExecutionError can occur during the cluster creation workflow, and can cause the operation to fail.

    The log will contain strings of the following form:

    kubectl apply -f
    Internal error occurred: failed calling webhook

    The errors indicate that VMware Cloud Director Container Service Extension attempted to install a package before the webhooks for the operator were started. In the fix, installation is reattempted so that the package gets installed after the webhooks are started.

    This issue has been fixed for VMware Cloud Director Container Service Extension 4.0.4.

  • When a force delete attempt of a cluster fails, the ForceDeleteError that displays in the Events tab of the cluster info page does not provide sufficient information regarding the failure to delete the cluster.

    This issue is fixed for VMware Cloud Director Container Service Extension 4.0.3 release.

  • Docker access error occurs in VMware Cloud Director Container Service Extension that has a proxy configured.

    The following error appears in /var/log/cloud-final.err on the ephemeral VM:

    ERROR: failed to create cluster: failed to pull image "kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e": command "docker pull kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e" failed with error: exit status 1 <13>Apr 4 06:13:00 demo01-tkg@Demo01/demo01tkgadmin: Command Output: Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection

    This issue is fixed for VMware Cloud Director Container Service Extension 4.0.3 release.

  • Docker installation fails in some customer regions, which prevents cluster creation and deletion in VMware Cloud Director Container Service Extension.

    The following error appears in /var/log/cloud-final.err on the ephemeral VM:

    ERROR: failed to get docker info: command \"docker info --format 'json .'\" failed with error: exec: \"docker\": executable file not found in $PATH"

    This issue is fixed for VMware Cloud Director Container Service Extension 4.0.3 release. 

  • The cluster creation for multi-control plane or multi-worker node goes into an error state. The Events tab in the cluster details page shows an EphemeralVMError event due to the failure to delete ephemeralVM in VMware Cloud Director.

    The same error events can appear repeatedly if the Auto Repair on Errors setting is activated on the cluster. If the Auto Repair on Errors setting is off, sometimes the cluster can show an error state due to the failure to delete the ephemeralVM in VMware Cloud Director even though the control plane and worker nodes are created successfully.

    This issue occurs in any VMware Cloud Director release or patch release later than, but not including, 10.3.3.3, and in any release or patch release starting with VMware Cloud Director 10.4.1.

    This issue is fixed for VMware Cloud Director Container Service Extension 4.0.3 release.

Known Issues

  • VMware Cloud Director Container Service Extension 4.0.3 Tanzu Kubernetes Grid 1.6.1 clusters that are created with Kubernetes Cluster API Provider Cloud Director 1.0.1 are not compatible with Tanzu Mission Control.

    Workaround:

    1. Download kubeconfig, and run the following command:

      export KUBECONFIG=<path to downloaded kubeconfig>

    2. Access the tkg-metadata configmap, and run the following command:

      kubectl edit cm -n tkg-system-public tkg-metadata

    3. Change type: management to type: workload, and save. You can verify the change with the check below.
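
    A quick verification, assuming the KUBECONFIG from step 1 is still exported:

      kubectl get cm -n tkg-system-public tkg-metadata -o yaml | grep 'type:'
      # Expect: type: workload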

  • In some instances, nodes cannot join clusters. This occurs randomly due to intermittent issues, even when the cluster is in an available state.

    The following error appears in the Events tab of the cluster info page in Kubernetes Container Clusters UI:

    VcdMachineScriptExecutionError with the following details:

    script failed with status [x] and reason [Date Time 1 /root/node.sh: exit [x]]

    Workaround:

    For VMware Cloud Director Container Service Extension 4.0.3, a retry mechanism that uses the retry feature of Cluster API was added to address this issue.

  • An ephemeral VM is created during the cluster creation process, and is deleted by VMware Cloud Director Container Service Extension when the cluster creation process is complete. It is possible that the API request to delete the ephemeral VM can fail.

    VMware Cloud Director Container Service Extension reattempts the deletion of the ephemeral VM for up to 15 minutes. If VMware Cloud Director Container Service Extension still fails to delete the ephemeral VM after reattempting, it leaves the ephemeral VM in the cluster's vApp without deleting it.

    The following error appears in the Events tab of the cluster info page in Kubernetes Container Clusters UI:

    EphemeralVMError with the following details:

    error deleting Ephemeral VM [EPHEMERAL-TEMP-VM] in vApp [cluster-vapp-name]: [reason for failure]. The Epemeral VM needs to be cleaned up manually.

    The reason for failure depends on the stage at which the ephemeral VM deletion failed. Once you observe this notification, it is safe to delete the ephemeral VM from the cluster's vApp in the VMware Cloud Director UI.

    Workaround:

    1. Log in to the VMware Cloud Director Tenant Portal, and from VMware Cloud Director navigation menu, select Data Centers.

    2. In the Virtual Data Center page, select the organization tile, and from the left navigation menu, select vApps.

    3. In the vApps page, select the vApp of the cluster.

    4. In the cluster information page, click the ellipsis to the left of the ephemeral VM, and click Delete.

    However, if the ephemeral VM is not manually cleaned up and a cluster delete request is issued, the delete operation fails. It is then necessary to force delete the cluster:

    1. Log in to VMware Cloud Director, and from the top navigation bar, select More > Kubernetes Container Clusters.

    2. Select a cluster, and in the cluster information page, click Delete.

    3. In the Delete Cluster page, select the Force Delete checkbox, and click Delete.

  • It is not possible to create clusters in VMware Cloud Director Container Service Extension 4.0.x when using a direct organization VDC network with NSX in VMware Cloud Director.

    VMware Cloud Director Container Service Extension 4.0.x clusters do not support this configuration.

  • The Kubernetes Container Clusters UI plug-in storage profile selection form fields do not filter storage policies by entity type.

    The storage profile selection form fields display all storage profiles that are visible to the logged-in user, regardless of entity type, such as VMs, vApps, catalog items, or named disks. The Kubernetes Container Clusters UI should display only storage profiles that are specified for VMs and vApps.

  • In VMware Cloud Director Container Service Extension, the creation of Tanzu Kubernetes Grid clusters can fail due to a script execution error.

    The following error appears in the Events tab of the cluster info page in Kubernetes Container Clusters UI:

    ScriptExecutionTimeout with the following details:

    error while bootstrapping the machine [cluster-name/EPHEMERAL_TEMP_VM]; timeout for post customization phase [phase name of script execution]

    Workaround:

    When this error occurs, it is recommended to activate Auto Repair on Errors from cluster settings. This instructs VMware Cloud Director Container Service Extension to reattempt cluster creation.

    1. Log in to VMware Cloud Director, and from the top navigation bar, select More > Kubernetes Container Clusters.

    2. Select a cluster, and in the cluster information page, click Settings, and activate the Auto Repair on Errors toggle.

    3. Click Save.

    Note:

    It is recommended to deactivate the Auto Repair on Errors toggle when troubleshooting cluster creation issues.

  • In Kubernetes Container Clusters UI plug-in, the cluster delete operation can fail when the cluster status is Error.

    To delete a cluster that is in Error status, it is necessary to force delete the cluster.

    1. Log in to VMware Cloud Director, and from the top navigation bar, select More > Kubernetes Container Clusters.

    2. Select a cluster, and in the cluster information page, click Delete.

    3. In the Delete Cluster page, select the Force Delete checkbox, and click Delete.

  • ERROR: failed to create cluster: failed to pull image failure

    This error occurs in the following circumstances:

    • A user attempts to create a Tanzu Kubernetes Grid cluster using VMware Cloud Director Container Service Extension 4.0, and the creation fails intermittently.

    • An image pull error due to an HTTP 408 response is reported.

    This issue can occur if there is difficulty reaching the Internet from the EPHEMERAL_TEMP_VM to pull the required images.

    Potential causes:

    • Slow or intermittent Internet connectivity.

    • The network IP pool cannot resolve DNS, which results in a docker pull error.

    • The network MTU behind a firewall is set too high and must be lowered.

    To resolve the issue, ensure that no network connectivity issues prevent the EPHEMERAL_TEMP_VM from reaching the Internet. A few basic checks are sketched below.

    For more information, refer to https://kb.vmware.com/s/article/90326.
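
    The following checks can help triage connectivity from the EPHEMERAL_TEMP_VM. This is an illustrative sketch; the interface name eth0 is an assumption and may differ in your environment:

    # Verify DNS resolution of the Docker registry.
    nslookup registry-1.docker.io

    # Verify that the registry endpoint is reachable over HTTPS.
    curl -v https://registry-1.docker.io/v2/

    # Check the MTU of the primary network interface.
    ip link show eth0 | grep mtu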

  • Users may encounter authorization errors when executing cluster operations in Kubernetes Container Clusters UI plug-in if a Legacy Rights Bundle exists for their organization.

    • After you upgrade VMware Cloud Director from version 9.1 or earlier, the system may create a Legacy Rights Bundle for each organization. This Legacy Rights Bundle includes the rights that are available in the associated organization at the time of the upgrade and is published only to this organization. To begin using the rights bundles model for an existing organization, you must delete the corresponding Legacy Rights Bundle. For more information, see Managing Rights and Roles.

    • In the Administration tab in the service provider portal, you can delete Legacy Rights Bundles. For more information, see Delete a Rights Bundle. The CSE Management section of the Kubernetes Container Clusters UI plug-in has a server setup process that automatically creates and publishes the Kubernetes Clusters Rights Bundle to all tenants. The rights bundle contains all rights that are involved in Kubernetes cluster management in VMware Cloud Director Container Service Extension 4.0.

  • Resizing or upgrading a Tanzu Kubernetes Grid cluster using kubectl.

    After a cluster has been created in the Kubernetes Container Clusters UI plug-in, you can use kubectl to manage workloads on Tanzu Kubernetes Grid clusters.

    If you also want to lifecycle manage, resize, and upgrade the cluster through kubectl instead of the Kubernetes Container Clusters UI plug-in, complete the following steps:

    1. Delete the RDE-Projector operator from the cluster:

    kubectl delete deployment -n rdeprojector-system rdeprojector-controller-manager

    2. Detach the Tanzu Kubernetes Grid cluster from Kubernetes Container Clusters UI plug-in.

      1. In the VMware Cloud Director UI, in the Cluster Overview page, retrieve the cluster ID of the cluster.

      2. Update the RDE with entity.spec.vcdKe.isVCDKECluster to false.

        1. Get the payload of the cluster - GET https://<vcd>/cloudapi/1.0.0/entities/<Cluster ID>

        2. Copy the payload and set entity.spec.vcdKe.isVCDKECluster to false.

        3. PUT https://<vcd>/cloudapi/1.0.0/entities/<Cluster ID> with the modified payload. It is necessary to include the entire payload as the body of PUT operation.

      3. At this point, the cluster is detached from VMware Cloud Director Container Service Extension 4.0.0 and 4.0.1, and it is not possible to manage the cluster through those versions. It is now possible to use kubectl to manage, resize, or upgrade the cluster by applying CAPI YAML, the Cluster API specification, directly. A sketch of the GET and PUT steps is shown below.
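
    The following is a minimal sketch of the GET and PUT steps using curl and jq. The host name, cluster ID, and token variables are placeholders, and some VMware Cloud Director versions may require additional headers, such as If-Match; adjust for your environment:

    # Placeholder values; substitute your VMware Cloud Director FQDN, cluster ID, and API token.
    VCD_HOST="<vcd>"
    CLUSTER_ID="<Cluster ID>"

    # 1. Get the full payload of the cluster RDE.
    curl -sk -H "Authorization: Bearer $VCD_TOKEN" "https://$VCD_HOST/cloudapi/1.0.0/entities/$CLUSTER_ID" > rde.json

    # 2. Set entity.spec.vcdKe.isVCDKECluster to false in the copied payload.
    jq '.entity.spec.vcdKe.isVCDKECluster = false' rde.json > rde-updated.json

    # 3. PUT the entire modified payload back as the request body.
    curl -sk -X PUT -H "Authorization: Bearer $VCD_TOKEN" -H "Content-Type: application/json" --data @rde-updated.json "https://$VCD_HOST/cloudapi/1.0.0/entities/$CLUSTER_ID"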

  • Cluster creation fails in VMware Cloud Director Container Service Extension due to invalid GitHub Token with Error: 401 Bad Credentials.

    This is an expected error during cluster creation. If customers set an invalid GitHub access token, the cluster creation fails and the following error appears:

    error creating the GitHub repository client: failed to get GitHub latest version: failed to get repository 
    versions: failed to get repository versions: failed to get the list of releases: GET 
    https://api.github.com/repos/kubernetes-sigs/cluster-api/releases: 401 Bad credentials

    When you configure the VMware Cloud Director Container Service Extension server, enter a valid GitHub access token.
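
    Before you enter the token in the server configuration, you can optionally sanity-check it against the endpoint that the error references; replace <token> with your GitHub access token:

    # A 200 response indicates a valid token; 401 indicates bad credentials.
    curl -s -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer <token>" https://api.github.com/repos/kubernetes-sigs/cluster-api/releases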

  • Policy selection in the VMware Cloud Director Container Service Extension 4 plug-in does not populate the full list of policies after a selection is made, which complicates policy modification.

    When a user selects a sizing policy in the Kubernetes Container Clusters plug-in and they want to change it, the dropdown menu only displays the selected sizing policy, and does not automatically load alternative sizing policies.

    The user has to delete the text manually to allow the alternative sizing policies to appear. This also occurs in the dropdown menus for placement policies and storage policies.

    This is intentional. It is how the Clarity combobox web component works.

    Note: Clarity is the web framework that the VMware Cloud Director UI is built on.

    The dropdown box uses the input text as a filter. When nothing is in the input field, you can see all selections, and the selections filter as you type.

  • When you create a VMware Cloud Director Container Service Extension cluster, a character capitalization error appears.

    In the Kubernetes Container Clusters UI, if you use capital letters, the following error appears:

    • Name must start with a letter, end with an alphanumeric, and only contain alphanumeric or hyphen (-) characters. (Max 63 characters)

    This is a restriction set by Kubernetes. Object names are validated as RFC 1035 labels. For more information, refer to the Kubernetes website.
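
    The following illustrates the rule as an informal shell check; the regular expression mirrors the lowercase letter, digit, and hyphen constraints of an RFC 1035 label:

    # Prints "valid" only if the name satisfies the RFC 1035 label pattern (max 63 characters).
    echo "my-cluster-01" | grep -Eq '^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$' && echo valid || echo invalid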

  • Kubernetes Container Clusters UI plug-in 4.0 does not interoperate with other Kubernetes Container Clusters UI plug-in versions, such as 3.5.0.

    The ability to operate these two plug-ins simultaneously without conflict is a known VMware Cloud Director UI limitation. You can only have one plug-in activated at any given time.

  • When a node of the cluster is deleted due to failure in vSphere or other underlying infrastructure, VMware Cloud Director Container Service Extension does not inform the user, and it does not auto-heal the cluster.

    When the node of a cluster is deleted, basic cluster operations, such as cluster resize and cluster upgrade, continue to work. The deleted node remains in a deleted state and is still included in computations of the cluster size.

    1. Download the Kubeconfig of the cluster.

    2. Use the following command to delete the machine that continues to use the deleted node configuration:

    kubectl --kubeconfig=<path to downloaded kubeconfig> get machines -A
    # Match the machine name of the deleted node, and note its namespace.
    kubectl -n <namespace name from above> --kubeconfig=<path to downloaded kubeconfig> delete machine <machine name>
    # Wait for the machine to be deleted.

    The above command deletes the machine, and CAPVCD automatically creates a new machine.
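
    Optionally, you can watch the replacement machine being created with the same kubeconfig:

    kubectl --kubeconfig=<path to downloaded kubeconfig> get machines -A -w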

  • VMware Cloud Director Container Service Extension fails to deploy clusters with TKG templates that have an unmodifiable placement policy set on them.

    1. Log in to the VMware Cloud Director Tenant Portal as an administrator.

    2. Click Libraries > vApp Templates.

    3. In the vApp Templates window, select the radio button to the left of the template.

    4. In the top ribbon, click Tag with Compute Policies.

    5. Select the Modifiable checkboxes, and click Tag.

  • In VMware Cloud Director 10.4, service providers are unable to log in to the VMware Cloud Director Container Service Extension virtual machine by default.

    In VMware Cloud Director 10.4, after you deploy the VMware Cloud Director Container Service Extension virtual machine from the OVA file, the following two checkboxes in the VM settings page are not selected by default:

    • Allow local administrator password

    • Auto-generate password

    It is necessary to select these checkboxes so that providers can log in to the VMware Cloud Director Container Service Extension virtual machine in the future to perform troubleshooting tasks.

    1. Log in to VMware Cloud Director UI as a service provider, and create a vApp from the VMware Cloud Director Container Service Extension OVA file. For more information, see Create a vApp from VMware Cloud Director Container Service Extension server OVA file.

    2. Once you deploy the vApp, and before you power it on, go to VM details > Guest OS Customization, and select Allow local administrator password and Auto-generate password.

    3. After the vApp update task finishes, power on the vApp.

  • Fast provisioning must be deactivated in Organization VDC in order to resize disks.

    1. Log in to VMware Cloud Director UI as a provider, and select Resources.

    2. In the Cloud Resources tab, select Organization VDCs, and select an organization VDC.

    3. In the organization VDC window, under Policies, select Storage.

    4. Click Edit, and deactivate the Fast provisioning toggle.

    5. Click Save.

  • When you log in as a service provider, after you upload the latest UI plug-in, the CSE Management tab does not display.

    Deactivate the previous UI plug-in that is built into VMware Cloud Director.

    1. Log in to VMware Cloud Director UI as a provider, and select More > Customize Portal.

    2. Select the check box next to the names of the target plug-ins, and click Enable or Disable.

    3. To start using the newly activated plug-in, refresh the Internet browser page.

    Note:

    If there are multiple activated plug-ins with the same name or ID but different versions, the lowest version plug-in is used. Therefore, activate only the highest version plug-in, and deactivate all other versions.

    For more information on managing plug-ins, see Managing Plug-Ins.
