
Updated on: 8 DEC 2021

VMware Telco Cloud Automation | 06 APR 2021 | R146 | Build 17830999

 

What's in the Release Notes

The release notes cover what's new in this release, important actions and notes, the CaaS upgrade checklist, security fixes, resolved issues, and known issues.

What's New

This section contains information about the new features of VMware Telco Cloud Automation for this release.

Interoperability Matrix for VMware Telco Cloud Automation 1.9

Product                              Supported Versions
VMware vSphere                       7.0 U2
VMware Integrated OpenStack          7.0.1
VMware Tanzu Kubernetes Grid         1.3
VMware Cloud Director                10.2
Kubernetes                           1.17.16, 1.18.16, 1.19.8, 1.20.4
VMware NSX-T                         3.1.1
VMware vRealize Orchestrator         8.3

Components Deployed Through Infrastructure Automation

Product                              Version
Cloud Builder                        4.2
VMware Telco Cloud Automation        1.9.0
VMware NSX-T                         3.1.0
vCenter Server                       7.0 U1
VMware ESXi                          7.0 U1d
vRealize Orchestrator                8.3
vRealize Log Insight                 8.3

Infrastructure Automation

  • Certificate Management
    • Register a Certificate Authority (CA) and generate a Certificate Signing Request (CSR) for the domain.
    • Support for Self-signed CA through the Infrastructure Automation process.
  • Support for Management and Workload domains
    • VMware Telco Cloud Automation does not create a separate SDDC (vCenter Server + VMware NSX + VMware vSAN) for management appliances called the Management domain. Instead, the vCenter Server and VMware NSX appliances are deployed as VMs in the Management domain. The ESXi hosts of the Workload domain remain fully available for running customer workloads.
  • Support for a pre-deployed domain
    • VMware Telco Cloud Automation supports previous deployments that were not done through Infrastructure Automation. You can now import a pre-deployed vCenter Server + VMware NSX environment or just a vCenter Server environment to Infrastructure Automation. You can then deploy new clusters or cell site groups in this environment.
  • VMware vSAN Network File System (vSAN NFS) is optional on the Compute cluster
    • VMware vSAN is now an optional component in the Infrastructure Automation process for Workload clusters. However, it is mandatory on Management clusters.
  • Support for DHCP Pool for application networks.
    • VMware Telco Cloud Automation supports DHCP on application networks that are created by Infrastructure Automation if VMware NSX is enabled for the domain. This satisfies the requirement that the Kubernetes Management network be created through Infrastructure Automation and provide DHCP.

CaaS Infrastructure and CNFs

  • VMware Tanzu Kubernetes Grid 1.3 Support
    • VMware Telco Cloud Automation now enables you to deploy Kubernetes clusters based on VMware Tanzu Kubernetes Grid 1.3.
    • You can also migrate existing deployed Kubernetes clusters to VMware Tanzu Kubernetes Grid 1.3.
    • The haproxy-based VMware Tanzu Kubernetes Grid 1.1.3 clusters can now be migrated to VMware Tanzu Kubernetes Grid 1.3 clusters seamlessly.
  • CaaS Node Customization (Late-Binding)
    • Design a CNF with infrastructure requirements customizations such as custom packages, network adapters, and kernels using the GUI designer.
    • Enhanced Central CLI (CCLI) that supports the vmconfig and nodeconfig operator CLIs.
  • Schema validation for a CNF catalog
    • VMware has introduced a schema validation feature for validating the Network Function and Network Service schema when onboarding a catalog. This feature is required for custom extensions.
  • Machine Health Check for VMware Tanzu Kubernetes Grid Clusters
    • The Machine Health Check feature provides node health monitoring and node auto-repair for VMware Tanzu Kubernetes Grid clusters.
    • Machine Health Check monitors the node pools for any unhealthy nodes and tries to remediate by recreating them.
    • Machine Health Check can now be configured per node pool.  
  • Maintenance Mode for VMware Tanzu Kubernetes Grid Clusters
    • You can now place node pools in the maintenance mode when performing maintenance activities.
    • This option prevents Machine Health Check from remediating the node pools during downtime.
  • Improved Precision Time Protocol (PTP) Support
    • You can now customize PTP from the Network Function Catalog. VMware Telco Cloud Automation updates the PTP configuration on the target Worker nodes before instantiating the Network Function (late-binding).
    • You can configure PTP services such as ptp4l and phc2sys through the CSAR file (see the sketch after this list).
  • Cancel and Retry a Cluster Creation Operation
    • You can now Abort an ongoing cluster creation operation or retry a failed cluster creation operation.
    • The Abort operation cleans up used resources. It is not supported on Management clusters.
    • The Retry operation resumes the CaaS cluster creation operation from the last checkpoint that is successfully completed.
  • Copy the specifications and deploy new CaaS clusters from an existing cluster.
  • Embedded Helm 3.x support.
    • Helm 3.x is now embedded in Kubernetes clusters and the option to select Helm 3.x is removed from cluster templates.
  • Referencing support for custom workflows.
    • VMware Telco Cloud Automation improves the usage of pre-instantiation workflow outputs. They can now be used as inputs for custom workflows as well.
  • The General Info tab provides detailed information about CNF instances.
  • Remote logging for CNFs
    • You can now forward binary logs and core dumps from CNFs to a remote MinIO server (a high-performance object storage service for Kubernetes).
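
The following fragment is a minimal sketch of how such a PTP customization might look in the CNF descriptor (NFD.yaml). It assumes the infra_requirements > node_components > ptp4l structure that the ptp4l.conf workaround later in these notes describes; the surrounding keys, the phc2sys entry, and the file paths are illustrative assumptions, not an exact schema reference.

    # Hypothetical excerpt from NFD.yaml (keys follow the structure referenced
    # in the ptp4l.conf workaround under Known Issues; verify against your CSAR).
    infra_requirements:
      node_components:
        ptp4l:
          source: file                               # 'input' prompts for the file at instantiation
          content: ../Artifacts/scripts/ptp4l.conf   # ptp4l configuration packaged in the CSAR
        # phc2sys is assumed to be declared in a similar way (not verified here)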

VIMs, VNFs, and Network Service Lifecycle Management

  • Support for vApp templates for VMware Cloud Director (VCD)-based environments:
    • When instantiating a VNF, you can now use vApp Templates as images for a VMware Cloud Director-based cloud. The VIM does not require vCenter access.
    • VMware Telco Cloud Automation is now VMware Cloud Director-aware. You can now select a specific VMware Cloud Director catalog during the VNF instantiation.
    • The VIM admin does not require access to VMware vCenter.
    • There is a limitation on the number of virtual machines per vApp template: a vApp template can contain only one virtual machine.
    • VMware Telco Cloud Automation retains backward compatibility with vCenter VM templates. However, this support will be deprecated in a future release.
  • You can now configure scaling aspects and instantiation levels for the VDU instances in a VNF using a GUI designer.
    • You can now view the Scale Policies of VNF catalogs from the VMware Telco Cloud Automation web interface.
  • Schema validation for VNF catalog and Network Service TOSCA descriptors.
    • VMware Telco Cloud Automation performs a TOSCA schema validation when onboarding VNF or Network Service packages.
    • This feature reduces instantiation errors and assists network service designers in onboarding their applications.
  • Referencing support for custom workflows.
    • VMware Telco Cloud Automation improves the usage of pre-instantiation workflow outputs. They can now be used as inputs for custom workflows as well.
  • If the virtual infrastructure inventory information is not synchronized between VMware Telco Cloud Automation Control Plane (TCA-CP) and VMware Telco Cloud Automation Manager, you can now initiate partial sync or full sync.
  • The General Info tab provides detailed information about VNF instances.

VMware Telco Cloud Automation Platform Robustness and Other Improvements

  • Licensing Updates
    • Evaluation licenses are no longer limited by the number of managed VNFs and CNFs; they are now time-based. The default evaluation time is 120 days.
  • Enhanced SOL Compliance
    • Rebased the API to support SOL interfaces version 2.
    • Rebased the UI to support SOL interfaces version 2.
  • You can now reboot an appliance from the Appliance Management user interface.
  • You can now update the admin user password using the Appliance Management user interface.
  • You can now select specific VMware Tanzu Kubernetes Grid clusters to collect logs.
  • The Network Function instances are now paginated. This enhances the usability of the Network Function inventory for large deployments.
  • Enhanced Tag Support
    • Enhanced flexibility and isolation among users based on vendors, products, and more.
    • Using RBAC filtering based on tags, you can now group objects and filter users according to business attributes.
  • VMware Tanzu Kubernetes Grid 1.3 increases the minimum node size requirement.

Important Actions and Notes

  • CaaS Upgrades for VMware Tanzu Kubernetes Grid Clusters deployed in VMware Telco Cloud Automation version 1.7
    • Starting from VMware Telco Cloud Automation 1.9 (which adopts VMware Tanzu Kubernetes Grid 1.3.0), VMware Tanzu Kubernetes Grid deprecates the haproxy API server load balancer for Workload clusters. The migration from haproxy to kube-vip is automated; the haproxy IP address was retrieved from the DHCP server. However, because kube-vip requires a static IP address, ensure that you remove the endpoint IP address from the DHCP server after upgrading the Workload cluster.
  • Ensure that you import the VIO certificate manually before and after upgrading to VMware Telco Cloud Automation 1.9.
    • Starting from version 1.9, VMware Telco Cloud Automation performs stricter checks for certificates of the components it interfaces with. These checks impact the existing TCA-CP deployments for VIO.
  • Network Service > Heal: For ensuring compliance with ETSI SOL APIs, the Custom Heal action is discontinued starting from VMware Telco Cloud Automation 1.9. Other options such as HEAL_RESTORE, HEAL_QOS, HEAL_RESET, and PARTIAL_HEALING are supported.
  • It is recommended to install ESXi 7.0 U2 on the RAN site. At present, the upgrade from ESXi 7.0 U1 to U2 is impacted. For details, see KB https://kb.vmware.com/s/article/83107?lang=en_US.
  • Only VMware Telco Cloud Automation-certified Helm versions are recommended during cluster creation. Support for all other Helm versions has been deprecated from VMware Telco Cloud Automation version 1.9.
    For VMware Telco Cloud Automation version 1.9, Helm version 2.17.0 is required only for CNFs with v2 Helm charts. By default, Helm v3 is embedded in the clusters deployed through VMware Telco Cloud Automation version 1.9. There is no requirement to specify Helm v3 explicitly within the cluster template.
  • Multiple VMware Tanzu Kubernetes Grid Virtual Machine Templates:
    VMware Telco Cloud Automation version 1.9 supports two different versions of VMware Tanzu Kubernetes Grid virtual machine templates:
    • Minimum disk size of 30 GB (Recommended) - To be used for any fresh CaaS deployments or for Kubernetes cluster upgrades where the minimum disk size is at least 30 GB.
    • Minimum disk size of 20 GB - To be used to upgrade only those Kubernetes clusters that have at least one node with a storage size of less than 30 GB.
      Note: To enhance user experience during Cluster Upgrades, VMware Telco Cloud Automation recommends renaming the VMware Tanzu Kubernetes Grid virtual machine templates in a way that they can be identified easily.
  • Important: It is recommended to deploy clusters with each node having a minimum disk size of 50 GB.
  • Deprecation Notice for Clusters with Storage Less Than 30 GB:
    • VMware Telco Cloud Automation version 1.9 deprecates Kubernetes clusters deployed with storage space that is less than 30 GB. The support for upgrading such clusters will be removed in a future release. It is recommended to deploy alternate clusters with larger storage allocated per node, and retire any clusters that have nodes with storage space that is less than 30 GB.
    • For Kubernetes clusters that contain master nodes with more than 30 GB and only specific node pools with less than 30 GB, you can recreate those node pools with a higher storage allocation.
    • This impacts those Kubernetes clusters that were deployed in VMware Telco Cloud Automation versions 1.7 or 1.8.

Static IP Address Requirement for Kubernetes Control Plane

A set of static virtual IP addresses must be available for all the clusters that you create, including both Management and Tanzu Kubernetes Grid clusters.

  • Every cluster that you deploy to vSphere requires one static IP address for Kube-Vip to use for the API server endpoint. You specify this static IP address when you deploy a management cluster. Make sure that these IP addresses are not in the DHCP range but are in the same subnet as the DHCP range. Before you deploy management clusters to vSphere, make a DHCP reservation for Kube-Vip on your DHCP server. Use an auto-generated MAC Address when you make the DHCP reservation for Kube-Vip so that the DHCP server does not assign this IP to other machines.
  • Each control plane node of every cluster that you deploy requires a static IP address. This includes both Management clusters and Tanzu Kubernetes Grid clusters. These static IP addresses are required in addition to the static IP address that you assign to Kube-Vip when you deploy a management cluster. To make the IP addresses that your DHCP server assigned to the control plane nodes static, you can configure a DHCP reservation for each control plane node in the cluster, after you deploy it. For instructions on how to configure DHCP reservations, see your DHCP server documentation.

For more information, see the VMware Tanzu Kubernetes Grid documentation at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html.

CaaS Upgrade Checklist

To help ensure a successful upgrade to VMware Telco Cloud Automation 1.9, review the following checklists.

Pre Upgrade

Analyze the following points in the existing deployment:

  1. Note down the deployment specifications for all the Kubernetes clusters that were deployed as part of VMware Telco Cloud Automation version 1.8 or earlier.
  2. Placement - Compare the deployment and ensure that the virtual machine paths of the cluster such as resource pools and virtual machine folders exist. Also, ensure that the clusters are placed within the folders specified in the deployment specification.
  3. Networks - Compare the deployment and ensure that the cluster virtual machines are attached to the correct networks that were selected during cluster deployment.
  4. Storage - Verify that the minimum storage size for each Master and Worker node within the cluster is at least 30 GB. If the storage size for the Master and Worker nodes is less than 30 GB, you must use a separate VM template for upgrading these clusters. 

Upgrade

  1. Upgrade VMware Telco Cloud Automation Manager and all its registered VMware Telco Cloud Automation Control Plane (TCA-CP) nodes to version 1.9 together.
    • After upgrading VMware Telco Cloud Automation Manager to version 1.9 and before performing any operations on the user interface, ensure that all the registered TCA-CP nodes are upgraded to version 1.9.
  2. Force-refresh the VMware Telco Cloud Automation Manager user interface after upgrading to version 1.9.

Post Upgrade

Infrastructure:

  1. Upgrading Kubernetes clusters that contain Master and Worker nodes with less than 30 GB of storage space (disk size):
    • Important - While upgrading, ensure that the VMware Tanzu Kubernetes Grid virtual machine template has 20 GB of storage space. This is a critical requirement because using the wrong template can cause an upgrade failure and cluster loss with no recovery. You may have to recreate the cluster in such a scenario.
  2. Upgrading Kubernetes clusters that contain Master and Worker nodes with greater than 30 GB of storage space (disk size):
    • Important - While upgrading, ensure that the VMware Tanzu Kubernetes Grid virtual machine template has 30 GB of storage space. Use the virtual machine template that is recommended by VMware.

Customization and CNF Lifecycle Management:

  1. The CSARs that worked in version 1.8 may not work after the upgrade as additional validations are introduced in version 1.9:
    • Reason: In version 1.8 and previous releases, VMware Telco Cloud Automation ignored invalid and erroneous sections in the CSAR.
    • Workaround: Use the CSAR Designer in the VMware Telco Cloud Automation user interface to update or correct the CSAR, or create a new CSAR.
  2. You may experience stability issues with Columbiaville NICs when using vfio-pci for binding DPDK devices. The Worker nodes may receive IOMMU faults.
    • Workaround: Use igb_uio instead of vfio-pci.
  3. The recommended Photon kernel versions for Workload nodes are Linux-4.19.177-2.ph3 and Linux-rt-4.19.177-2.ph3 (for nodes that require customization). Ensure that you update the relevant sections within the CSAR / infra-requirements file.
  4. To use the improved automation features for PTP, Tuned, stalld, and so on, update the relevant sections of the CNF CSAR file. A sketch of such an update follows this list.
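
As a rough illustration of items 3 and 4 above, the kernel requirement could be expressed in the infra-requirements section of the CSAR along the following lines. The key layout (kernel, kernel_type, name, version) is an assumption for illustration only; verify the exact schema against your existing CSAR before editing it.

    # Hypothetical infra-requirements fragment; key names are assumptions.
    infra_requirements:
      node_components:
        kernel:
          kernel_type:
            name: linux-rt                 # use 'linux' for nodes that do not need the real-time kernel
            version: 4.19.177-2.ph3        # recommended Photon kernel version from item 3 above
        # PTP, Tuned, and stalld settings (item 4) are assumed to be declared
        # under node_components as well.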

Security Fixes

The following security issues were fixed in VMware Telco Cloud Automation version 1.9.

  1. OpenSSL is now upgraded to version 1.1.1k. This upgrade fixes CVE-2021-3449, where an OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client.
  2. VMware Telco Cloud Automation allows specifying a log-on disclaimer. With this feature, administrators can now specify disclaimers and messages to be displayed when logging in to the Web UI and SSH sessions.
  3. Improper certificate validation. Fixed improper certificate validation for VMware Integrated OpenStack connections.
  4. Web server security hardening. The response from the login API is hardened to avoid potential account enumeration.
  5. Improved validation of the backup bundle. Validation of the backup bundle is improved during a restore operation.
  6. Improved processing of uploaded files. Processing of uploaded files is hardened to prevent malicious or malformed archives.
  7. Web server security hardening. The web server is hardened to prevent a path traversal vulnerability.
  8. Improved processing of uploaded files. Processing of uploaded files is hardened to prevent path traversal.
  9. Improved validation of the upgrade bundle. Validation of the upgrade bundle is improved.
  10. Sudo is upgraded to version 1.8.32. This fixes a heap-based buffer overflow in Sudo (CVE-2021-3156).
  11. XML security hardening. XML parsing is hardened to prevent XML External Entity injection.
  12. Infrastructure Automation security hardening. The Appliance Password field is removed from the downloaded configuration spec for any fully provisioned domain.

 

Resolved Issues

  • Incorrect version displays during one of the intermediate upgrade steps

    During an upgrade, the VMware Telco Cloud Automation Appliance Manager UI displays the version as 4.8.0 instead of 1.8.0.

    However, this is a known issue and appears only in an intermediate step. After a successful upgrade, the correct version 1.8.0 is displayed.


  • NSX overlay network created by Infrastructure Automation uses an auto-assigned Transport Zone.

    While creating an overlay application network, the Transport Zone (TZ) is auto-assigned. This causes issues if the wrong TZ is picked.

     

  • Kubernetes Cluster node management IP address issue.

    Kubernetes Cluster node management IP Address might not be visible in VMware Telco Cloud Automation after NodePool / Cluster upgrade (Networking data path is not impacted by this issue).

  • When deleting a node pool in "create failed" state, it fails with "Node Pool Deletion timed out for node".

    Node Pool creation might time out in the TCA UI in certain cases. The task is not complete on vSphere, so deleting the node pool might cause conflicts and result in errors.

  • Generating a tech support log bundle from a TCA-CP node takes more than 20 minutes.

    Tech Support bundle generation on TCA-CP takes more than 20 minutes when there are a lot of Kubernetes clusters deployed.

     

  • Harbor partner entry does not automatically associate with Kubernetes cluster if Harbor information is provided during cluster creation.

    During CNF instantiation, the Harbor repository is listed in the drop-down menu even after the Harbor configuration was provided during Kubernetes cluster creation.

     

  • Designing a new catalog with source files from 1.7 fails.

    VMware Telco Cloud Automation 1.8 has aligned with the SOL001 way of specifying the NFD descriptor file. In version 1.7 (and prior), interfaces, infra_requirements, and properties were duplicated and present in two places for the NF node. Starting with 1.8, these must be specified only under the node_templates section. The updated Catalog Designer in 1.8 designs catalogs only in this manner.
    This behavior is also occasionally seen when editing catalogs defined in 1.7 (or prior) and saving them as a new catalog.
    Error observed: descriptor_id should be same in substitution_mappings and node_templates

  • CNF instantiation fails when CNF CSAR contains a tuned.conf file larger than 1 KB

    While instantiating a CNF which has Node Customization enabled with tuned configurations, if the packaged tuned.conf file is greater than 1 KB in size, the instantiation fails.

  • Instantiating a 1.7 CNF with tuned customizations fails with the error: 'Node Pool customization is blocked'

    VMware Telco Cloud Automation 1.8 changes the way tuned requirements and customizations are specified in the CNF CSAR (see the Upgrade Considerations section).

  • Grant validations fail when instantiating VNFs on VMware Cloud Director-based Clouds that are backed by multiple clusters.

    Grant validations fail for deployments on VMware Cloud Director environments that have Provider VDCs backed by multiple clusters or Resource Pools.

  • VNF deployments fail when instantiating VNFs on VMware Cloud Director-based Clouds that are backed by VDS-based NSX-T.

    VM clone operations fail for VNF deployments on VMware Cloud Director environments whose NSX-T is backed by VDS-based switches in vCenter.

Known Issues

The known issues are grouped as follows.

RBAC
  • Infrastructure LCM privilege user cannot delete a Kubernetes cluster without the Virtual Infrastructure Admin privilege.

    A user with only Infrastructure LCM Privilege cannot delete a Kubernetes cluster deployed through VMware Telco Cloud Automation.

    Workaround: Add the Virtual Infrastructure Admin privilege.

  • Editing a Workload cluster fails if the user does not have access to the corresponding Management cluster.

    A VMware Tanzu Kubernetes Grid Admin user is not able to edit the Kubernetes cluster configuration when the advanced filter is applied to the Kubernetes cluster instance name.

    Workaround: To manage a single Kubernetes Workload cluster as a VIM, a user must have at least Read-Only access to the corresponding Kubernetes Management cluster. For VIM Read-Only access on the Kubernetes Management cluster, the user can create a new permission with the Virtual Infrastructure Auditor role and the Management cluster as the VIM filter.

  • CNF Instantiation with Node Customization does not require Infrastructure LCM / Virtual Infrastructure Admin privileges.

    While instantiating a CNF with node customization, a user with only VIM Consume privileges can perform this operation.

    Ideally, Infrastructure LCM / Virtual Infrastructure Admin privileges should be required.

Appliance Management
  • NSX-V certificates must be manually re-imported to TCA-CP environments

    Starting with VMware Telco Cloud Automation 1.9, you must import certificates into VMware Telco Cloud Automation when adding NSX-V. Because this is not mandatory in version 1.8, this issue impacts upgrades from version 1.8 to version 1.9. You must import the NSX-V certificate manually into the TCA-CP environments where NSX-V is managed, either before or after upgrading to version 1.9.

    Workaround:

    1. Log in to the VMware Telco Cloud Automation web interface.
    2. Navigate to TCA-CP → Appliance Management → Administration → Certificate → Trusted CA Certificate.
    3. Import the NSX-V certificate by providing the URL.
  • You cannot apply static routes to certain subnets.

    You cannot apply static routes to the subnets 172.17.0.0/16 and 198.18.0.0/24 on VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation Control Plane appliances.

  • Clicking the Activate button for a non-activated appliance opens a blank page.

    Clicking the Activate button on the banner for VMware Telco Cloud Automation appliances that have not been activated during the initial setup opens a blank page.

    Workaround: Go to Configuration > Licensing and activate the appliance.

Harbor Registration
  • Multiple Harbor Support

    VMware Telco Cloud Automation does not support registering multiple Harbor systems with a single VIM.

Infrastructure Automation
  • vRealize Log Insight (vRLI) is not shown as overridden when a GET call is made

    When you disable vRLI and make a GET call, vRLI is not shown as disabled but shows the default settings.

  • DNS records for all components are mandatory for configurations and deployments to work correctly.

    If you do not create DNS records when deploying certain appliances or components, then those configurations are not applied.

    Workaround: Ensure that you create DNS records for all appliances or components, even if you do not deploy them.

  • Multiple Distributed Virtual Switches (DVS) are not supported while connecting cell site Hosts.

    Currently, there is no option to select a DVS for networks.

  • When TCA-CP is built using the light-weight installer, the deployment may hang or fail after a while.

    The deployment times out or fails after 8 hours.

    Workaround:

    Option 1: Restart the VMware Telco Cloud Automation Manager appliance after TCA-CP is installed.

    Option 2: It is recommended to use the full OVA file for deployments.

Cluster Automation (CaaS Infrastructure)
  • NIC ordering issues with the Network Functions that are onboarded in VMware Telco Cloud Automation 1.7 or earlier.

    On Network Functions with node pools having multiple VMXNET networks that were onboarded in VMware Telco Cloud Automation version 1.7 or earlier, there are some NIC ordering issues.

    Workaround:

    1. Terminate or delete the Network Function.
    2. Delete the node pool and re-create it with the network labels.
    3. Instantiate the Network Function again.
  • Unable to instantiate a Network Function because SR-IOV PFs are reported as unavailable even though SR-IOV PFs are available on the host.

    Root Cause: The cluster was created from VMware Telco Cloud Automation, and SR-IOV was enabled on the hosts after the Kubernetes cluster was created.

    Workaround: Run the following API on VMware Telco Cloud Automation Manager and re-instantiate the Network Function.

    PUT: /hybridity/api/infra/k8s/clusters/<workloadclusterId>/esxinfo
      {
      }

  • If you are working with AMD-based hardware and want to configure VFIO-PCI drivers through Node Customization, change the VM hardware version by running the following Global Settings API on VMware Telco Cloud Automation Manager.

    This is a global setting and will apply to all deployments and customizations going forward.

    PUT: https://<TCA-Manager-IP>/admin/hybridity/api/global/settings/InfraAutomation/vfioPciHardwareVersion
    {
        "value": "18"
    }

  • Upgrading a cluster with the wrong VMware Tanzu Kubernetes Grid template fails and blocks any further upgrades.

    For clusters that have nodes with less than 30 GB of storage space, the upgrade fails if you use the VMware Tanzu Kubernetes Grid virtual machine template with a 30 GB storage capacity.

    Ensure that you use the 20 GB virtual machine template for upgrading. 

    VMware Telco Cloud Automation recommends that users rename the VMware Tanzu Kubernetes Grid virtual machine templates so that they can be identified easily.

  • Scale-out operation on the Master nodes fails.

    This issue occurs because the scale-out operation times out.

  • Upgrading a Workload cluster fails due to add-on failures.

    This issue occurs while applying add-ons due to nodeconfig-operator timeouts during Workload cluster upgrades.

    Workaround:

    1. SSH into TCA-CP.
    2. Get the cluster UUID by running the "kbsctl show workloadclusters" command.
    3. Run the following APIs by replacing CLUSTER_UUID with the UUID from above.
      1. curl -X GET "http://127.0.0.1:8888/api/v1/workloadcluster/CLUSTER_UUID/addon" > addon.json
      2. Add the Harbor password to addon.json:
        "externalHarborPwd": "harbor_password"
      3. curl -X PUT "http://127.0.0.1:8888/api/v1/workloadcluster/CLUSTER_UUID/addon" -d "`cat addon.json`"
    4. Reset the cluster to failure by running the "kbsctl debug set-cluster-status -i CLUSTER_UUID --status Failure" command.
    5. Retry the cluster upgrade operation from the VMware Telco Cloud Automation UI.

VMware Integrated OpenStack
  • Availability Zones are not listed when creating compute profiles for VMware Integrated OpenStack (VIO)-based clouds

    When creating compute profiles for VIO environments, the Availability Zones are not listed.

    Workaround: Restart the Application Service on the corresponding VMware Telco Cloud Automation Control Plane (TCA-CP) appliance.

VMware Cloud Director
  • Alarms are generated only for the default resource pool that is mapped in the Organization VDC for a VNF.

    For Organization VDCs that are backed by multiple resource pools, VMware Telco Cloud Automation displays alarms for those VNFs that are deployed in the default or first resource pool only.

     

  • VNF instantiation on vCD environments fails with the error: 'Unable to create segment' or 'Import VM to VCD failed'.

    This happens when the Internal Network has DHCP enabled and has a long name.

    Workaround: Ensure that the name of the Internal Network (including instantiation prefix) is less than 9 characters.

  • VNF Grant fails for VNFs deployed in VMware Cloud Director environments that are backed by multiple vCenters.

    Grant and Instantiation fail when the backing vCenter entities have the same morefs (managed object references) across different vCenters.

CNF Lifecycle Management
  • Inventory does not show up for certain CNF instances.

    For certain CNF instances that have been deployed or upgraded, the Inventory view does not show any information.

    Workaround:

    1. Go to Virtual Infrastructure and select the appropriate VIM.
    2. Perform a Force-Sync of the full inventory.
  • Workflow outputs are not displayed for CNF upgrades.

    If a CNF contains pre or post CNF upgrade workflows, the outputs for these workflows are not displayed within the CNF upgrade task.

  • A multi-chart CNF instantiation might fail for the second chart when the same namespace is used for all the Helm charts.

    This issue is noticed when instantiating a CNF that has multiple Helm charts.

    Workaround: Retry the instantiation.

  • CNF Instantiation does not work if ptp4l.conf is taken as User Input during instantiation.

    Instantiation fails for CNFs that require PTP customizations that pass the ptp4l.conf file as user input.

    Workaround:

    Edit the CNF catalog and embed the ptp4l.conf file.

    1. Download and extract the CNF catalog.
    2. Add the ptp4l.conf file under the Artifacts/scripts folder.
    3. Edit the NFD.yaml file:
      1. Change the descriptor_id to a unique UUID.
      2. Under infra_requirements > node_components > ptp4l (see the example after these steps):
        • Update the value of source from 'input' to 'file'.
        • Update the value of content from 'PTP4L_CONFIG_FILE' to '../Artifacts/scripts/ptp4l.conf'.
    4. Package and upload the files as a new CSAR.
    5. Create a new CNF instance using the new catalog.
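
    For reference, the edit described in the "Edit the NFD.yaml file" step amounts to changing the ptp4l entry roughly as follows (a sketch; the surrounding NFD.yaml keys are omitted):

        # Before: the ptp4l configuration is requested as user input at instantiation
        ptp4l:
          source: input
          content: PTP4L_CONFIG_FILE

        # After: the ptp4l.conf file is embedded in the CSAR
        ptp4l:
          source: file
          content: ../Artifacts/scripts/ptp4l.conf
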
Catalog Management
  • Editing a catalog and saving it as a new catalog can corrupt the catalog.

    The Save as New Catalog operation replaces all the strings within the descriptor that match the original network function name with the new catalog name.

User Interface
  • Some UI components do not load correctly after upgrading to VMware Telco Cloud Automation 1.9.

    You may notice that Kubernetes clusters are not marked for Upgrade, or the NS instantiation map view does not load correctly.

    Workaround: Force-refresh the UI after the upgrade.

    • Force refresh for Windows users: Ctrl + F5 or Ctrl + Shift + R.
    • Force refresh for Mac users: Cmd + Shift + R.