
Updated on: 10 JUNE 2021

VMware Telco Cloud Automation | 10 JUNE 2021 | R146 | Build 18002366

 

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Resolved Issues
  • Known Issues

What's New

This is a maintenance release. It fixes several critical bugs and adds support for NSX-T 3.0.3. All other features remain the same. For details, see the VMware Telco Cloud Automation 1.9 Release Notes.


 

Resolved Issues

  • Upgrading a workload cluster fails because the node config-operator times out.

    When you try to upgrade a workload cluster in VMware Telco Cloud Automation (TCA) 1.9, the upgrade fails because the node config-operator times out.

    1. SSH into TCA-CP.
    2. Get the cluster UUID:
      kbsctl show workloadclusters

      Replace the UUID in the URLs of step 3 and step 4 with this cluster UUID.

    3. Retry the addon PUT request so that the addon is applied successfully (a consolidated sketch of steps 2 through 4 follows this list):
      1. curl -X GET "http://127.0.0.1:8888/api/v1/workloadcluster/415fd0b0-9eda-4a13-a3b7-4f9948e9ea4f/addon" > addon.json
      2. If Harbor is configured, add the Harbor password to addon.json: "externalHarborPwd": "harbor_password"
      3. curl -X PUT "http://127.0.0.1:8888/api/v1/workloadcluster/415fd0b0-9eda-4a13-a3b7-4f9948e9ea4f/addon" -d "`cat addon.json`"
    4. Reset the cluster status to Failure:
      kbsctl debug set-cluster-status -i 415fd0b0-9eda-4a13-a3b7-4f9948e9ea4f --status Failure
    5. Retry the upgrade from the VMware Telco Cloud Automation user interface.
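
    Steps 2 through 4 can be collected into a short shell sketch, run on the TCA-CP appliance. This is a minimal illustration, not an official script: it assumes the cluster UUID has already been copied from the output of kbsctl show workloadclusters, and the Harbor password edit remains a manual step.

      #!/bin/bash
      # Sketch of the addon-retry workaround. CLUSTER_UUID is an example value;
      # copy the real UUID from "kbsctl show workloadclusters".
      CLUSTER_UUID="415fd0b0-9eda-4a13-a3b7-4f9948e9ea4f"
      BASE_URL="http://127.0.0.1:8888/api/v1/workloadcluster/${CLUSTER_UUID}"

      # Step 3a: fetch the current addon configuration.
      curl -X GET "${BASE_URL}/addon" > addon.json

      # Step 3b: if Harbor is configured, manually add
      # "externalHarborPwd": "harbor_password" to addon.json before continuing.

      # Step 3c: push the addon configuration back.
      curl -X PUT "${BASE_URL}/addon" -d "$(cat addon.json)"

      # Step 4: reset the cluster status so the upgrade can be retried.
      kbsctl debug set-cluster-status -i "${CLUSTER_UUID}" --status Failure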
  • In Kubernetes 1.20 and later, you cannot create a Persistent Volume (PV) using NFS-client.

    In Kubernetes 1.20 and later, dynamic provisioning of Persistent Volumes through the NFS-client provisioner fails.

    Upstream issue: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25

    1. Edit the API server configuration:
      sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
    2. Add the `RemoveSelfLink` feature gate and set its value to false:
      spec:
        containers:
        - command:
          - kube-apiserver
          - --feature-gates=RemoveSelfLink=false # <--- Insert this line
    3. Reload the API server configuration:
      sudo chmod 755 /etc/kubernetes/manifests/kube-apiserver.yaml
      kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
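
    After the API server restarts, you can optionally verify that the feature gate took effect and that NFS-backed provisioning works again. The check below is a sketch: it assumes a kubeadm-style control plane where the API server pod carries the component=kube-apiserver label, and an NFS-client storage class named nfs-client; adjust both to your environment.

      # Confirm the running kube-apiserver picked up the feature gate.
      kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep RemoveSelfLink

      # Create a small test PVC against the (assumed) nfs-client storage class.
      cat <<EOF | kubectl apply -f -
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-test-pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: nfs-client
        resources:
          requests:
            storage: 1Gi
      EOF

      # The PVC should reach the Bound state once the provisioner reacts.
      kubectl get pvc nfs-test-pvc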
  • Tanzu Kubernetes Grid (TKG) does not power on the VMs automatically, so VMware Telco Cloud Automation reconfigurations fail.

    After all master nodes of a workload cluster are powered off or migrated, the nodeconfig-operator pod can get stuck in a "nodeAffinity" state, which causes node pool customization to fail with the following reason:

    "failed calling webhook \"validator.nodeconfig.acm.vmware.com\": 
    Post \"https://nodeconfigvalidator.tca-system.svc:443/validate-nodeconfig?timeout=5s\": 
    dial tcp 100.70.8.55:443: connect: connection refused".
    1. SSH into TCA-CP.
    2. Use CCLI to switch the kubeconfig context to the target workload cluster.
    3. To list and find the nodeconfig-operator pods, use the command:

      kubectl get pods -n tca-system

      You will find two or more nodeconfig-operator pods; one of them is in the "nodeAffinity" state.

    4. To delete all nodeconfig-operator pods, use the following command (a scripted sketch appears after this list):
      kubectl delete pod <nodeconfig-operator-pod-names> -n tca-system

      Wait for Kubernetes to recreate nodeconfig-operator pods.

    5. Retry node customization through the VMware Telco Cloud Automation user interface.
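
    The find-and-delete steps can also be scripted. The sketch below assumes the kubeconfig context already points at the workload cluster and that the affected pods are identifiable by the nodeconfig-operator name prefix; xargs -r is a GNU extension.

      # List the nodeconfig-operator pods; one of them is stuck in "nodeAffinity".
      kubectl get pods -n tca-system | grep nodeconfig-operator

      # Delete every nodeconfig-operator pod by name; Kubernetes recreates them.
      kubectl get pods -n tca-system --no-headers \
        | awk '/nodeconfig-operator/ {print $1}' \
        | xargs -r kubectl delete pod -n tca-system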
  • VNF Grant fails for VNFs deployed in VMware Cloud Director environments that are backed by multiple vCenters.

    Grant and Instantiation fail when the backing vCenter entities have the same MoRefs (managed object references) across different vCenters.

  • Physical Network Adapter ordering is not honored when there are more than 10 Physical Network Adapters.

    VMware Telco Cloud Automation 1.9 does not honor the indices of the Physical Network Adapters when more than 10 Physical Network Adapters are available on a VMware ESXi host.

    Manually re-configure the DVS uplinks using VMware vCenter Server.

  • Additional SSD disks are marked as HDD and claimed as Capacity Tier in vSAN.

    VMware Telco Cloud Automation 1.9 marks all additional SSD-based disks as HDDs and assigns them as Capacity Tier automatically.

    Re-configure vSAN separately using VMware vCenter Server to set the capacity tier, cache tier, and unclaimed disks as required.

Known Issues

Apart from the resolved issues listed above, the known issues in this release remain the same as in the previous release. For details, see the VMware Telco Cloud Automation 1.9 Release Notes.