VMware Telco Cloud Automation 2.2 | 12 JAN 2023 | Build - VM-based: 21068621, HA-based: 21066842 | Release Code: R150
Check for additions and updates to these release notes.
VMware Telco Cloud Automation (TCA) version 2.2 delivers critical fixes and includes the following new capabilities:
Audit logs for all Workflow operations
You can generate audit logs for all workflow operations, including Design, Execute, and Delete.
Improved RBAC support for Workflows
New workflow permission types for Workflow Read and Workflow Execute
New filter types for workflows with additional filter attributes
Direct navigation to vRO UI
Takes the user directly to the Workflow execution run within the vRO UI for easier debugging
The direct vRO launch function is available to System Administrator users only
Workflows tab within NFs for tracking Workflow executions
A dedicated Workflows tab within Network Functions is available to track the workflow executions.
Placeholders for Global Inputs, Outputs, and Variables for Workflow executions
You can view the global inputs, outputs, and variables for workflow executions in the TCA portal.
Improved and automated log collection for workflow executions
The following features are available:
You can download logs directly from the TCA portal
Logs are generated for both workflow steps and the entire workflow execution
Debug or edit workflows during execution
You can modify workflows during execution and save the changes.
Pause / Resume / Retry / Abort Workflow Executions
You can pause and resume workflows across multiple steps. You can also abort workflow executions.
Improved Workflow Designer
Standalone workflows support attachments similar to NF/NS workflows.
JavaScript-based workflows directly from TCA
You can execute JavaScript-based workflows directly from TCA.
New standalone Workflows UI
The following features are available:
Catalog and Execution support
Design and execute workflows from outside an NF or NS
Executions can be based on the NF, NS, or None context
For the None context, you can execute workflows directly against VIMs
RBAC restrictions apply for all workflow executions
Retention-based workflow executions
Delete Workflow executions
Upgrade older workflows to 3.0 schema-based workflows
You can upgrade the older workflows to 3.0 schema-based workflows.
New Schema Version 3.0
Improved validations and error messages when designing Workflows
Mandatory inBindings and outBindings cannot be deleted
New Workflow types
The 2.2 workflow types include the following:
No vRO dependency for new workflow types
You can directly execute kubectl or other shell commands from a pod within the K8S cluster (see the sketch after this list)
Spawns a Pod in the K8S cluster to execute scripts or copy files
Executions based on TCA permissions across multiple contexts. The contexts include Read-Only or Read-Write contexts for Network Functions and VIMs
Execute workflows directly on the workload cluster. You can execute workflows through the Management clusters, if required.
JavaScript-based workflow executions directly from TCA
NOOP (No Operation) - Make decisions natively without executing any workflow
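The pod-based workflow type launches a transient pod inside the target Kubernetes cluster and runs commands there. Purely as an illustration of that pattern outside TCA, a short-lived pod running a kubectl command can be spawned as follows; the image name is an assumption, and the pod's service account needs RBAC permissions for whatever it runs:

# Spawn a short-lived pod in the cluster and run a kubectl command from it.
# bitnami/kubectl is only an example image; any image that ships kubectl works.
kubectl run tca-debug --rm -it --restart=Never \
  --image=bitnami/kubectl:latest -- get pods -A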
You can use the AWS ECR as a repository for deploying the Helm charts.
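As a minimal sketch of how such a deployment is typically driven, assuming Helm 3.8 or later with OCI support, and with placeholder account ID, region, and chart names:

# Log Helm in to the ECR registry (account ID and region are placeholders).
aws ecr get-login-password --region us-east-1 | \
  helm registry login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull a CNF chart stored in ECR as an OCI artifact.
helm pull oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/my-cnf-chart --version 1.0.0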
VMware Telco Cloud Automation provides additional API parameters that help you prioritize CaaS and NF requests by increasing or decreasing the batch size.
The following add-ons are introduced:
Whereabouts
Enhanced Multus support
Cert-Manager
The autoscaler feature automatically controls the replica count of a node pool, increasing or decreasing it based on the workload.
Some of the TCA applications that support high-performance workloads require different network resources to support their functionality at different times.
When an application demands increased resources, TCA enables TKG to create new nodes that support the increased requirements of the application. TCA manages those new nodes accordingly and TKG terminates those additional nodes when they are no longer required. This autoscaling feature saves time and efficiently manages the CSP’s resources.
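In TKG, this behavior is driven by the Cluster Autoscaler. A minimal sketch of the relevant cluster configuration variables follows; the names use the TKG convention and should be verified against your TKG release:

# TKG-style cluster configuration (illustrative; verify variable names for your release).
ENABLE_AUTOSCALER: true
AUTOSCALER_MIN_SIZE_0: 2   # lower bound for the first node pool
AUTOSCALER_MAX_SIZE_0: 5   # upper bound for the first node pool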
From VMware Telco Cloud Automation 2.2 onwards, CaaS Node Pool upgrades ensure that the older nodes are deleted only after the new nodes are customized. Therefore, the Network Function pods can switch over seamlessly without extended downtime.
You can back up and restore the TKG Workload Cluster using Velero
You can remediate the workload clusters that are restored by using the Remedy option available for CNFs
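Outside the TCA UI, the equivalent operations map to standard Velero commands; the backup, restore, and namespace names below are placeholders:

# Back up selected namespaces of the workload cluster.
velero backup create wc1-backup --include-namespaces my-cnf-ns

# Restore the cluster state from that backup.
velero restore create wc1-restore --from-backup wc1-backup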
Certificate Manager is a Tanzu package that can now be installed as an extension for TKG clusters deployed through the Telco Cloud Automation UI. The certificates must be configured through the individual values.yaml files for the CNFs.
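For reference, certificate requests against the cert-manager package use the standard cert-manager.io API; the resources below are an illustrative sketch with placeholder names, and the real values belong in each CNF's values.yaml:

# Illustrative cert-manager resources; issuer, namespace, and DNS names are placeholders.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: my-cnf-ns
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-cnf-cert
  namespace: my-cnf-ns
spec:
  secretName: my-cnf-tls
  dnsNames:
    - my-cnf.example.com
  issuerRef:
    name: selfsigned-issuer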
Machine Health Check is an optional feature that ensures the uptime and health of your TKG Clusters. It is also available for Management Clusters.
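Machine Health Check corresponds to the Cluster API MachineHealthCheck resource; the following is a minimal sketch with placeholder cluster name, namespace, and timeouts:

# Illustrative Cluster API MachineHealthCheck; all names and timeouts are placeholders.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: my-cluster-mhc
  namespace: my-cluster-ns
spec:
  clusterName: my-cluster
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 5m
    - type: Ready
      status: "False"
      timeout: 5m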
Telco Cloud Automation 2.2 supports the high availability of applications by allowing the CSP to run Tanzu Kubernetes Grid nodes on a different host, diversifying the location of the application when a single physical host fails.
CSPs define anti-affinity specifications for the nodes, such as which nodes cannot run on the same host. This enables the application to run distributed across several physical hosts, ensuring that it continues to run even if one of the hosts fails.
TKG 1.6.1 uptake with 1.23-based Kubernetes Clusters:
Support for lifecycle management of TKG 1.6.1 clusters
TKG workload clusters with Kubernetes versions 1.21.14, 1.22.9, and 1.23.10
TKG management clusters with Kubernetes version 1.23.10
TCA 2.1 must be upgraded to TCA 2.2 to use TKG 1.6.1, followed by a TKG management cluster upgrade from Kubernetes 1.22.9 to 1.23.10
VMware Telco Cloud Automation 2.2 allows using Harbor 2.5.4 for all CNF-related deployments.
TKG Clusters provisioned through the Telco Cloud Automation UI have a dedicated interface for NFS traffic (the secondary network name must be tkg-NFS). Therefore, the management, workload, and storage traffic each have their own dedicated network.
Support for Dual Stack environments
Upgrades for Airgap Server: Airgap server deployed for TCA 2.1 can now be upgraded to the TCA 2.2 compatible version
Multi-level restrictions for CNFs accessing Cluster Admin level privileges beyond their namespace
Inherits and exposes Kubernetes security policies via TCA
Automated Policy creation by scanning Helm charts from Harbor
Apply, View and Grant policies for CNFs
Isolated and Permissive modes for CNF deployments
You can deploy and use TCA in a Dual Stack environment with interoperability of various IPv4 and IPv6 components.
As a security necessity and due to the shortage of IPv4 addresses, CSPs require IPv6 for VM-based and cloud-native use of Telco Cloud Automation; without additional IP addresses, CSPs cannot expand their networks.
With Telco Cloud Automation 2.2, CSPs have Dual Stack support where you can have certain interfaces on IPv4 and migrate others to IPv6. This allows CSPs to deploy TCA 2.2 in an IPv6-only (IPv6 Dual Stack) network and at the same time manage IPv6 infrastructure, CaaS, and xNFs.
Selection/deselection of all Helm charts when reconfiguring CNFs
Customizable timeouts per LCM operation through the UI
Enabling/disabling Auto-Rollback on failure of CNF upgrades
Honor Helm chart dependencies during CNF termination
Improved CNF scale limits: Maximum of 50 Helm charts with 50 attachments per CNF
Shows the latest values.yaml in the CNF Inventory
Automatically fetches values.yaml for a CNF
Provision to update values.yaml for a CNF
Directly opens the console of the Pod from the TCA UI
Useful utilities are available within the Pod for easy debugging
Network Function catalog items can be individually enabled or disabled by using the TCA UI.
You cannot use disabled Network Function CSARs for further instantiations unless they are enabled again.
VMware Telco Cloud Automation now supports restoring from backups that have been generated more than two days ago.
Day-0 UI for deploying HA-based TCA
Support for multiple TCA-CPs within one single CN-based TCA-CP cluster
Enable MHC for a TCA cluster
TCA 2.2 introduces the following infrastructure automation features:
Extended error code support
Custom uplink mapping and teaming policy
IPv6 and Dual Stack support
No dependency on CDC deployment for HCP
Update the real-time status at the host level when Host Config Profile is applied on TCA-CP
Support for domain resync when the domain status is in progress
Performance improvement
The ‘About’ screen in the primary UI lists the version of VMware Telco Cloud Automation along with the build number for quick reference.
From the Telco Cloud Automation 2.2 release, the Photon BYOI templates and RAN-optimized BYOI templates for TKG 1.6.1 are more secure by following the STIG security guidelines.
From Telco Cloud Automation 2.2, the minimum VM hardware version for VM-based TCA is upgraded to 15.
vSphere 6.7u3 or later is required to deploy the TCA OVA file.
TCA NF/NS workflow schemas 2.0 and 1.0 are deprecated
Harbor 1.x Partner Systems are no longer supported
Helm v2.x support will discontinue in the next release of TCA
Automated deployment of the Central site, Regional site, and Compute cluster in Infrastructure Automation will be deprecated in the next TCA release.
Command and Control (C&C) integration will be deprecated in the next release of TCA
The Harbor password section and the YAML file with user inputs are removed from the setup for security reasons. A Harbor password is requested during the setup, sync, and deployment phases of the airgap server and the user must enter the password when the script prompts for it.
multus: Do not delete multus addon after it is provisioned. If you delete it, you can neither create nor delete the pods on the workload cluster.
ako-operator: TCA 2.2 does not allow the deletion of ako-operator from the management cluster. However, the users can update Avi Controller credentials and certificates after the ako-operator is provisioned.
Download RAN Optimized BYOI Templates for VMware Tanzu Kubernetes Grid
To download RAN optimized BYOI templates, perform the following steps:
Go to the VMware Customer Connect site at https://customerconnect.vmware.com/.
From the top menu, select Products and Accounts > All Products.
On the All Downloads page, scroll down to VMware Telco Cloud Automation and click Download Product.
On the Download VMware Telco Cloud Automation page, ensure that the version selected is 2.2.
Click the Drivers & Tools tab.
Expand the category VMware Telco Cloud Automation RAN Optimized BYOI Template for TKG.
Corresponding to RAN Optimized Photon BYOI Templates for VMware Tanzu Kubernetes Grid 1.6.1, click Go To Downloads.
On the Download Product page, download the appropriate Photon BYOI template.
Download Photon BYOI Templates for VMware Tanzu Kubernetes Grid
To download Photon BYOI templates, perform the following steps:
Go to the VMware Customer Connect site at https://customerconnect.vmware.com/.
From the top menu, select Products and Accounts > All Products.
On the All Downloads page, scroll down to VMware Telco Cloud Automation and click Download Product.
On the Download VMware Telco Cloud Automation page, ensure that the version selected is 2.2.
Click the Drivers & Tools tab.
Expand the category VMware Telco Cloud Automation Photon BYOI Templates for TKG.
Corresponding to Photon BYOI Templates for VMware Tanzu Kubernetes Grid 1.6.1, click Go To Downloads.
On the Download Product page, download the appropriate Photon BYOI template.
Any CaaS cluster of version 1.20 must be upgraded to a higher version within TCA 2.2. Based on the Kubernetes and TKG deprecation policies, CaaS clusters with version 1.20 are no longer supported. This applies to both v1 and v2 clusters.
Before upgrading to TCA 2.2, you must upgrade all Kubernetes clusters deployed as part of TCA 2.0.x while on TCA 2.1.
Fixed Issue 3037896: "Upgrade VM Hardware Version" is not applied on newly created node pools of v2 CaaS Workload clusters
The node customization option to upgrade VM hardware version to the latest is not applied on newly created node pools of v2 CaaS Workload clusters.
Fixed Issue 3033933: TCA upgrade fails after restoring the backup bundle
After restoring a backup bundle, the upgrade of HA-based TCA to a later build fails.
Fixed Issue 3023842: POST API request fails if the domain name already exists
When you send a POST API request to create a new domain, if the domain name already exists, the new domain replaces the older one. If the new and old domains are of different types or have different specifications, the new host provisioning under the affected domain fails.
Fixed Issue 3031896: Helm service fails to start on the TCA-CP appliance if there are orphan CNF entries in the TCA-CP database
Fixed Issues 3008136 and 2985174: Collect Tech-Support Bundle UI issue
After deleting a transformed cluster, the Collect Tech-Support bundle UI still lists the deleted cluster names.
Fixed Issue 3008135: After deleting a transformed cluster, creating a v1 cluster using the same name or IP address fails
Transform a workload cluster ABC with endpoint IP 10.10.10.10. After transforming the cluster, delete the workload cluster ABC using the v2 Delete API option. Now, when you create a cluster using v1 API with the name ABC or with endpoint IP 10.10.10.10, it fails.
Fixed Issue 3036626: Customizations on the Node Pool fail when the Node Pool is in maintenance mode
When a Node Pool is in maintenance mode, any further customizations on the Node Pool might fail.
Fixed Issue 3008023: Configuring SFTP server with a passphrase throws an error
When configuring the SFTP server with a passphrase for Backup and Restore, the following error is displayed:
"java.io.FileNotFoundException: /common/appliance-management/backup-restore/id_rsa_backup_restore.pub (No such file or directory)", "Failed to configure FTP server!"
Fixed Issues 3000826 and 2999956
After backing up and restoring from one VMware Telco Cloud Automation appliance to another, operations on the CaaS clusters fail.
Fixed Issue 3001077: The kubectl command overwrites from the beginning of the line after a few characters on the remote SSH terminal.
Fixed Issue 3039423: Unable to update kubeconfig for Kubernetes Clusters through the TCA Control Plane Appliance Management UI.
Issue 3067290: Infrastructure Automation does not support reverting stack from Dualstack to IPv6 or IPv4
You cannot revert a stack from Dualstack to IPv6 or IPv4. Doing so causes unpredictable failures in ZTP functionality.
Issue 3061664: Distributed Virtual Switch with a management network must be mapped to use vmnic that has a vmk0 interface
When provisioning a cell site host in Infrastructure Automation, if a Distributed Virtual Switch (DVS) with the management network is not mapped to use the vmnic that has the vmk0 interface, the network migration of the host to that switch fails. However, the failure is not captured, and the host provisioning is marked as successful.
Workaround:
If a cell site group domain has multiple Distributed Virtual Switches, do the following:
Modify the configuration of the cell site group domain to map the management network switch to the vmnic that has the vmk0 VMKernel network interface attached.
Perform a full resync on the host.
After the configuration, the host is provisioned again, and the network migration is successfully completed from the host to the relevant switch.
Issue 2478240: The UI Heal > Recreate action fails if VDUs with SR-IOV are enabled in VIO (backend)
You cannot attach SR-IOV ports to the existing servers in VIO.
Workaround:
Instantiate a new VNF instead of healing it.
Issue 3075617: Conversion of CaaS v2 cluster to Stretch cluster fails
Editing an existing CaaS v2 Workload cluster to add a secondary TCA-CP endpoint fails.
Issue 3068843: Creating a v2 cluster with Antrea does not progress from the processing state when using an NSX overlay network where Enhanced Data Path (EDP) is enabled
Antrea with NSX-T fails to create the v2 cluster node pool or provision the nodeconfig-operator when using an NSX overlay network where Enhanced Data Path is enabled.
Workaround 1:
This is the preferred workaround because it needs to be applied only on the management cluster's control plane node. However, the workaround does not persist across restarts of the tca-kubecluster-operator pods; reapply it if those pods restart.
Update ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD from false to true on the cluster operator:
SSH into the management cluster Control Plane node.
Access the Cluster Operator pod by running kubectl exec -it -n tca-system deploy/tca-kubecluster-operator -- bash
Run the following command:
sed -i 's/ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false/ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: true/g' /root/.config/tanzu/tkg/providers/config_default.yaml
Workaround 2:
If you do not apply Workaround 1, then consider this one. Apply this workaround only after you have created the workload cluster and it is in an error state.
Update the Antrea addon and set disableUdpTunnelOffload to true.
Create a v2 workload cluster.
Click Add-Ons.
Choose the Antrea addon named antrea-tca-addon and click Edit corresponding to it.
Click Cancel on the pop-up page to configure No SNAT & Traffic Encap Mode.
Click Edit corresponding to Antrea and then click Next.
Click Custom Resources (CRs), append disableUdpTunnelOffload: true under spec/config/stringData/values.yaml/antrea/config, and then click Deploy.
Sample:
metadata:
  resourceVersion: 1
  name: antrea-tca-addon
  clusterName: {cluster_name}
spec:
  name: antrea
  clusterRef:
    name: {cluster_name}
    namespace: {cluster_name}
  config:
    stringData:
      values.yaml: |
        antrea:
          config:
            disableUdpTunnelOffload: true
Issue 3070057: After upgrading to Telco Cloud Automation 2.2, the older Kubernetes versions are still displayed on the UI.
If TCA-M and the TCA-CPs registered under it are not on the same latest TCA version, then the UI still lists the older supported Kubernetes versions when creating the cluster template or upgrading the v1 clusters.
Workaround:
Upgrade the TCA-CPs registered on the TCA-M to the same latest TCA version.
Issue 3068951: Changing the autoscale limit does not change the replica count
Using autoscaler on a cluster does not automatically change its node group size. Therefore, changing the maximum size does not scale down the cluster size. Also, when a scale-down is in progress and you edit the maximum size of the cluster, an error occurs.
Issue 3064428: The cainjector and webhook pods of the cert-manager addon are stuck in CrashLoopBackOff status
The cainjector and webhook pods of the cert-manager addon are stuck in CrashLoopBackOff status, and the cert-manager addon status on the UI is provisioned as unhealthy.
Workaround:
Restart the CrashLoopBackOff pod with the following command:
kubectl delete pod -n cert-manager <crash-pod-name>
Issue 3064356: After the TCA appliances are upgraded successfully, no warning message is displayed on the UI to upgrade the add-ons for the management cluster
During the TCA appliance upgrade, if the update-supported CaaS versions job is called before the update tbr job completes on the TCA-CP VM, the addon versions are picked up from the old compatible TCA BOM Release (TBR), and therefore the update addon warning is not displayed for the management cluster.
This might occur only if TCA appliances are upgraded to a newer build of the TCA 2.2 version.
Workaround:
Restart the app-engine service on all the TCA-CP appliances.
Issue 15130: Second cluster creation is stuck when the network of one workload cluster endpoint IP is unreachable in a large-scale environment
Creating a second cluster takes a long time when the endpoint IP network of a workload cluster with a large-scale node pool is unreachable.
Issue 3094001: CNF upgrade fails to prepopulate the namespace field
During a CNF upgrade, the namespace field is not prepopulated with the default value with which the CNF was instantiated.
Workaround:
Enter the same namespace during the upgrade operation if the namespace is not loaded.
Issue 3074514: CNF update currently allows only a lesser or equal number of VDUs or Helm charts in the target CSAR compared to the source CSAR
Having more VDUs or Helm charts in the target CSAR than in the source CSAR during a CNF update operation results in an error.
Issue 3040213: Deletion of the multus add-on prevents the creation and deletion of pods
Deleting the multus add-on prevents the creation and deletion of pods. The addons and CNFs cannot be installed or uninstalled after deleting the multus addon from the workload cluster.
Workaround:
Install the multus addon again.
Issue 3050896: vmconfig-operator pod does not work as expected due to "cgroups:cgroup deleted: unknown issue" error inside containerd
vmconfig-operator may not work as expected due to the cgroups:cgroup deleted: unknown issue error.
When the cgroups:cgroup deleted: unknown issue error occurs in containerd, vmconfig-operator may not work even though the status of the pod is Running. For example, the TCA node pool creation or NF instantiation might fail with a timeout.
Workaround:
Delete the faulty node and recreate it.
Issue 3058278: Memory reservation displays '0' even though the SRIOV device is successfully added to vCenter
When you add the SRIOV device, vmconfig-operator sends a request to vCenter to add the SRIOV device and reserve the memory for the VM. vCenter returns a success message, but the memory reservation shows '0'.
Workaround:
Do one of the following:
Log in to vCenter, edit the VM, and manually reserve all the memory.
Delete the existing node pool and create a new one.
Issue 3068164: Certificate generation is skipped in the deployment phase even if user-inputs.yml requests certificate generation
In the deployment phase, the certificate is not generated even when the user-inputs.yml file requests certificate generation. The certificate is auto-generated only when the FQDN changes or when the certificates no longer exist in the root directory.
Workaround:
The certificates generated in the setup phase are stored in the {root-dir}/airgap/certs/ folder. In the deployment phase, if you need to generate new certificates, then apart from setting auto_generate: True in user-inputs.yml, you can either use a different FQDN from the setup phase or remove the {root-dir}/airgap/certs/ folder.
Issue 3068146: The Harbor credentials are not cleared when airgap operations such as setup, sync, or deploy fail with a wrong Harbor password
When airgap operations such as Setup, Sync, and Deploy fail because of a wrong Harbor password, the Harbor credentials are not cleared, and all subsequent operations also fail.
Workaround:
Clear the Harbor credential file {root-dir}/airgap/scripts/vars/harbor-credential.yaml when the Setup, Sync, Deploy, and Import operations fail.
Issue 3067906: Airgap techsupport collection fails with insufficient space on /tmp
If the server has been running for a long period, the Airgap techsupport collection fails with insufficient space on /tmp.
Workaround:
Edit the ansible playbook {root-dir}/airgap/scripts/playbooks/airgap-support.yml.
Modify the locations in the Create logs dir and Package support-bundle sections of the playbook.
Change the default location /tmp to a different folder.
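As an illustration of the last two steps, the modified task inside the playbook might look like the following; the target folder /var/airgap-logs is an arbitrary example:

# Example edit in airgap-support.yml: move the logs directory off /tmp.
- name: Create logs dir
  file:
    path: /var/airgap-logs/airgap-support   # previously under /tmp
    state: directory
    mode: '0755'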
Issue 3067923: Concurrent upgrade of the v1 workload cluster and management cluster Kubernetes might affect the management cluster context and impact the upgrade result
Before upgrading the cluster, you must switch the management cluster context by using the Tanzu login command. If you concurrently upgrade the v1 workload cluster and management cluster Kubernetes versions, it might affect the management cluster context and impact the upgrade result.
It is recommended not to upgrade the management cluster and v1 workload cluster in parallel.
Issue 3069022: Upgrading a CN-based setup after Backup and Restore brings the system up with the destination system IPs
Workaround:
Edit the metallb-config configmap with the correct IPs by running the following command:
kubectl edit configmaps metallb-config -n metallb-system
Restart metallb-controller by running the following command:
kubectl rollout restart deployment metallb-controller -n metallb-system
Issue 3069018: Containers cannot attach to volumes when a deleted node is recreated by Machine Health Check
Workaround:
To delete the VolumeAttachment objects associated with the deleted node tca-cluster-xxxxxx-wp-xxxxxx-xxxxx, run the following command:
kubectl get volumeattachment |grep tca-cluster-xxxxxx-wp-xxxxxx-xxxxx | awk '{print $1}' | xargs -I {} kubectl patch volumeattachment {} -p '{"metadata":{"finalizers":[]}}' --type=merge
Issue 3077842: Unable to add host {hostname} to the inventory
In a Dual Stack-enabled environment when adding the Dual Stack host to the cell site group where the parent domain is configured with IPv4 only, the host addition fails and displays the error “Unable to add host {hostname} to the inventory. The name ‘{hostname}’ already exists.”
Workaround:
Remove the IPv6 FQDN entry of the host and add the host to the cell site group.
Issue 3063633: Calico CNI of the CaaS workload cluster is unhealthy when the multus interface is configured on day zero
When a workload cluster is deployed with a node pool having an additional interface connected to different networks other than the management network, Calico gets stuck in an unhealthy state.
Workaround:
Pause kapp reconciling calico.
[root@td-42 /home/admin]# wk patch pkgi/calico -n tkg-system -p '{"spec":{"paused":true}}' --type=merge
packageinstall.packaging.carvel.dev/calico patched
[root@td-42 /home/admin]# wk get app -n tkg-system calico
NAME     DESCRIPTION       SINCE-DEPLOY   AGE
calico   Canceled/paused   6m50s          2d19h
Patch calico daemonset.
[root@td-42 /home/admin]# wk patch ds -n kube-system calico-node --type "json" -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"IP_AUTODETECTION_METHOD","value":"interface=eth0"}}]'
daemonset.apps/calico-node patched
Wait until the DS restarts and then verify.
[root@td-42 /home/admin]# wk get ds -n kube-system calico-node
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   2d19h
Issue 3057180: Unable to mount NFS on the secondary interface
On the IPv6 CaaS workload cluster, you cannot mount NFS on the secondary interface because of a Photon OS limitation. Photon OS requires a network that has RA+DHCP to install an IPv6 address on the interface. However, RA on the secondary interface causes another default route to be added to the IPv6 routing table, which breaks the management network connectivity.
The systemd-networkd version on Photon OS 3 is older and does not contain the required parameters to reject the gateway route installation in the routing table.
Though the eth0 management interface is the default gateway, when a secondary interface is added, another default gateway is installed in the IPv6 routing table, which breaks all management connectivity.
Issue 3065195: Resync fails on the provisioned host/domain in a Dual Stack-enabled environment
In a Dual Stack-enabled environment, the provisioned host or domain may fail to resync due to the following reasons:
Modification or removal of DNS entries.
Modification of network interfaces, such as disabling the IPv4 or IPv6 interface.
Issue 3069060: TCA appliance loses the DHCP IPv6 IP when the lease expires and the IPv4 static IP is inaccessible
You can use static or DHCP addressing to deploy Telco Cloud Automation with the IPv4 management interface. However, IPv6 or dual stack mode must use static IPs only. Activating DHCPv6 is not recommended.
Workaround:
Restart the network or reboot the appliance to restore the interface IPs and disable DHCPv6.
Issue 3070391: Enabling or disabling of NF/NS Catalog impacts only instantiation
Enabling or disabling of NF/NS Catalog impacts instantiations only. The Update, Upgrade, and Upgrade Package operations are not blocked.
Issue 3056462: No support for ECR registration through partner systems in an airgap environment in TCA 2.2.
Issue 3054392: The option to provide repo details during the instantiation of CNF is not supported for ECR.
Issue 3087794: RBAC filters for Workflow instances do not function correctly
Issue 3060372: Users need to click on Save Workflow to upload files while creating a new standalone workflow instance.
Issue 3059448: The Run Script through VM Tools workflow appears in the CNF and NS Workflow designer but is not supported for CNFs and network services.
Issue 3074508: Users can execute Workflows on an NF/NS instance even if an LCM operation is in progress
Users can execute Workflows on an NF/NS instance even when there is an ongoing LCM operation or Workflow running on that instance.
Workaround:
Track the Workflow executions and ensure that the LCM operations are completed before triggering new Workflows.
Issue 3074460: Any executions for 3.0 schema Workflows are not shown under the tasks of NF / NS instances
Workaround:
Access 3.0 schema runs through the dedicated Workflows tab within the NF/NS or through the new global Workflows tab.
Issue 3055138: Global tags are not applicable for Workflows.
Issue 3085524: Installation of the load-balancer-and-ingress-service addon fails for v2 Workload clusters whose names are longer than 29 characters
If a v2 Workload cluster name is longer than 29 characters, the installation of the load-balancer-and-ingress-service addon fails.
Workaround:
Ensure that the workload cluster name is no longer than 29 characters before installing the load-balancer-and-ingress-service addon.
Issue 3061122: TCA appliance does not support switching from Dualstack to IPv4-only or IPv6-only mode
When the TCA appliance is converted from Dualstack mode to IPv4-only or IPv6-only mode, the gateway information from the wired.network file is omitted. Due to this omission, the appliance UI becomes inaccessible.
Issue 3086288: Upgrading a Dualstack environment from TCA 2.2 to future releases results in issues when performing CaaS operations
Upgrading a Dualstack environment from TCA 2.2 to a future release causes issues during CaaS operations because the file /opt/third-party/environment-vars-config.sh is not backed up during the upgrade, which leaves the export IPv6_system=true flag missing from the file.
Workaround:
SSH to TCA-CP.
Edit the /opt/third-party/environment-vars-config.sh file.
Add the line export IPv6_system=true.
Save and restart the appliance.
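The edit in the two preceding steps can also be made with a single shell command that appends the same flag named above:

# Append the missing flag, then restart the appliance.
echo 'export IPv6_system=true' >> /opt/third-party/environment-vars-config.sh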
Issue 1226141: Open Terminal does not work for users with permissions that are backed by vCenter-AD groups
Workaround:
Configure the username directly within TCA permissions.
Configure TCA to use AD as the authentication provider directly instead of using AD through vCenter.
Issue 3096058: Open Terminal fails if TCA-M logged-in user has a different domain
Open Terminal fails if the TCA-M logged-in user has a domain different from the one configured within TCA-M Appliance Management.
Workaround:
Use the same domain name for the primary user configured within TCA-M Appliance Management.
The primary user must have vCenter admin access to read users and groups.
Chroot image is based on LFS 11.1
GCC updated from 7.3.0 to 11.2.0
Glibc updated from 2.24 to 2.35
Binutils updated from 2.28 to 2.38
Linux kernel updated from 4.4.184 to 5.19.2
Perl updated from 5.24.1 to 5.34.0
Systemd updated from 229 to 251