Updated on: 08 December 2021
What's in the Release Notes
The release notes cover the following topics:
About VMware Telco Cloud Automation
VMware Telco Cloud Automation is a cloud orchestration solution that accelerates the time-to-market of modern network functions and services. It provides a simplified life-cycle management automation solution, across any network and any cloud.
For more information, see the VMware Telco Cloud Automation Blog.
Supported Releases
VMware Telco Cloud Automation 1.9
R146 | 6th April 2021 GA
VMware Telco Cloud Automation 1.8
R145 | 12th January 2021 GA
VMware Telco Cloud Manager 1.8 | VMware Telco Cloud Automation Control Plane 1.8 | Build 17432894
New Features
- Product Interoperability
VMware Telco Cloud Automation adds support for the following:
- VMware Tanzu Kubernetes Grid (TKG) 1.2
- VMware vSphere 7.0 U1
- VMware NSX-T Data Center 3.1
- VMware Cloud Director 10.2
- VMware Integrated OpenStack 7.0, 7.0.1
- VMware vRealize Orchestrator 8.2
- Kubernetes
- 1.19.1
- 1.18.8
- 1.17.11
- Infrastructure Automation
- Support to add multiple DNS and subdomain for Management appliances.
- Support to use custom naming schemes for Management appliance.
- Support to override DNS, NTP, and Proxy at the domain level so that each domain can be built using services in proximity.
- Support for deploying XLARGE Edge in NSX-T.
- Support for multiple DVSs with different physical uplinks and distribution of vmkernel networks across them.
- Ability to support up to 15 physical uplinks for each ESX host.
- Ability to create application networks with VLAN or overlay backing for core as well as cell sites.
- Ability to group multiple cell sites using Cell site groups and manage the common cell site configurations.
- Support for services like storage (VSAN and local datastore) and networking (NSX-T) on compute clusters.
- Ability to specify multiple VMware Tanzu Kubernetes Grid (TKG) and HAProxy images.
- Ability to delete un-provisioned hosts.
- Enhancements to backend tasks to show more information about the task in progress.
- CaaS Infrastructure Lifecycle Management
- Improved the robustness of the Container as a Service (CaaS) infrastructure automation engine by increasing the number of allowed parallel CaaS lifecycle management operations.
- CaaS cluster upgrades support – VMware Tanzu Kubernetes Grid (TKG) CaaS clusters and node pools can now be upgraded by VMware Telco Cloud Automation.
- CaaS Infrastructure addon improvements:
- Support for overriding Storage Class Name
- vSphere datastore can be selected for vSphere-CSI plugins during cluster deployment
- Support for setting default Storage Class
- CaaS Infrastructure and node customization (late binding) improvements:
- VMconfig operator support
- VFIO-PCI driver binding
- VMX settings for HugePages
- Tuned settings can be embedded into CNF catalogs
- PCI passthrough devices (PTP)
- Intelligent node placement - Best effort VM/Host anti-affinity, CPU/Memory pinning with NUMA awareness, PCI Bridge and VMX settings
- Container Network Interface (CNI) update – Antrea:
- Replaced Calico with Antrea as the default CNI.
- Ability to install and configure Antrea as part of CaaS lifecycle management.
Note: Calico is still supported as a CNI for existing clusters. However, newly created Management clusters can use only Antrea.
- Role-Based Access Control (RBAC) for CaaS infrastructure automation:
- A new privilege for CaaS infrastructure design will allow authorizing certain users to manage CaaS infrastructure templates.
- Ability to skip Node Customizations during CNF instantiation.
- Upload / Download of payload request for cluster deployment.
VMware Telco Cloud Automation now allows users to upload a payload request for cluster deployment, which auto-fills the entire wizard. Initially, users download the payload at the end of the wizard. They can then modify the downloaded payload and upload it for subsequent cluster deployments.
- Support for Harbor 2.x as a Partner System.
- VNF/CNF and NS Lifecycle Management
- Configuration management - Ansible integration:
Allows VMware Telco Cloud Automation to configure VNFs and CNFs (xNFs) and NSs using Ansible playbooks. Ansible playbooks can now be packaged within the xNF or NS CSAR package. VMware Telco Cloud Automation copies the playbooks to the destination server and runs them during the instantiation operation. This simplifies the onboarding process and reduces manual operations.
- Pre-instantiation network creation:
VMware Telco Cloud Automation supports dynamic network creation during the VNF instantiation phase. Networks and network attachments in the network functions (connection points) are described and modeled in the TOSCA-based VNF descriptor. During the instantiation of the VNF, VMware Telco Cloud Automation creates the networks by calling the SDN controller (NSX) using custom workflows and assigns the created networks to the VNF components as per the model in the descriptor.
- VNF/CNF lifecycle management improvements:
- Grant or Grantless mode support
- Auto-rollback option is now supported for CNFs
- Workflow enhancements
- VMware Telco Cloud Automation supports NETCONF actions such as get, get-config, merge, and replace.
- Ability to copy binary files from the CSAR package to a destination VNF or CNF (maximum file size limited to 20 MB).
- Support for custom timeout for each workflow step.
- Upload / Download of payload request for Network Service instantiation.
VMware Telco Cloud Automation now allows users to upload a payload request for NS instantiation, which auto-fills the entire wizard. Initially, users download the payload at the end of the wizard. They can then modify the downloaded payload and upload it for subsequent NS instantiations.
- VNF instantiation - OVF properties are automatically added to deployed VDUs and VMs as specified in the VNF catalog (if they are not already present in the VM Template).
- VMware Telco Cloud Automation Platform Robustness
- License management enhancements:
Improved the licensing mechanism of VMware Telco Cloud Automation so that it can function in environments where continuous internet access is not always available or is forbidden due to security regulations. However, internet connectivity is still required for initial license activation.
- Online scheduled backups now support hourly scheduling.
- Infrastructure-level high availability:
vSphere host-level high availability is a verified solution for increasing the resiliency of VMware Telco Cloud Automation and VMware Telco Cloud Automation Control Plane (TCA-CP) appliances.
- Telco Cloud Automation Control Plane Tech Support Bundles now include deployed Kubernetes cluster log bundles.
- Ability to manually check for upgrades.
- Support to change the Appliance Management user interface certificate.
- Reduced the time required to start or restart the Application Service (app-engine) for both VMware Telco Cloud Automation and VMware Telco Cloud Automation Control Plane (TCA-CP). Also reduced the time required to reach readiness after an HA failover or an upgrade.
Upgrade and Compatibility Considerations
Important Notes
- In Kubernetes 1.19.x, the API version of CustomResourceDefinition is updated from v1beta1 to v1 (see the example after this list).
- Starting from VMware Telco Cloud Automation version 1.8, Whereabouts CNI is not supported for creating a Kubernetes cluster. You can deploy Whereabouts CNI as a CNF on the required cluster.
- After you upgrade VMware Telco Cloud Automation to version 1.8, the existing node pool on the existing Workload cluster can only support the Bootstrapper NodeProfile API to perform node customization operations. A new node pool supports the new NodePolicy API. To perform further operations on the node pool, you must upgrade the node pool. This converts the NodeProfile API to NodePolicy API.
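For illustration, a CustomResourceDefinition manifest targeting Kubernetes 1.19.x declares the v1 API version instead of v1beta1. This is a minimal, hypothetical fragment (the CRD name is made up, and the field-level differences between v1beta1 and v1 are not shown):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com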
API Updates
GET Cluster Status
The following operations do not change the status of the cluster:
- Edit Cluster Config
- Resize Nodes
- Add/Delete Node Pools
- CNF Customization
To get the status of the cluster, you must use the operation ID that is returned for each operation. For example, the API to fetch the status of a task is https://TCA-IP/hybridity/api/infra/k8s/tasks.
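For example, a minimal sketch of querying the tasks endpoint with curl and locating the returned operation ID in the response (the HTTP method shown and the authentication header are assumptions, not the exact API contract):
curl -k -H "Accept: application/json" \
  -H "Authorization: <session token or headers required by your deployment>" \
  "https://TCA-IP/hybridity/api/infra/k8s/tasks"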
The GET command on a Kubernetes cluster displays only Active and Not Active states. This is only a semantic change and not a schema change.
GET Node Pool Status
The following operations do not change the status of the node pool:
- Resize Nodes
- Edit Node Pool
- CNF Customization
To get the status of the node pool, you must use the operation ID that is returned for each operation. For example, the API to fetch the status of a task is https://TCA-IP/hybridity/api/infra/k8s/tasks.
Create Cluster
The following field is mandatory:
- endpointIP - The static IP address required for the Master nodes.
The following field is not required:
- lbTemplate
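An illustrative fragment of a cluster-creation payload showing the mandatory field (the name and address values are placeholders, and a real payload contains many more fields than shown here):
{
  "name": "workload-cluster-01",
  "endpointIP": "192.168.100.50"
}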
Create/Edit Template
The supported values for the following field have changed:
kubernetesVersion
Backup and Restore API Endpoint Suffix Update
The Backup and Restore API endpoint suffix is updated from .../hcx to .../tca.
For example, https://tca-ip:9443/backup/hcx is now updated to https://tca-ip:9443/backup/tca.
Infrastructure Automation API Changes
The API to create or edit domains is now updated to PUT /hybridity/api/ztp/scheme1/config
- You need not mention the TCA appliance in the appliances section of the cloud_spec.json file as input. Any entry for an appliance of type TCA must be removed.
- The images section of the cloud_spec.json file has changed. The kube images are now specified as a string array instead of a string. This allows you to upload multiple images for Kubernetes. Also, beginning with VMware Telco Cloud Automation version 1.8 and VMware Tanzu Kubernetes Grid version 1.2, haproxy images are not required. You can keep the haproxy field empty during a new deployment. (An illustrative fragment follows this list.)
- The fields adminPassword and vcfPassword are not required for appliances of the type CLOUD_BUILDER, SDDC_MANAGER, and VRO in the cloud_spec.json file.
- You cannot add cell sites to a central or regional domain. To add a cell site, create a special domain of type CELL_SITE_GROUP. The CELL_SITE_GROUP domain can have a Central Site or Regional Site as the parent domain. The VC of the parent domain is used to add and manage the ESXi hosts added under the CELL_SITE_GROUP domain. When adding a host to the CELL_SITE_GROUP domain, the type of the host must be set to CELL_SITE_GROUP.
- To download a sample cloud_spec.json file, run the GET /hybridity/api/ztp/scheme1/template API.
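A minimal sketch of the updated images section in cloud_spec.json (the image file names are placeholders; only the kube string array and the empty haproxy field reflect the change described above):
"images": {
  "kube": [
    "photon-3-kube-v1.19.1.ova",
    "photon-3-kube-v1.18.8.ova"
  ],
  "haproxy": ""
}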
Static IP Address Requirement for Master Nodes
There is an additional requirement for a Static IP address when deploying VMware Tanzu Kubernetes Grid version 1.2 on VMware Telco Cloud Automation version 1.8 clusters. Every cluster requires one static IP to be assigned to it to act as the load balancer endpoint for the Master nodes of that cluster. The static IP address must belong to the same network subnet as the DHCP address of the Master nodes, but must not be a part of the DHCP pool.
For example, if the Master nodes are deployed in the Management network subnet 192.168.1.0/24, there must be a separate static IP pool and a separate DHCP pool. The static pool can be the first 30 IPs (192.168.1.1 - 192.168.1.30), and the DHCP pool can be the remaining set of IPs (192.168.1.31 - 192.168.1.250). During deployment, you must provide a load balancer/Master node endpoint IP from this static pool.
Static IP Address Requirement for Kubernetes Control Plane
A set of static virtual IP addresses must be available for all the clusters that you create, including both Management and Tanzu Kubernetes Grid clusters.
- Every cluster that you deploy to vSphere requires one static IP address for Kube-Vip to use for the API server endpoint. You specify this static IP address when you deploy a management cluster. Make sure that these IP addresses are not in the DHCP range but are in the same subnet as the DHCP range. Before you deploy management clusters to vSphere, make a DHCP reservation for Kube-Vip on your DHCP server. Use an auto-generated MAC Address when you make the DHCP reservation for Kube-Vip so that the DHCP server does not assign this IP to other machines.
- Each control plane node of every cluster that you deploy requires a static IP address. This includes both Management clusters and Tanzu Kubernetes Grid clusters. These static IP addresses are required in addition to the static IP address that you assign to Kube-Vip when you deploy a management cluster. To make the IP addresses that your DHCP server assigned to the control plane nodes static, you can configure a DHCP reservation for each control plane node in the cluster, after you deploy it. For instructions on how to configure DHCP reservations, see your DHCP server documentation.
For more information, see the VMware Tanzu Kubernetes Grid 1.2.0 documentation at: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.2/vmware-tanzu-kubernetes-grid-12/GUID-mgmt-clusters-vsphere.html.
Additional DHCP IP Address Requirements During Kubernetes Cluster Upgrades
If you are upgrading any of the Kubernetes clusters that were deployed in VMware Telco Cloud Automation version 1.7, you must have additional free DHCP IP addresses available on the same segment. This is because VMware Tanzu Kubernetes Grid upgrades its nodes in a rolling manner by deploying new nodes and then copying the data from the old node to the new node.
For example, if a five-node (Master + Worker) cluster is consuming five DHCP addresses, the Kubernetes upgrade requires five additional DHCP IP addresses to be successful.
VMware Telco Cloud Automation Public URL Updates
The base public URL for VMware Telco Cloud Automation is https://tca-ip-fqdn. However, the URL path after the FQDN has changed from /hybridity/ui/services-1.0/nfv-director/index.html to /telco/ui/tca-manager/index.html. This does not have any impact if you are accessing the UI through https://tca-ip-fqdn. If you are using the entire URL (including index.html) to access the UI, ensure that you shorten it to access the UI only through the IP or FQDN, for example, https://tca-ip-fqdn.
tuned-rt Under caas_components in the CNF CSAR is Not Supported
VMware Telco Cloud Automation 1.8 enhances the way customers can specify and use the tuned customizations as part of the CNF CSAR. In version 1.7 (and prior), users were requested to provide tuned configurations manually and statically through the database as well as the NFD.yaml. Starting with version 1.8, users can specify the tuned configurations and profiles as a separate file and have those present as part of the CSAR itself. Specifying tuned-rt under caas_components is not required and not supported anymore. Any existing CNF CSARs that use tuned must be updated per the following steps to comply with VMware Telco Cloud Automation 1.8.
- Unzip the CSAR: unzip <xyz>.csar
- Perform the following modifications:
  - Add a new file called realtime-variables.conf under the ./Artifacts/scripts folder. The contents of the file must be:
    isolated_cores=2-{{tca.node.vmNumCPUs}}
  - Add the following file_injection section parallel to additional_config within Definitions/VNFD.yaml (present at two places):
    file_injection:
      - source: file
        content: ../Artifacts/scripts/realtime-variables.conf
        path: /etc/tuned/realtime-variables.conf
  - In Definitions/VNFD.yaml, change the descriptor_id: to a new ID (present at two places).
  - In Definitions/VNFD.yaml, delete the following lines under the caas_components section (present at two places):
    - name: tuned-rt
      type: cni
- Create the new CSAR by running the following command:
  zip -r <new_name>.csar TOSCA-Metadata/ Definitions/ Artifacts/ VNFD.mf
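After these modifications, the relevant entries in the CSAR layout look like the following (illustrative only; other files in your package remain unchanged):
TOSCA-Metadata/
Definitions/VNFD.yaml                       (new descriptor_id, file_injection added, tuned-rt entry removed)
Artifacts/scripts/realtime-variables.conf   (new file)
VNFD.mf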
VMware Telco Cloud Automation 1.7.1
R144a | 26th November 2020 GA
VMware Telco Cloud Manager 1.7.1 | VMware Telco Cloud Automation Control Plane 1.7.1 | Build 17227327
Fixed Issues
This release fixes the following issues:
- Rate limits imposed by docker.io. Users can now continue to deploy Kubernetes Clusters through VMware Telco Cloud Automation as well as instantiate CNFs on Kubernetes VIMs.
- High memory utilization issue with the VMware Telco Cloud Automation Control Plane (TCA-CP) node when deploying multiple CNFs.
VMware Telco Cloud Automation 1.7
R144 | 13th October 2020 GA
VMware Telco Cloud Manager 1.7.0 | VMware Telco Cloud Automation Control Plane 1.7.0 | Build 17005301
New Features
In this release, VMware Telco Cloud Automation 1.7 supports the following new features:
- Infrastructure Automation
VMware Telco Cloud Automation provides the ability to manage your telecommunication infrastructure through the cloud. Using VMware Telco Cloud Automation, you can now deploy your telco applications on the various central, core, and edge sites across the cloud.
- Kubernetes Cluster Automation
You can now perform the following operations with Kubernetes Cluster Automation:
- Create node pools when configuring a Workload cluster.
- Resize Master and Worker nodes in the Workload cluster.
- Edit cluster configurations.
- Change passwords for cluster nodes.
- Configure Syslogs for Kubernetes clusters.
VMware Telco Cloud Automation allows automatic wiring of standard and SRIOV accelerated workloads only.
- VMware Telco Cloud Automation Control Plane
VMware HCX for Telco Cloud is now renamed to VMware Telco Cloud Automation Control Plane (TCA-CP) and the version numbers of TCA-CP and VMware Telco Cloud Manager are now aligned with the release version (1.7.0).
- Affinity and Anti-affinity Rules
You can now set affinity and anti-affinity rules on virtual machines to control whether or not they share the same host.
- Workflow Referencing
When designing a network function, you can now use workflow referencing to get the default attribute values of each VDU from the JSON file that you upload.
- CNF Performance Monitoring
You can now monitor an instantiated CNF's performance metrics, collect the data for a specified interval, and take corrective actions.
VMware Telco Cloud Automation 1.5
R143a | 31st August 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 16798841 | VMware HCX for Telco Cloud 3.5.3, Build 16798848
Updating the Activation Server
To migrate existing Telco appliances to the new Activation Server, you must upgrade to release R143a. After upgrading, update the Activation Server URL from the HCX Manager UI for the Telco appliances to communicate with the new Activation Server.
Perform the following steps:
- Log in to the HCX Manager UI at https://tca-ip-or-fqdn:9443 as an admin user.
- Click the Configuration tab and click Licensing.
- Under Manage License Keys, click the Edit (pencil) icon against the Activation Server URL. In the Update Activation Server window, the Activation Server URL is updated automatically to https://connect.tec.vmware.com.
- Ensure that your firewall and/or proxy allows connectivity from HCX Manager to https://connect.tec.vmware.com over TCP 443 (a quick connectivity check is shown after these steps).
- As an option, you can update the existing license key with a new one.
- Click Save.
The Activation Server URL is updated.
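If needed, a quick way to verify this connectivity from the appliance shell, assuming the curl utility is available on the appliance (this check is illustrative and not part of the official procedure):
curl -vk --connect-timeout 10 https://connect.tec.vmware.com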
R143 | 18th August 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 16760330 | VMware HCX for Telco Cloud 3.5.3, Build 16760287
New Features
In this release, VMware Telco Cloud Automation supports the following features:
- When designing a Network Function, you can now select the lifecycle management operations to be made available to your users. The list of lifecycle management operations differs for Virtual Network Functions and Cloud-Native Network Functions. All instances using the catalog list only those operations that you have selected.
- You can now enable your users to run a workflow manually.
- You can now upload and run workflows that perform Heal operations on a Network Service.
- You can now optionally enter a prefix when instantiating a Network Function or a Network Service. This prefix helps in identifying the Network Function or Network Service instances in the system. All the tasks related to this Network Function or Network Service are prefixed with this text.
- You can now edit the source files in a catalog and save the catalog as a new version.
16th July 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 16584306 | VMware HCX for Telco Cloud 3.5.3, Build 16584309
Product Support Notice
VMware Telco Cloud Automation 1.5 additionally supports the following software component versions from this release onwards:
Software Component | Version |
---|---|
VMware Cloud Director | 10.1.1 |
VMware NSX-T | 3.0 |
VMware Tanzu Kubernetes Grid | 1.18 |
VMware vSphere | 7.0 |
To view the complete list of software versions that VMware Telco Cloud Automation supports, see the Software Version Support and Interoperability section of the VMware HCX for Telco Deployment Guide.
New Features
In this release, VMware Telco Cloud Automation supports the following features:
CaaS Infrastructure (On tech preview)
- You can now define and customize Kubernetes templates for deploying Kubernetes Clusters.
- Kubernetes Clusters, or VMware Tanzu Kubernetes Grid Clusters, can be deployed on vSphere Clouds through VMware Telco Cloud Automation.
- When deployed on vSphere Clouds, Kubernetes Clusters are auto-registered as Virtual Infrastructure Managers (VIMs).
Harbor as a Partner System
VMware Telco Cloud Automation integrates with Harbor to auto-synchronize Docker images, Helm Charts, and other source repositories such as GitHub. This feature enables you to select repositories easily when instantiating a Cloud-Native Network Function (CNF).
CNF Enhancements
The following enhancements are made for this release:
- Customize a Kubernetes node by defining specific requirements within its descriptor. (On technology preview - Supported only on those clusters that are deployed through VMware Telco Cloud Automation.)
- Create a new catalog from an existing one.
- Set the order of deployment for each Helm Chart within a Network Function.
- Auto-population and simplified selection of Harbor repositories.
- Catalog upgrades.
- CNF instance updates.
- CNF instance upgrades.
- Alarms for CNFs.
Virtual Network Function (VNF) Enhancements
The following enhancements are made for this release:
- Create a new catalog from an existing one.
- Catalog upgrades.
- You can set the order of deployment for each VDU within a Network Function.
Network Service (NS) Enhancements
The following enhancements are made for this release:
- You can now set the order of deployment for each Network Function within a Network Service.
Support for Dark Theme
The VMware Telco Cloud Automation user interface now supports dark theme.
VMware Telco Cloud Automation 1.0
R140 | 20th May 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 16212965 | VMware HCX for Telco Cloud 3.5.3, Build 16246360
New Features
In this release, VMware Telco Cloud Automation supports the following features:
- You can now scale a VNF deployment by selecting a pre-defined instantiation level during VNF instantiation.
- When instantiating a Network Service, you can now select pre-instantiated VNFs and CNFs.
- Your Network Services can now contain Nested Network Services within them. Simply drag and drop a Network Service in the Network Service Designer.
- You can now instantiate and configure Network Services that include pre-instantiated Nested Network Services.
R139 | 14th April 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 16019559 | VMware HCX for Telco Cloud 3.5.3, Build 16019552
New Features
In this release, VMware Telco Cloud Automation supports the following features:
- Fault Management (Alarms) for Network Services
- Performance Management for a Virtual Network Function (VNF)
- The following task operations are added to a VNF instance:
- Start and Stop a VNF instance
- Forceful Stop and Graceful Stop a VNF instance
R138 | 2nd April 2020 GA
VMware Telco Cloud Manager 3.5.3, Build 15933381 | VMware HCX for Telco Cloud 3.5.3, Build 15933704
This is the first release of VMware Telco Cloud Automation.
Resolved Issues
The following issues were resolved in VMware Telco Cloud Automation 1.8.
- Excessive logging of errors in the app.log of the VMware Telco Cloud Automation Control Plane appliance.
For the Harbor 1.x systems connected to VMware Telco Cloud Automation, excessive logs are recorded in the app.log file of the VMware Telco Cloud Automation Control Plane appliance. The app.log records errors such as:
ERROR c.v.v.h.s.r.h.CollectHarborInventoryJob- Failed to fetch repos
If the repositories from the Harbor systems are synced, you can ignore these error messages.
- Configuration wizard hangs when setting up the VMware Cloud Director environment.
When setting up a VMware Cloud Director environment that is backed through NSX-T, the initial configuration wizard hangs at the VMware vCenter Server configuration page.
Workaround: Navigate to the Administration tab, import the VMware vCenter Server certificate manually, and continue.
- vRealize Orchestrator with self-signed certificate error.
Adding vRealize Orchestrator with a self-signed certificate fails with a certificate error.
Workaround: Navigate to the Administration tab, import the vRealize Orchestrator certificate manually, and then continue.
- User-specified Location and Switch configurations are not applicable to Cell sites.
While adding or editing a Cell site, VMware Telco Cloud Automation does not use the Location and Switch configuration that you provide. It creates and uses pre-defined configurations.
- Scale-out fails during SRIOV customization on new nodes.
Scaling out fails during SRIOV customization on a new node if the node pool is customized with a different kernel version than the default one.
- Technical support bundle does not capture logs.
The technical support bundle does not capture logs under /common/logs/tkg.
- Heal Recreate does not work with VNFs that have affinity/anti-affinity policies defined.
The Heal Recreate feature does not work with VNFs that have the affinity/anti-affinity policies defined.
- Heal Recreate does not consider VDU ordering.
VDU Ordering states the order to create the virtual machines within a VNF. VMware Telco Cloud Automation creates the virtual machines in the specified order during instantiation. However, when you perform a Heal Recreate, VMware Telco Cloud Automation creates all the virtual machines together.
Known Issues
The following known issues are for VMware Telco Cloud Automation 1.8. The known issues are grouped as follows.
- Appliance Management
- Infrastructure Automation
- Cluster Automation (CaaS Infrastructure)
- Technical Support
- Harbor Registration
- RBAC
- Catalog Management
- CNF Lifecycle Management
- VMware Integrated OpenStack
- You cannot apply static routes to certain subnets.
You cannot apply static routes to the subnets 172.17.0.0/16 and 198.18.0.0/24 on VMware Telco Cloud Automation Manager and VMware Telco Cloud Automation Control Plane appliances.
- Incorrect version displays during one of the intermediate upgrade steps.
During the upgrade, the VMware Telco Cloud Automation Appliance Manager UI displays the version as 4.8.0 instead of 1.8.0, for example: Steps to upgrade from 1.7.0 build <> to 4.8.0 build <>
This is a known issue and appears only in an intermediate step. After a successful upgrade, the correct version 1.8.0 is displayed.
- DNS records for all components are mandatory for configurations and deployments to work correctly.
If you do not create DNS records when deploying certain appliances or components, then those configurations are not applied.
Workaround: Ensure that you create DNS records for all appliances or components, even if you do not deploy them.
- Multiple Distributed Virtual Switches (DVS) are not supported while connecting cell site Hosts.
Currently, there is no option to select a DVS for networks.
- NSX overlay network created by Infrastructure Automation uses an auto-assigned Transport Zone.
While creating an overlay application network, the Transport Zone (TZ) is auto-assigned. This causes issues if the wrong TZ is picked.
Workaround: Create the NSX segment manually.
- 1.17.3/1.18.2 Kubernetes template clusters require a change to the Photon repository path on the cluster nodes.
Update the Photon repository path.
Workaround: Update the tdnf repo of the cluster VM to point to packages.vmware.com.
- Kubernetes Cluster node management IP address issue.
Kubernetes Cluster node management IP Address might not be visible in VMware Telco Cloud Automation after NodePool / Cluster upgrade (Networking data path is not impacted by this issue).
Workaround: This is a known issue of vSphere-cloud (vSphere Cloud Controller Pod). Delete the appropriate vSphere-cloud pod from the corresponding cluster.
- When deleting a node pool in "create failed" state, it fails with "Node Pool Deletion timed out for node".
Node Pool creation might time out in the TCA UI in certain cases. The task is not complete on vSphere, so deleting the node pool might cause conflicts and result in errors.
Workaround: Delete the node pool again.
- There is a limitation to use a specific PCI passthrough device for PTP when multiple PCI Passthrough devices are enabled on the ESXi host.
When multiple PCI passthrough devices are enabled on the ESXi host, the user cannot dedicate one of them to PTP and use the remaining devices for other purposes.
Workaround:
- Enable only one PCI passthrough on ESXi.
- Create a cluster or node pool on a resource pool consuming ESXi.
- Instantiate the NF that will consume the passthrough device.
- Enable other PCI passthrough devices on ESXi.
- NIC ordering issues with the Network Functions that are onboarded in VMware Telco Cloud Automation 1.7 or earlier.
On Network Functions with node pools having multiple VMXNET networks that were onboarded in VMware Telco Cloud Automation version 1.7 or earlier, there are some NIC ordering issues.
Workaround:
- Terminate or delete the Network Function.
- Delete the node pool and re-create it with the network labels.
- Instantiate the Network Function again.
- Unable to instantiate a Network Function due to non-availability of SRIOV PFs even though SRIOV PFs are available on the host.
Root Cause: The cluster was created from VMware Telco Cloud Automation, and the user enabled SRIOV on the host(s) after Kubernetes cluster creation.
Workaround: Run the following API on VMware Telco Cloud Automation Manager and re-instantiate the Network Function.
PUT: /hybridity/api/infra/k8s/clusters/<workloadclusterId>/esxinfo
{
}
- If you are working with AMD-based hardware and want to configure VFIO-PCI drivers through Node Customization, change the VM hardware version by running the following Global Settings API on VMware Telco Cloud Automation Manager.
This is a global setting and will apply to all deployments and customizations going forward.
PUT: https://<TCA-Manager-IP>/admin/hybridity/api/global/settings/InfraAutomation/vfioPciHardwareVersion
{
  "value": "18"
}
- Generating a tech support log bundle from a TCA-CP node takes more than 20 minutes.
Tech Support bundle generation on TCA-CP takes more than 20 minutes when there are a lot of Kubernetes clusters deployed.
Workaround: Uncheck the option to collect Kubernetes Logs while generating the tech support bundle.
- Multiple Harbor systems per VIM are not supported.
VMware Telco Cloud Automation does not support registering multiple Harbor systems with a single VIM.
- Harbor partner entry does not automatically associate with Kubernetes cluster if Harbor information is provided during cluster creation.
During CNF instantiation, the Harbor repository is not listed in the drop-down menu even though the Harbor configuration was provided during Kubernetes cluster creation.
Workaround:
- Go to Partner System > Select the Harbor system > Modify Registration
- Select the appropriate Kubernetes VIM from the VIM Association tab.
- Infrastructure LCM privilege user cannot delete a Kubernetes cluster without the Virtual Infrastructure Admin privilege.
A user with only Infrastructure LCM Privilege cannot delete a Kubernetes cluster deployed through VMware Telco Cloud Automation.
Workaround: Add the Virtual Infrastructure Admin privilege.
- Edit of Workload cluster fails if user does not have access to the corresponding Management cluster.
A VMware Tanzu Kubernetes Grid Admin user is not able to edit the Kubernetes cluster configuration when the advanced filter is applied to the Kubernetes cluster instance name.
Workaround: To manage a single Kubernetes Workload cluster as a VIM, a user must have at least Read-Only access to the corresponding Kubernetes Management cluster. For VIM Read-Only access on the Kubernetes Management cluster, the user can create a new permission with the Virtual Infrastructure Auditor role and the Management cluster as the VIM filter.
- CNF Instantiation with Node Customization does not require Infrastructure LCM / Virtual Infrastructure Admin privileges.
While instantiating a CNF with node customization, a user with only VIM Consume privileges can perform this operation.
Ideally, Infrastructure LCM / Virtual Infrastructure Admin privileges should be required.
- Designing a new catalog with source files from 1.7 fails.
VMware Telco Cloud Automation 1.8 has aligned with the SOL001 way of specifying the NFD descriptor file. In version 1.7 (and prior), interfaces, infra_requirements, and properties were duplicated and were present at two places for the NF node. Starting with 1.8, these are required to be specified only under the node_templates section. The updated Catalog Designer in 1.8 will only design catalogs in this manner.
This behavior is also occasionally seen when editing catalogs defined in 1.7 (or prior) and saving them as a new catalog.
Error observed: descriptor_id should be same in substitution_mappings and node_templates
Workaround:
There are two possible workarounds:
- Design a new CSAR from the beginning through the UI (drag and drop the components, and so on).
- Create a new CSAR outside of VMware Telco Cloud Automation by unzipping the CSAR, editing the source files, and re-zipping it as a new CSAR.
- CNF instantiation fails when CNF CSAR contains a tuned.conf file larger than 1 KB
While instantiating a CNF which has Node Customization enabled with tuned configurations, if the packaged tuned.conf file is greater than 1 KB in size, the instantiation fails.
Workaround:
You can increase the Java stack size allocation for the VMware Telco Cloud Automation Manager appliance to resolve this issue.
- SSH to the TCA node using admin and then switch to root.
- Edit the file /etc/systemd/app-engine-start.
- Change the following line by adding the -Xss4m option (a one-line sketch of this edit appears after these steps):
JAVA_OPTS="-Xmx2048m -Xms2048m -XX:MaxPermSize=512m...
to
JAVA_OPTS="-Xss4m -Xmx2048m -Xms2048m -XX:MaxPermSize=512m...
- Restart the Application Service (app-engine) on the VMware Telco Cloud Automation Manager.
  - Either run the command as root: systemctl restart app-engine
  OR
  - Restart the service through Appliance Management:
    - Log in to https://TCA-ip-or-fqdn:9443/ (use admin credentials).
    - Navigate to the Appliance Summary tab.
    - Under Hybridity Services, select the STOP action against Application Service.
    - Once the status updates to STOPPED, select START.
    - Wait until the status changes to RUNNING.
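A hedged one-line version of the JAVA_OPTS edit above (back up the file first; this assumes the existing line starts exactly as shown in the steps):
cp /etc/systemd/app-engine-start /etc/systemd/app-engine-start.bak
sed -i 's/JAVA_OPTS="-Xmx2048m/JAVA_OPTS="-Xss4m -Xmx2048m/' /etc/systemd/app-engine-start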
- Instantiating a 1.7 CNF with tuned customizations fails with the error: 'Node Pool customization is blocked'
VMware Telco Cloud Automation 1.8 changes the way tuned requirements and customizations are specified in the CNF CSAR (see the Upgrade and Compatibility Considerations section).
Workaround:
- Unzip the CSAR: unzip <xyz>.csar
- Perform the following modifications:
  - Add a new file called realtime-variables.conf under the ./Artifacts/scripts folder. The contents of the file must be:
    isolated_cores=2-{{tca.node.vmNumCPUs}}
  - Add the following file_injection section parallel to additional_config within Definitions/VNFD.yaml (present at two places):
    file_injection:
      - source: file
        content: ../Artifacts/scripts/realtime-variables.conf
        path: /etc/tuned/realtime-variables.conf
  - In Definitions/VNFD.yaml, change the descriptor_id: to a new ID (present at two places).
  - In Definitions/VNFD.yaml, delete the following lines under the caas_components section (present at two places):
    - name: tuned-rt
      type: cni
- Create the new CSAR by running the following command:
  zip -r <new_name>.csar TOSCA-Metadata/ Definitions/ Artifacts/ VNFD.mf
- Availability Zones are not listed when creating compute profiles for VMware Integrated OpenStack (VIO)-based clouds
When creating compute profiles for VIO environments, the Availability Zones are not listed.
Workaround: Restart the Application Service on the corresponding VMware Telco Cloud Automation Control Plane (TCA-CP) appliance.