VMware Telco Cloud Automation 3.2 | 30 Sep 2024 | VMware Telco Cloud Automation: 24287525 | VMware Telco Cloud Automation (OVA): 24287523 | Airgap: 24287597 |
This section lists the new features added in VMware Telco Cloud Automation 3.2.
VMware Telco Cloud Automation 3.2 enhances / adds interoperability support for the following:
Product | Supported Versions
---|---
VMware vCenter Server | 8.0u3
VMware vSphere | 8.0u3
VMware NSX-T | 4.2
VMware Tanzu Kubernetes Grid | 2.1.1, 2.5.2 (Kubernetes 1.24.10, 1.26.14, 1.27.15, 1.28.11, 1.29.6, 1.30.2)
VMware Cloud Director | 10.6
VMware Aria Automation Orchestrator | 8.18
VMware Aria Operations | 8.18
VMware Aria Operations for Logs | 8.18
Avi Load Balancer | 30.2.1
Avi Kubernetes Operator | 1.12.2
Harbor | 2.6.3, 2.7.4, 2.8.4, 2.9.1, 2.10.2 (Note: Harbor 2.8 onward supports OCI-based charts and images only)
VMware ESXi - Additional Patch versions of VMware ESXi are supported. Example: 8.0u3b.
VMware vCenter Server - Additional Patch versions of VMware vCenter Server are supported. Example: 8.0u3b.
VMware NSX - Additional Patch versions of VMware NSX are supported. Example: 4.2.0.2.
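Because Harbor 2.8 onward serves Helm charts only as OCI artifacts (see the interoperability table above), chart repositories must be addressed with the oci:// scheme. A minimal sketch with Helm 3.8 or later, assuming a hypothetical Harbor host harbor.example.com with a project named library and a placeholder chart name:
# Log in to the Harbor OCI registry (placeholder host)
helm registry login harbor.example.com
# Push a packaged chart as an OCI artifact (placeholder chart package)
helm push mychart-0.1.0.tgz oci://harbor.example.com/library
# Install the chart directly from its OCI reference
helm install mychart oci://harbor.example.com/library/mychart --version 0.1.0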
IaaS Automation for ESXi Upgrade
Workflow Hub can be used to execute a Workflow that upgrades the ESXi hypervisor as part of a TCP Infrastructure upgrade.
Workflow Hub can be used to get a Pre-Assessment report on the status of vCenter Cluster / ESXi Host(s) before triggering the upgrade.
Workflow Hub can be used to generate an Overall Summary of the success and failure of upgrades.
Scalable fan-out architecture to support multiple flavors of TCP deployments.
Supports Harbor version 2.10.2.
VMware Telco Cloud Automation 3.2 introduces automatic monitoring of Active Directory Endpoints.
Alarms for Broken Connectivity, Untrusted Endpoints, Expired, and About-to-Expire Endpoints.
Automatically monitors the certificate status for Active Directory when TCA-M authentication is configured with AD.
Raises an alarm in TCA-M when an issue that can disrupt TCA functionality is detected with an endpoint certificate:
When connectivity is broken
When a certificate is expiring soon
When a certificate has expired
When a certificate is untrusted
When Certificate Observability detects that a vCenter certificate has been modified, it displays a Modified alert for that vCenter.
Enhanced Skip Level Migration and Upgrade
VMware Telco Cloud Automation 3.2 supports direct upgrade / migration from VMware Telco Cloud Automation 2.3 appliances without the need to go through version 3.0 or 3.1.
Enhanced Multi TKG Support
Single release to fully manage the lifecycle of multiple Kubernetes Versions from 1.24 to 1.30.
Supports 6 Kubernetes versions (1.24.10, 1.26.14, 1.27.15, 1.28.11, 1.29.6 and 1.30.2).
Supports upgrading clusters from v1.24.10 to v1.30.2.
Supports upgrading v1.26.14, v1.27.11 and v1.28.7 clusters from TCA 3.1 to versions supported in TCA 3.2.
Updated dashboard view for workload clusters
Clear and informative error messages for CaaS operational issues.
Introduced centralized management and visibility for Cluster Upgrades
Ability to cancel the creation of Management Clusters
Visibility for the Certificate and health status of Cluster-associated endpoints
Ability to raise Alerts on Certificate status for various Cluster-associated endpoints
Cluster Fallout UX improvements
Helps Platform Administrators and users take faster recovery actions with suggested troubleshooting hints on the portal.
Rehoming of K8s Cluster for Classy Clusters
Rehome (move) a Workload Cluster from one Management Cluster to another of the same version. Updated in this release to include support for Classy (Cluster Class based) Clusters.
DHCP IP Release for Standard Kubernetes Cluster Upgrade
Reduces size of IP Pool required for CaaS LCM in larger Cluster deployments (for K8s 1.27 and above).
Support additional multi-interface configurations on Kubernetes Worker Node Interfaces
Multi-interface configurations such as VLAN Sub Interfaces, GRO, GSO, IPv4ProxyArp and IPv6ProxyArp can now be added via DIP.
NodePolicy view under Workload Clusters for viewing DIP enforced policies
Ensure that you are using the latest ovftool version to upload the templates.
To download the Photon BYOI templates:
Browse to support.broadcom.com.
Log in using your Broadcom credentials.
Ensure that you select the Software Defined Edge group from the drop-down on the top right.
Browse to My Downloads > VMware Telco Cloud Automation.
Expand the VMware Telco Cloud Automation row by clicking on it.
Select the appropriate release number.
Select the Drivers & Tools tab.
Read through and agree to the Broadcom Terms and Conditions.
Find the BYOI template and click on the download icon.
Download the latest BYOI template OVAs that your management and workload clusters run on from the OS and Kubernetes version line.
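The downloaded template OVA can then be uploaded to vCenter with ovftool (see the note above about using the latest ovftool version). A minimal sketch, with placeholder vCenter, datacenter, cluster, datastore, network, and OVA file names; ovftool prompts for credentials when they are not supplied in the locator:
# Upload a BYOI template OVA to a vSphere cluster (all names below are placeholders)
ovftool --datastore=datastore1 --network="VM Network" --diskMode=thin \
  --name=photon-5-kube-v1.30.2 photon-5-kube-v1.30.2.ova \
  vi://vcenter.example.com/Datacenter/host/Cluster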
To download RAN optimized BYOI templates:
Browse to support.broadcom.com.
Log in using your Broadcom credentials.
Ensure that you select the Software Defined Edge group from the drop-down on the top right.
Browse to My Downloads > VMware Telco Cloud Automation.
Expand the VMware Telco Cloud Automation row by clicking on it.
Select the appropriate release number.
Select the Drivers & Tools tab.
Read through and agree to the Broadcom Terms and Conditions.
Download the OVA for RAN optimized Photon BYOI Template for Kubernetes version 1.30.2.
The following integrations / features are marked as deprecated in VMware Telco Cloud Automation 3.2:
TCA NF/NS workflow schemas 2.0 and 1.0
Kubernetes v1.24.10
DPDK igb_uio and other out-of-kernel tree drivers (all versions)
vfio-pci will continue to be supported.
chartmuseum charts
helm3 binary within the BYOI templates
Photon3 OS for TKG Workload Clusters
VIM / Endpoints
VMware vCenter 7.0u3 and VMware vSphere 7.0u3.
VMware NSX or VMware NSX-T (3.x, 4.0.x, 4.1.x).
VMware NSX-v Manager (all versions).
VMware Integrated Openstack (all versions).
The following features have been permanently discontinued starting with VMware Telco Cloud Automation 3.2:
Native integration with AKO and AKOO (Avi Kubernetes Operator and AKO Operator).
TKG Single Node Cluster.
Issue 3362102: [Photon][ipv6-proxy]nodeconfig failed due to cannot sync with photon5 repo inside proxy based cluster.
Issue 3284154: Management cluster creation failing with error "unable to install kapp-controller to bootstrap cluster".
Issue 3362048: PVC based on nfs-client is stuck at pending in upgraded env from 2.3 to 3.1.
Issue 3393264: Cluster Upgrade wizard → Include Node Pool toggle button gets reset after individually selecting Templates.
Issue 3393278: Edit dualstack workload cluster shows IP family IPv4 which blocks 1.28.4 to 1.28.7 upgrade.
Issue 3360361: Photon 5 do not identify the 'noproxy' string if space separated list is given.
Issue 3354663: The Fluent-bit add-on is in a crash loopback state on a Dual-stack & IPv6 workload cluster with TKG version greater than or equal to 2.4.0.
Issue 3389197: If Management cluster 1.24 upgrade is stuck due to missing tkg template on VC, it takes ~4 hours to timeout and then User can Retry.
Issue 3361594: [cert-obs]: connected endpoints - upon successful upgrade from tca 3.0 to 3.1, not seeing the observability for vRLI.
Issue 3389081: After migrating from 2.3 to 3.1.1 , the existing Airgap server and Harbor shows Disconnected under Connected Endpoints tab on TCA Manager.
Issue 3367899: If compute cluster domain(s) exist in TCA 2.3.x Infrastructure Automation, migration to TCA 3.0/3.1 will not be supported.
Issue 3385619: User is able to delete VIM even when a Git Configuration / Git based CNF is associated with it.
Issue 3383531: No validations are performed when registering a Git repo to Partner systems for the URL, username and password, or token. No error is thrown in case of invalid inputs.
Issue 3387959: No support for logging GitOps related events except for Partner system registration in the Audit Logs.
Issue 3366953: For some workflows that are not created via the WFH drag and drop UI, the edit page could show wrong data.
Issue 3429619: When changing the legacy cluster variables controlPlanePowerOffMode/nodePoolPowerOffMode, the ControlPlane/NodePool status shown in the TCA UI is not consistent with the backend status. The TCA UI might show the status Provisioned while the backend is still Provisioning.
Workaround:
For the real ControlPlane status, check the content of Configuration and Control Plane – Conditions – "VIEW MORE DETAILS" – TcaKubeControlPlane resource status.
For the real NodePool status, check the content of Node Pools – "VIEW MORE DETAILS" – TcaNodePool resource status.
Issue 3422496: A node pool with an NF instantiated is upgraded to TCA 3.2. After upgrading the management cluster, the management cluster add-ons, and the workload cluster add-ons, the node pool is edited with an operation that is not related to a secondary interface change (for example, adding labels or scaling out/in). On the cluster Policies tab, a <nodepoolname>-policy-nf entry is created and gets stuck at Provisioning with the reason "NotReadyForReconcile". This is because there is no customization change, which holds the reconcile.
Workaround:
When a customization change is made on the node pool, it automatically reconciles to Normal.
1. Perform an NF Upgrade or NF re-instantiation.
2. Or, make a secondary interface change (for example, add/remove/update secondary interfaces from the node pool's devices section, or choose a policy profile).
Issue 3436624: After clicking 'Upgrade Cluster' on a workload cluster or a failed management cluster in the CaaS Infrastructure UI, the CaaS Infrastructure page does not show the related task even after clicking the refresh button.
Workaround:
If necessary, switch to the cluster detail page to check the condition.
Issue 3437898: Management Cluster Upgrade from 1.26.8 to 1.27.5 fails.
Upgrade management cluster from v1.26.8 to v1.27.5 failed: failed to upgrade management cluster providers with error:failed to upgrade management cluster providers: failed to apply providers upgrade: failed get cert manager components: failed to list api resources: action failed after 9 attempts: unable to retrieve the complete list of server APIs: autoscaling/v2beta2: the server could not find the requested resource, flowcontrol.apiserver.k8s.io/v1beta1: the server could not find the requested resource. You can retry to upgrade the cluster.
Workaround:
1. SSH to TCA-CP using admin credentials.
2. Exec to kbs-tkg241 pod using command
kubectl exec -it -n tca-cp-cn kbs-tkg241-xxx -- bash
3. Switch to MC k8s context using:
kubectl config use-context managementcluster-context-name
4. Delete apiservices : "flowcontrol.apiserver.k8s.io/v1beta1" and "autoscaling/v2beta2" using:
kubectl delete apiservice v2beta2.autoscaling
kubectl delete apiservice v1beta1.flowcontrol.apiserver.k8s.io
5. Retry MC upgrade from TCA UI.
Issue 3435207: Releasing the DHCP IP for a standard workload cluster on K8s 1.27 and above may not work when the DHCP server IP address belongs to the same subnet as the K8s cluster node VMs, for example, a K8s cluster deployed on an NSX segment with a Segment DHCP Server. This is because the MAC address of the DHCP server is required to send the DHCP release request under this condition, but the ARP cache expires in 60 seconds.
Workaround:
Set DHCP lease time or renewal time to less than 60 seconds. This will always keep the MAC address of the DHCP server in the ARP cache.
Note this will increase the DHCP traffic significantly and should be carried out with caution.
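How the lease time is shortened depends on the DHCP server in use (for an NSX Segment DHCP Server it is set in the segment's DHCP configuration). Purely as an illustration, assuming an ISC dhcpd server and a placeholder subnet and address range, a sub-60-second lease could be configured as:
# /etc/dhcp/dhcpd.conf fragment (illustrative only; placeholder subnet and range)
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  # Keep the lease/renewal interval under 60 seconds so the DHCP server's MAC stays in the node ARP cache
  default-lease-time 50;
  max-lease-time 50;
}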
Issue 3437829: Some of the nodepool policy conditions are stuck in Provisioning after a user edits a nodepool to put it into maintenance mode. The state will remain in Provisioning until the nodepool exits Maintenance Mode.
Workaround:
None.
Issue 3433741: Upgrade of a 1.24 workload cluster gets stuck with UpgradePlanGetTBRError if one of the node pools has an old TBR. Once the management cluster has been upgraded, there is no way to edit the node pool to correct the TBR.
Workaround:
Delete the node pool with the old TBR and upgrade the workload cluster. After the upgrade, re-create the node pool.
Issue 3430454: If a NIC is added to an IPv4 Workload cluster via Node Policy, then an IPv4 address is not assigned via DHCP.
Workaround:
TCA 3.2.0 also supports a rawconfig to enable DHCP on an interface whose name does not start with "tkg". The rawconfig to enable DHCP should look like the following:
- type: networkd.network
  config:
    - Network.DHCP=yes
    - Network.IPv6AcceptRA=no
Issue 3389196: Retry of management cluster upgrade from 1.24 to 1.25 failed: "Unable to upgrade k8s cluster with error forbid to operate because the status of cluster is Upgrading".
Workaround:
Restart the KBS pod to terminate the running tanzu CLI, then retry the cluster upgrade:
1. Log in to TCA-CP with the admin account via SSH or console.
2. run "kubectl rollout restart deploy kbs-tkg220 -n tca-cp-cn".
3. Retry the cluster upgrade from TCA.
Issue 3436214: VM IP is missing after node customization. On checking in vCenter, the web console shows "watchdog: BUG: soft lockup - CPU#* stuck".
Workaround:
Power off and then power on the node pool VM in vCenter Server. The VM should recover.
Issue 3277747: Creating/updating the vsphere-csi addon with multiple storage classes fails; the error message in the addon status is as follows:
capv@e2e-mc-fgvkw-tqtrf [ ~ ]$ kubectl get tka -n e2e-wc3 vsphere-csi -o yaml
status:
conditions:
- lastTransitionTime: "2023-09-05T17:29:53Z"
message: 'Failed to validate control plane e2e-wc3-cp vsphere multi-zone tags:
Failed to find a tag attched to datacenter dual-dc belongs to region tag category
k8s-region: get attached tags Datacenter:datacenter-1001: POST https://dual-vcprime.ipv6.com/rest/com/vmware/cis/tagging/tag-association?~action=list-attached-tags:
401Unauthorized'
reason: AddonInputValidationFailed
severity: Error
status: "False"
type: ValidationReady
errorCode: K8S120402
phase: Configuring
Workaround:
Log in to the Control Plane node of the management cluster and restart the TKO pod.
# kubectl delete pod tca-kubecluster-operator-xxxx-xxxx -n tca-system
Or, wait for half an hour before trying again (this is the default session expiration time for the vSphere REST client).
Issue 3435684: NF instantiation fails; checking the nodepolicy shows the following error:
[{Plugin kernelType is in Failed stage, reason: Error(1525) : rpm transaction failed, lastError: error running command: exit status 245}]
Workaround:
Log in to the target worker node, then reinstall the current active kernel version.
# uname -rsm
# tdnf reinstall --refresh -y linux=<current active kernel version>
For example, if the command "uname -rsm" returns "Linux 6.1.75-1.ph5 x86_64", then run "tdnf reinstall --refresh -y linux=6.1.75-1.ph5".
Issue 3432172: The calico-node pod on a newly created Node Pool node may get stuck with the "Init:RunContainer" error forever.
This is due to a race condition between the calico-node pod and the nodeconfig-daemon pod, which restarts the containerd service.
Workaround:
Take SSH access to the Node Pool node which has the issue.
Fetch the container ID of the "install-cni" container.
Use the container ID from the previous step to start that container.
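A plausible command sequence for the steps above, assuming containerd as the runtime and the crictl CLI available on the node:
# List all containers and fetch the ID of the "install-cni" container
crictl ps -a --name install-cni -q
# Start the container again, using the ID returned by the previous command
crictl start <container-id>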
Issue 3431398: A node pool is created with an additional interface tkg-xxx (DHCPv4 is enabled on tkg-xxx), and the additional secondary network is connected to an isolated DHCPv4 network that has no connection to the management components, only to NFS storage or vsphere-csi. The node gets a DHCP address on the secondary interface, and Calico stops working normally after this. As a result, vsphere-csi fails to resolve the vSAN File Service FQDN and mounting the file volume fails.
Workaround:
Do not configure IPv4 gateway information in the DHCPv4 server, so that the node VM receives a DHCP address and applies it to the node interface without any gateway information.
Issue 3429598: Intermittently, a node pool upgrade or edit may get stuck in the processing state on the TCA UI, even though the nodepool and nodepolicy CRs are in the Provisioned phase.
Workaround:
From the TCA UI edit this nodepool again without changing anything and save it.
Issue 3431884: When the vsphere-csi addon is configured with Network Permissions, the addon status may get stuck at Processing forever after the VC Username or Password of the vsphere-csi addon is changed.
Workaround:
Edit the vsphere-csi addon, remove all Network Permissions, and save. Wait until the addon status is Provisioned, then edit the vsphere-csi addon, add back all Network Permissions, and save.
Issue 3430565: After a management cluster upgrade fails, the cluster will be in the "Not Active" status. If the user then re-triggers the cluster upgrade and it fails at the pre-validation stage, the user cannot click the Retry or Upgrade Cluster button from the cluster operation list.
Workaround:
After fixing the issue that caused the upgrade pre-validation failure, navigate to the upgrade tab on the management cluster page and add a new upgrade.
Issue 3409181: Some pods may get stuck in the Terminating status forever, with the following error in the kubelet log:
Aug 27 08:07:48 e2e-wc3-np1-jmx78-wwvnh kubelet[1140]: time="2024-08-27T08:07:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65aa9eb7_a6f9_4da1_a92e_ecef5a38b124.slice/cri-containerd-acad39d5e10a9cabbf80a5471f13d6e35f1e388524bb24c8b370a457c28b6da7.scope: device or resource busy"
Workaround:
Restart the VM of the Kubernetes node where the Terminating pods run. After this, the Terminating pods will be cleaned up.
Issue 3429495: From the UI, a user cannot deploy a workload cluster without registering the Airgap server as a partner system in advance if the selected management cluster uses an Airgap repository.
Workaround:
Add the Airgap server in the partner system and re-create the workload cluster.
Issue 3423629: When a user adds network adapters via the "Node Policy" step in the add/edit node pool wizard, those network adapters do not appear in the "Node Customization" UI tab.
Workaround:
Click the Workload Cluster, then navigate to the Node Pool.
Click Conditions, then click VIEW MORE DETAILS.
Check the NodePolicy of that particular node pool for the secondary adapters that were added via Node Policy.
Issue 3421824: When a workload cluster with the velero addon is created using the "Copy Spec" function, the copied velero addon does not work due to an unrecognized encoded storage service credential.
Workaround:
Uninstall the failed velero addon and install it again from scratch.
Issue 3412873: When the workload cluster has the Multus addon deployed, a Kubernetes node may sometimes fail to be drained. Some pods running on the node are stuck in the Terminating state, and the Multus pod on the node is running with the following error in the log:
Unable to rotate token: failed to read token file "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
Workaround:
Use 'crictl stop <multus container name>' or 'kubectl delete pod -n kube-system <multus pod name>' to force the multus container to restart. Then wait a while and the node will be drained successfully.
Issue 3413008: When the workload cluster uses an NSX-T backend network, and an NSX-T Gateway (Tier-0 or Tier-1) resides in the datapath between the cluster and the NFS storage server, the NFS client addon fails to deploy. This is because the NFS storage server by default has the secure option enabled, which only allows NFS client source TCP ports 1-1024; however, NSX-T Gateways enact stateful Network Address Translation, which creates a new stateful TCP session after address translation, so the TCP source port is changed to a value above 1024.
Workaround:
Modify the NFS storage server export options to "insecure", or follow the workaround described in the NSX-T KB: https://knowledge.broadcom.com/external/article?legacyId=71104.
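For the first option, as an illustration only (placeholder export path and client network; the exact entry depends on your NFS server), the insecure option can be added on a Linux NFS server as follows:
# /etc/exports entry with the "insecure" option (placeholder path and network)
/export/tca 192.168.10.0/24(rw,sync,no_subtree_check,insecure)
# Re-export the shares without restarting the NFS server
exportfs -ra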
Issue 3284221: Management cluster creation can fail because the vSphere Folder resource is resolved to multiple resources in KBS. The root cause is that multiple folders with the same name exist under different parent folders in vCenter, but currently TCA handles only the resource name rather than the resource's absolute path; the issue then occurs when KBS attempts to create a VM in the vCenter cloud.
Workaround:
Users should not select a VM folder with a duplicate name within the specified vSphere Cluster when creating a management cluster.
Issue TEAC-17962: The CaaS Operation could encounter an error, "failedToLoginMaxUserSessionCountReached," indicating that the maximum session count has been exceeded on vCenter Server.
Cannot log into vCenter 'https://10.202.215.1' with provided credentials: {"status":"failure","statusCode":503,"details":"","result":{"type":"com.vmware.vapi.std.errors.service_unavailable","value":{"error_type":"SERVICE_UNAVAILABLE","messages":[{"args":["550","550","[email protected]"],"default_message":"User session count is limited to 550. Existing session count is 550 for user [email protected].","id":"com.vmware.vapi.endpoint.failedToLoginMaxUserSessionCountReached"}]}}}
Logging in to vCenter Server shows that k8s-csi-useragent and k8s-capv-useragent are holding a large number of sessions, leading to the maximum user session count being reached.
Workaround:
Follow https://kb.vmware.com/s/article/88668 to restart the vCenter Server, which will free up the sessions by removing long-lived idle sessions.
Issue 3432346: When a user configures Harbor on a workload cluster as an add-on (and not via partner systems), that Harbor is NOT monitored by certificate observability and hence is not shown on the Connected Endpoints page.
When Harbor is added as an add-on, its inventory is not collected by TCA-CP, so the certificate observability service running on TCA-CP is not aware of the Harbor instance.
Workaround:
Register the harbor as a partner system for the endpoint to be monitored by certificate observability.
Issue 3423785: Error: "Precheck failed. One of the mandatory disks (lvm_snapshot) required for update is not present." This is a known CAP issue.
Workaround:
Take a VM snapshot in VC before the upgrade. If any failure is hit during the upgrade, revert to the snapshot and redo the upgrade.
Inside the Airgap appliance, mount the volume and redo the upgrade. To mount the volume:
mount /dev/mapper/vg_lvm_snapshot-lv_lvm_snapshot /storage/lvm_snapshot
Issue 3434691: While switching the authentication mechanism from VC to AD or from AD to VC, no warning is displayed. Users need to be careful while performing this action.
Workaround:
None.
Issue 3434089: An inappropriate error is thrown if an RBAC user tries to instantiate a VNF on a VIM without having the privilege to do so (the user can see the VIM because of a Parent-Child relationship).
Workaround:
None.
Issue 3436972: For an RBAC user associated with multiple permissions, where one permission filters based on tags and the other permission has no filters, a logical OR is applied between the permissions, whereas it should be a logical AND.
Workaround:
While configuring permissions, set the filters carefully so that all the required resources are visible, or do not define filters in any permission if the requirement is to see all the resources.
Issue 3437149: The Upgrade workflow fails with a 401, message='Unauthorized' exception (this occurs because the token has expired).
Workaround:
Restart the Infra LCM service from the TCA-CP appliance manager > Appliance Summary.
Log in to WFH from the TCA UI and navigate to Workflow Hub > Runs.
Point to the failed workflow and execute Re-run using the existing payload (the user can edit the payload if required).
Issue 3420722: The Assessment workflow cannot be run using a vCenter service account that has only vLCM permissions.
Workaround:
The TCA Infra LCM service currently supports two workflows, namely Assessment and Upgrade. While running these workflows, several operations are performed by invoking APIs on the vCenter Server to obtain the necessary data. The user with which the vCenter session is created needs a certain set of privileges in order to execute these workflows end to end successfully.
All privileges available under the following categories should be selected; this list will be refined until only the minimal privileges are finalized.
Any user-defined role should be cloned from the existing "Read-only" role.
The privileges needed to perform different operations through the LCM service are listed below. The categories listed below, with all of their privileges, should be selected for the role assigned to the user account.
Certificates
VMware vSphere Lifecycle Management
Issue 3426230: If the TCA-CP or infra-lcm-spoke pod gets restarted, TCA-M reports the upgrade request as a failure after its time-out period. The same applies if the TCA-M or infra-lcm-hub pod restarts.
Workaround:
Log in to WFH from the TCA UI and navigate to Workflow Hub > Runs.
Point to the failed workflow and execute Re-run using the existing payload (the user can edit the payload if required).
Issue 3438886: Tenant login fails if the AD organizational unit has more than 1000 users.
Workaround:
Create an AD organizational unit with fewer than 1000 users. Reconfigure the AD details in Appliance Management or update the Tenant IDP in the TCAM Console, and then retry the login.
Issue 3438872: Cannot add a group to a permission without editing the AD details in Appliance Management.
Workaround:
Log in to the Appliance Management UI and edit the AD user SSO details (re-enter the password and Save).
Try validating the group in the TCAM Console UI now; validation works.
If the issue is with updating the Tenant IDP, update the Tenant IDP in the TCAM Console UI as System Admin.
Issue 3438967: If AD has a misconfigured user, then configuring AD or adding a Tenant IDP fails with an Internal Server Error.
Workaround:
Rectify the misconfigured user in the AD Server. Then update the AD configuration in the Appliance Management UI or the Tenant IDP in the TCAM Console.
Issue 3353984: TCA DHCP-based IPv6 or dual-stack deployments do not get an IPv6 address.
Workaround:
None.
For IPv6 and dual-stack deployments, TCA prefers static IPs.
Issue 3327569: TCA prefers IPv4 addresses over IPv6. This behaviour causes the following issues:
If an IPv6-only TCA appliance receives both an IPv4 and an IPv6 address from the DNS server in response to a lookup of its FQDN, TCA uses the IPv4 address and the TCA appliance becomes inaccessible.
TCA VIM registration with an IPv6 address does not work.
Workaround:
When using an IPv6-only TCA appliance, make sure the DNS server has an IPv6-only record for the TCA FQDN.
TCA does not support VIM registration using an IPv6 address; instead, use an FQDN with an IPv6-only DNS record.
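To illustrate the first point, with a placeholder FQDN tca.example.com, the DNS record set for an IPv6-only appliance can be checked as follows:
# Should return the appliance's IPv6 address
dig +short AAAA tca.example.com
# Should return nothing for an IPv6-only deployment
dig +short A tca.example.com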
Issue 3433107: While trying to import an airgap bundle, one of the steps is to copy the airgap exported bundle to /photon-repo. Copying this bundle to /photon-repo fails for both admin and root users.
Workaround:
Refer to "Copying airgap export bundle to /photon-repo on airgap server fails" for more information.
Issue 3440277: An inappropriate error is thrown when an invalid username and password are provided while registering Git to the Partner system.
Workaround:
None.
OpenSSH is updated to address CVE-2024-6387
Updated for VMware Telco Cloud Automation and Airgap appliances.
STIG Compliance
VMware Telco Cloud Automation appliances (TCA-M and TCA-CP) are secured by following the Security Technical Implementation Guide (STIG) guidelines for Photon OS.
STIG Exceptions and Compensating Controls for Photon OS in TCA Appliances have been added to the VMware Telco Cloud Automation Security Configuration Guide.