This section provides instructions to instantiate the Helm charts as CNFs.
Instantiate the tcsa-init CNF
Before instantiating the Helm charts as CNFs, ensure that you create a VMware Tanzu Kubernetes Grid workload cluster. For more information, see Deploying VMware Tanzu Kubernetes Grid Workload Cluster.
- Navigate to the Network Function catalog and instantiate the tcsa-init CNF.
- Enter a Name, for example, tcsa-init, and select the VMware Tanzu Kubernetes Grid workload cluster on which you want to deploy it.
- Enable Auto Rollback in Advanced Settings and click Next.
- Set the Namespace to default and select the default library chart repository, that is, the /chartrepo/library endpoint of the associated registry and click Next.
- The Network Function Properties page appears. Click Next.
- In the Inputs section, retain the default value and click Next.
- Click Instantiate.
Instantiate the admin-operator CNF
- Navigate to the Network Function catalog and instantiate the admin-operator CNF.
- Enter a Name, for example, admin-operator, and select the VMware Tanzu Kubernetes Grid workload cluster on which you want to deploy it.
- Enable Auto Rollback in Advanced Settings and click Next.
- Set the Namespace to default and select the default library chart repository, that is, the /chartrepo/library endpoint of the associated registry and click Next.
- The Network Function Properties page appears. Click Next.
- In the Inputs section, retain the default value and click Next.
- Click Instantiate.
Instantiate the tcsa (VMware Telco Cloud Service Assurance) CNF
- Run the merge-product-yaml-files.sh script as described in Generate a Single Merged YAML File Required for tcsa CNF Instantiation, and copy the resulting merged_yamls.yaml file to your local machine.
- Enter the desired footprint value for the appSpecs.adminApi.helmOverrides.productInfo.footPrint parameter in the merged_yamls.yaml file.
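As a sketch only, the footprint override sits under the nested keys implied by the parameter path above. The surrounding structure of merged_yamls.yaml and the exact footprint strings are assumptions here, so verify them against the file that the script actually generates:

```yaml
# Hypothetical excerpt of merged_yamls.yaml -- only the footprint
# override is shown; the real merged file contains many more sections.
appSpecs:
  adminApi:
    helmOverrides:
      productInfo:
        footPrint: "25k"   # assumption: one of demo, 25k, 50k, 75k, 100k
```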
- Navigate to the Network Function catalog and instantiate VMware Telco Cloud Service Assurance CNF.
- Enter Name. For example, tcsa, and select the VMware Tanzu Kubernetes Grid workload cluster on which you want to deploy it.
- Enable Auto Rollback in Advanced Settings and click Next.
- Set the Namespace to default and select the default library chart repository, that is, the /chartrepo/library endpoint of the associated registry and click Next.
- In the Inputs section, update the following parameters:
- Set registryRootUrl to the same value as the --registry-url argument passed to the installer script in the Push Artifacts to Registry topic.
- Set ingressHostname.product to the virtual IP of your VMware Tanzu Kubernetes Grid cluster.
- Set footprint to the VMware Telco Cloud Service Assurance footprint that you are deploying. For example, demo, 25K, 50K, 75K, or 100K.
- (Optional) Set ingressHostname.edgeServices to the IP address to use if you want to access external Kafka.
- Set statusChecker.enabled to check the status of the VMware Telco Cloud Service Assurance CNF. The default value is false.
Note: The statusChecker.enabled parameter is disabled in VMware Telco Cloud Service Assurance because VMware Telco Cloud Automation does not support CNF timeouts.
- Set elasticsearch.retentionInterval to the desired retention period of metrics in Elasticsearch.
Note: By default, the retention interval is 1w. The retention interval values can be 1w, 2w, 3w, 4w, 5w, 6w, or 7w.
- Set appSpecs.elasticsearch.additionalValuesFile to one of the following values: values-retention-1w, values-retention-2w, values-retention-3w, values-retention-4w, values-retention-5w, values-retention-6w, or values-retention-7w.
Select the same retention period as elasticsearch.retentionInterval. For example, values-retention-1w for 1 week. This additional configuration is required to properly configure the retention of metrics in Elasticsearch.
Note: Setting the Elasticsearch retention parameters is optional. By default, the retention interval is 1w for elasticsearch.retentionInterval and values-retention-1w for appSpecs.elasticsearch.additionalValuesFile.
If you want to store the backup and restore data in an AWS cloud, you must enable the following parameters in the VMware Telco Cloud Automation UI:
- appSpecs.minio.helmOverrides.minio.gateway.enabled
- appSpecs.minio.helmOverrides.minio.gateway.auth.s3.accessKey
- appSpecs.minio.helmOverrides.minio.gateway.auth.s3.secretKey
- appSpecs.brOperator.helmOverrides.minio.backupBucketName
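Assembled from the parameter paths listed above, the following is a hedged sketch of how these overrides might look in YAML form. The exact value types, bucket name, and key material are placeholders, not values from this guide; confirm the real structure against your chart's values:

```yaml
# Hypothetical override fragment for AWS S3-backed backup and restore.
appSpecs:
  minio:
    helmOverrides:
      minio:
        gateway:
          enabled: true
          auth:
            s3:
              accessKey: "<your-aws-access-key>"   # placeholder
              secretKey: "<your-aws-secret-key>"   # placeholder
  brOperator:
    helmOverrides:
      minio:
        backupBucketName: "<your-backup-bucket>"   # placeholder
```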
- For additionalValuesFile, upload the merged_yamls.yaml file generated using the merge-product-yaml-files.sh script in step 1.
- Click Next.
- Click Instantiate.
- Use the kubectl command to manually check the status of the VMware Telco Cloud Service Assurance CNF by querying the corresponding Custom Resource, tcxproduct:

```
root [ ~/tcx-deployer/scripts ]# kubectl get tcxproduct
NAME                   STATUS            READY   MESSAGE                               AGE
tcsa-210-abe98-7vckg   updateCompleted   True    All App CRs reconciled successfully   41m
root [ ~/tcx-deployer/scripts ]# kubectl get tcxproduct tcsa-210-abe98-7vckg
```
Note: After running the kubectl command, wait until the following instantiation message is displayed.
- The deployment time for VMware Telco Cloud Service Assurance can vary based on scale, so run the kubectl get tcxproduct <instance-name> -w command until the deployment completes.
- A successful instantiation shows the following fields:
- STATUS: updateCompleted
- READY: True
- MESSAGE: All App CRs reconciled successfully
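If you want to script the readiness check instead of watching the columns by eye, the reconciliation fields can be evaluated from `kubectl get tcxproduct <instance-name> -o json` output. The helper below is a hypothetical sketch: the field paths `status.status` and `status.ready` are assumptions inferred from the STATUS and READY columns shown above, so verify them against the JSON your cluster actually returns.

```python
import json

def is_reconciled(cr_json: str) -> bool:
    """Return True when the tcxproduct CR reports a completed update.

    Assumption: the printed STATUS/READY columns map to status.status
    and status.ready in the CR body; confirm the real paths with
    `kubectl get tcxproduct <instance-name> -o json`.
    """
    cr = json.loads(cr_json)
    status = cr.get("status", {})
    return (status.get("status") == "updateCompleted"
            and str(status.get("ready")).lower() == "true")

# Example using the values from the sample output above:
sample = json.dumps({
    "status": {
        "status": "updateCompleted",
        "ready": "True",
        "message": "All App CRs reconciled successfully",
    }
})
print(is_reconciled(sample))  # True
```

Such a helper could be wrapped in a polling loop as a stand-in for the `-w` watch when automating the deployment.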