To configure VMware Telco Cloud Service Assurance with a public network, update the parameters in the deploy.settings and values-user-overrides.yaml files.

Note: Before deploying VMware Telco Cloud Service Assurance on an AWS public network, ensure that you configure the EKS cluster. For more information, see Prerequisites for Deploying VMware Telco Cloud Service Assurance on AWS.
  • To obtain the <config> for EKS cluster, run the following command inside the Deployment Container:
    eksctl utils write-kubeconfig --cluster=<cluster-name> --profile <your-profile-ID> 
    --region <your-region> --kubeconfig=~/.kube/<config>
    Replace cluster-name, your-profile-ID, and your-region with your EKS cluster name, AWS profile ID, and AWS region.
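For illustration, the command can be composed from shell variables before you run it. The cluster name, profile ID, and region below are hypothetical placeholders, not values from this guide:

```shell
# Hypothetical values -- replace with your own EKS cluster name, AWS profile ID, and region.
CLUSTER_NAME="tcsa-eks-cluster"
PROFILE_ID="tcsa-admin"
REGION="us-east-2"

# Compose the eksctl command; echo it so you can review it before running.
CMD="eksctl utils write-kubeconfig --cluster=${CLUSTER_NAME} --profile ${PROFILE_ID} --region ${REGION} --kubeconfig=~/.kube/${CLUSTER_NAME}.config"
echo "${CMD}"
```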
  • Update VMware Telco Cloud Service Assurance product details and deployment timeout period.
    # Name of the product to be deployed/installed
    PRODUCT=tcsa
    # Product helm config
    
    # allowed values for footprint "demo", "25k", "50k", "75k", "100k", "125k", "150k", "175k", "200k"
    FOOTPRINT=
    # Time to wait for the deployment to complete, in seconds. Maximum timeout is 3600 seconds. 
    # For deployments above the 50k footprint, set the deployment timeout to a higher value in seconds.
    PRODUCT_DEPLOYMENT_TIMEOUT=1800
    
    # ========= Deployment modes and actions ========== #
    # Options are "init", "deploy-all", "deploy-apps" or "cleanup"
    DEPLOYMENT_ACTION="deploy-all"
    ## Set this to '--force' if you want to cleanup by force without waiting for user confirmation
    DELETE_ARGS=
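Because the deploy.settings entries are plain shell variable assignments, a filled-in product section can be sketched as follows. The 25k footprint and default timeout here are illustrative choices, not recommendations:

```shell
# Illustrative deploy.settings product section for a hypothetical 25k deployment.
PRODUCT=tcsa
FOOTPRINT="25k"
# 1800 seconds is the default; footprints above 50k need a higher value (maximum 3600).
PRODUCT_DEPLOYMENT_TIMEOUT=1800
DEPLOYMENT_ACTION="deploy-all"
# Leave empty unless you want forced cleanup without user confirmation.
DELETE_ARGS=
echo "Deploying ${PRODUCT} with footprint ${FOOTPRINT}"
```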
  • Update deployment location and AWS configuration.
    • Set AWS_REGION to the same region as your EKS cluster.
    • Set AWS_PROFILE to the profile ID associated with your account.
    # ========== Deployment Location ========== #
    # LOCATION can be "on-prem", "aws", "tkg", or "azure"
    LOCATION=aws
    
    # ======== Cloud specific configuration ======
    # AWS specific configuration
    AWS_REGION=<your-region>
    AWS_PROFILE=<your-profile-ID>
  • Update the registry details for AWS.
    • For the REGISTRY_URL, use the same region as your EKS cluster.
      Note: The username and password are picked up automatically if you use the docker login command for the ECR registry.
    # ========== Registry details ========== #
    # Enclose the REGISTRY_USERNAME and REGISTRY_PASSWORD in single quotes
    REGISTRY_URL=<your-profile-ID>.dkr.ecr.<aws-region>.amazonaws.com/<unique-name-for-your-cluster>/tcx
    REGISTRY_USERNAME=
    REGISTRY_PASSWORD=
    # If the registry uses certificates, path to the certificates file (.crt)
    REGISTRY_CERTS_PATH=
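For reference, the ECR registry URL is composed from your account (profile) ID, region, and a cluster-unique name. A sketch with made-up values; the docker login flow mentioned in the note above is shown as a comment only:

```shell
# Made-up values for illustration -- substitute your own.
ACCOUNT_ID="123456789012"
REGION="us-east-2"
CLUSTER_NAME="tcsa-eks-cluster"

# Compose the REGISTRY_URL in the form deploy.settings expects.
REGISTRY_URL="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${CLUSTER_NAME}/tcx"
echo "${REGISTRY_URL}"

# With a docker login against the ECR registry, REGISTRY_USERNAME and
# REGISTRY_PASSWORD can remain empty. For example:
#   aws ecr get-login-password --region us-east-2 | \
#     docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-2.amazonaws.com
```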
  • Get the public IP (static Elastic IP) and public subnet ID for your cluster VPC.
    1. Get the PublicIpX and AllocationIDX by executing the following command:
      aws ec2 allocate-address --profile <your-profile-ID> --region <your-region>
      {
          "PublicIp": <PublicIpX>,                            <==== Store this for product deployment
          "AllocationId": <AllocationIDX>,                    <==== Store this for product deployment
          "PublicIpv4Pool": "amazon",
          "NetworkBorderGroup": <aws-region>,
          "Domain": "vpc"
      }
    2. Get the PublicIpY and AllocationIDY by executing the following command:
      aws ec2 allocate-address --profile <your-profile-ID> --region <your-region>
      {
          "PublicIp": <PublicIpY>,                           <==== Store this for product deployment
          "AllocationId": <AllocationIDY>,                    <==== Store this for product deployment
          "PublicIpv4Pool": "amazon",
          "NetworkBorderGroup": <aws-region>,
          "Domain": "vpc"
      }
    3. Uncomment and update the PublicIp value in the values-user-overrides.yaml file available in the <TCSA_WORK_SPACE>/product-helm-chart/tcsa directory.
      ingressHostname:
        product: <PublicIpX>
        edgeServices: null
        
      grafana:
        accessIp: <PublicIpX>
    4. Get the public subnet ID by running the following command:
      aws ec2 describe-subnets --profile <your-profile-ID> --region <your-region> --filters Name=tag:alpha.eksctl.io/cluster-name,Values=<your-eks-cluster-name> Name=tag:kubernetes.io/role/elb,Values=1 | jq '.Subnets[0] .SubnetId'
    5. Uncomment and update the values for AllocationID and subnet-ID in the same values-user-overrides.yaml file, as shown in the following code block.
      Note: Get the AllocationIDX and AllocationIDY values from step 1 and step 2.
      eks:
        externalIpAllocationId: <AllocationIDX>
        loadBalancerSubnets:
         - <subnet-ID>
      
      edge:
        # Has to be different from the one set for TCSA dashboard
        externalIpAllocationId: <AllocationIDY>
        # YAML list of subnet
        loadBalancerSubnets:
          - <subnet-ID>
        
      privateNetwork: false
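Rather than copying the values by hand, the two fields can be pulled from the allocate-address JSON output with jq. A self-contained sketch using a saved sample response; the EIP values below are invented for illustration:

```shell
# Sample allocate-address response (values are made up for illustration).
# In practice: aws ec2 allocate-address --profile <your-profile-ID> --region <your-region> > /tmp/eip.json
cat > /tmp/eip.json <<'EOF'
{
    "PublicIp": "3.21.88.52",
    "AllocationId": "eipalloc-090d9f9c7967d09a6",
    "PublicIpv4Pool": "amazon",
    "NetworkBorderGroup": "us-east-2",
    "Domain": "vpc"
}
EOF

# Extract the two fields needed by values-user-overrides.yaml.
PUBLIC_IP=$(jq -r '.PublicIp' /tmp/eip.json)
ALLOCATION_ID=$(jq -r '.AllocationId' /tmp/eip.json)
echo "PublicIp=${PUBLIC_IP} AllocationId=${ALLOCATION_ID}"
```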
  • If you want to update the metrics retention interval period during deployment, see the Configure Metrics Retention Interval Period topic.
  • Trigger the deployment by running the following installation script inside the Deployment Container.
    root [ ~ ]# cd /root/tcx-deployer/scripts/deployment/
    root [ ~/tcx-deployer/scripts/deployment ] # ./tcx_app_deployment.sh
    After the deployment script exits, manually check the VMware Telco Cloud Service Assurance deployment status by running one of the following commands from the deployment VM.
    root [ ~/tcx-deployer/scripts ]# kubectl get tcxproduct 
     OR
    root [ ~/tcx-deployer/scripts ]# kubectl get apps
    1. Get the current TcxProduct specification for VMware Telco Cloud Service Assurance by running the following command:
      root [~] # kubectl get tcxproduct tcsa -o yaml > /tmp/tcsa.yaml
    2. Get the index of istio-ingressgateway from the list of applications in the TcxProduct specification.
      root [~] # APP_NAME="istio-ingressgateway"
      root [~] # APP_INDEX=`expr \`yq ".spec.apps[].name" /tmp/tcsa.yaml | tr  '\n' ' ' | awk -v v="${APP_NAME}" '{v2i[v]=0; for (i=1;i<=NF;i++) v2i[$i]=i; print v2i[v]}'\` - 1`
      For example, echoing APP_INDEX after running the step 2 commands displays the index:
      root [~] # echo $APP_INDEX
      7
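The awk expression above maps each application name to its 1-based field position and subtracts 1 to obtain the 0-based index into .spec.apps. A standalone sketch of the same logic against a short, made-up application list:

```shell
# Demonstration of the index lookup on a sample application list
# (these app names are illustrative, not the full TCSA list).
APP_NAME="istio-ingressgateway"
NAMES="admin-operator elasticsearch kafka zookeeper istio-ingressgateway kafka-edge"

# Same awk logic as the procedure: find the 1-based field position of APP_NAME,
# then subtract 1 to get the 0-based index into .spec.apps.
APP_INDEX=$(expr $(echo "${NAMES}" | awk -v v="${APP_NAME}" '{v2i[v]=0; for (i=1;i<=NF;i++) v2i[$i]=i; print v2i[v]}') - 1)
echo "${APP_INDEX}"
```

Here istio-ingressgateway is the fifth name in the list, so the printed index is 4.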
    3. Get the Helm overrides for istio-ingressgateway from the VMware Telco Cloud Service Assurance specification by running the following command:
      root [~] # yq ".spec.apps[${APP_INDEX}].helmOverridesBase64" /tmp/tcsa.yaml | base64 -d > /tmp/istio_decoded.yaml
    4. Edit the decoded Helm overrides for istio-ingressgateway in /tmp/istio_decoded.yaml.
      gateway:
        service:
          loadBalancerIP: 3.21.88.52 #  <--- Remove this line
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-090d9f9c7967d09a6"
            service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-07b3b0a9ebd72da5b
            service.beta.kubernetes.io/aws-load-balancer-type: nlb  # <--- Add this line
          # Add the below yaml config
          ports:                    
            - port: 15021         
              targetPort: 15021   
              name: status-port   
              protocol: TCP
            - port: 80
              targetPort: 8080
              name: http2
              protocol: TCP
            - port: 443
              targetPort: 8443
              name: https
              protocol: TCP   
    5. Get the index of kafka-edge from the list of applications in the TcxProduct specification.
      root [~] # APP_NAME="kafka-edge"
      root [~] # APP_INDEX=`expr \`yq ".spec.apps[].name" /tmp/tcsa.yaml | tr  '\n' ' ' | awk -v v="${APP_NAME}" '{v2i[v]=0; for (i=1;i<=NF;i++) v2i[$i]=i; print v2i[v]}'\` - 1`
      For example, echoing APP_INDEX after running the step 5 commands displays the index:
      root [~] # echo $APP_INDEX
      10
    6. Get the Helm overrides for kafka-edge from the VMware Telco Cloud Service Assurance specification by running the following command:
      root [~] # yq ".spec.apps[${APP_INDEX}].helmOverridesBase64" /tmp/tcsa.yaml | base64 -d > /tmp/kafka_edge_decoded.yaml
    7. Edit the decoded Helm overrides for kafka-edge in /tmp/kafka_edge_decoded.yaml.
      kafka-strimzi:
        istio:
          loadBalancerIP: 13.58.64.236    # <--- Remove this line
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-072d298aa29fda628"
            service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-07b3b0a9ebd72da5b
            service.beta.kubernetes.io/aws-load-balancer-type: nlb # <--- Add this line
          # Add the below yaml config
          ports:
          - port: 32092
            targetPort: 32092
            name: edge-kafka-external-bootstrap
            protocol: TCP
          - port: 32095
            targetPort: 32095
            name: edge-kafka-external-0
            protocol: TCP
          - port: 32096
            targetPort: 32096
            name: edge-kafka-external-1
            protocol: TCP
          - port: 32097
            targetPort: 32097
            name: edge-kafka-external-2
            protocol: TCP     
    8. Encode the new istio-ingressgateway override. This generates a new base64 string. Store this value for later use.
      base64 /tmp/istio_decoded.yaml  -w 0

      You will update step 11 with this encoded output.

    9. Encode the new kafka-edge override. This generates a new base64 string. Store this value for later use.
      base64 /tmp/kafka_edge_decoded.yaml  -w 0

      You will update step 11 with this encoded output.
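Before pasting the strings into the specification, you can verify that each encoded value decodes back to the edited file. A self-contained sketch using a small sample file; in practice, run the same check against /tmp/istio_decoded.yaml and /tmp/kafka_edge_decoded.yaml:

```shell
# Sanity check: encode, decode, and compare against the original file.
# A small sample file stands in for the real decoded override here.
printf 'gateway:\n  service:\n    ports: []\n' > /tmp/override_sample.yaml

ENCODED=$(base64 -w 0 /tmp/override_sample.yaml)
echo "${ENCODED}" | base64 -d > /tmp/override_roundtrip.yaml

# diff exits 0 only when the decoded copy matches the original exactly.
diff /tmp/override_sample.yaml /tmp/override_roundtrip.yaml && echo "roundtrip OK"
```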

    10. Delete the existing applications and wait for them to be deleted.
      1. Istio Ingressgateway
        root [~] # kubectl delete app istio-ingressgateway
        Note: If the deletion is stuck for too long, run the following command:
        root [~] # kubectl delete service istio-ingressgateway -n istio-system
      2. Kafka Edge
        root [~] # kubectl delete app kafka-edge
        Note: If the deletion is stuck for too long, run the following command:
        root [~] # kubectl delete service/istio-edge-ingressgateway -n kafka-edge
    11. Update the TcxProduct specification with new configurations for istio-ingressgateway and kafka-edge.
      Note: Update the encoded string Helm overrides for istio-ingressgateway and kafka-edge as shown in the following codeblock.
      kubectl edit tcxproduct tcsa
       
      ...
       - deleteOnProductUpgrade: false
          helmNamespace: istio-system
          helmOverridesBase64: <...>     <--- Replace this value with the new encoded istio override config from Step 8
          imgpkgTag: 3.0.0-257
          isOperator: false
          kappControllerSyncPeriod: 100000h0m0s
          kappNamespace: istio-system
          name: istio-ingressgateway
          namespace: default
       
       
      ...
        - deleteOnProductUpgrade: false
          deploymentWaitTimeout: 10m0s
          helmChartName: kafka-strimzi-tcops
          helmNamespace: kafka-edge
          helmOverridesBase64: <...> <--- Replace this value with the new encoded kafka-edge override config from Step 9
    12. Wait for VMware Telco Cloud Service Assurance to reconcile successfully.
      root [~] # kubectl get tcxproduct tcsa -w
      The reconciliation status must be successful for all the apps.
      root [ ~/tcx-deployer/scripts/deployment ]# kubectl get tcxproduct
      NAME   STATUS            READY   MESSAGE                               AGE
      tcsa   updateCompleted   True    All App CRs reconciled successfully   30h
      Note: After successful deployment, you can launch the VMware Telco Cloud Service Assurance UI. For more information, see the Accessing VMware Telco Cloud Service Assurance UI topic.

      If any of the apps are not reconciled, the deployment fails. For more information, see VMware Telco Cloud Automation Installation Issue in the VMware Telco Cloud Service Assurance Troubleshooting Guide.

    To access the VMware Telco Cloud Service Assurance user interface through a DNS hostname, perform the procedure described in Step 2 of the private network deployment.

Note: If you deploy on vSAN without any other AWS S3-compatible storage available, you must deploy MinIO. For more information, see the Customizing Backup and Restore Configurations topic.