To start the data flow for VMware Telco Cloud Automation Pipeline reports, you must configure VMware Telco Cloud Service Assurance with Kafka brokers in Airflow, Edge Kafka, or an external Kafka.

For information about how to view the VMware Telco Cloud Automation reports, see View Pipeline Reports in the VMware Telco Cloud Service Assurance User Guide.

Prerequisites

Ensure that you configure Kafka brokers in Airflow, Edge Kafka, or an external Kafka for ingesting data into VMware Telco Cloud Service Assurance.
  • If you are using Kafka brokers in Airflow for ingesting the data into VMware Telco Cloud Service Assurance, then perform the following steps:
    1. For Kafka brokers, navigate to Airflow > Variables and configure the kafka_brokers variable.

      Note: Airflow used here is a third-party platform.
      Following is an example of kafka_brokers configuration in Airflow:
      [{
          "bootstrap_servers": ["kafka-edge:32092 "],
          "topic": "tca-stages",
          "security_protocol": "SASL_SSL",
          "additional_properties" : {
              "sasl_mechanism": "SCRAM-SHA-512",
              "sasl_plain_username": "kafka-scram-sha-512-client-credentials",
              "sasl_plain_password" : "RDFYek9vSTJCNGRYCg=="
          },
          "password": "Y2EucGFzc3dvcmQK",
          "ssl_cafile": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZMVENDQXhXZ0F3SUJBZ0lVVlcrT2lEYVlUM3QyeWh6T3lidHJEelFrcXB3d0RRWUpLb1pJaHZjTkFRRU4KQlFBd0xURVRNQkVHQTFVRUNnd0thVzh1YzNSeWFXMTZhVEVXTUJRR0ExVUVBd3dOWTJ4MWMzUmxjaTFqWVNCMgpNREFlRncweU1qQTVNVE14TWpJMU5UVmFGdzB5TXpBNU1UTXhNakkxTlRWYU1DMHhFekFSQmdOVkJBb01DbWx2CkxuTjBjbWx0ZW1reEZqQVVCZ05WQkFNTURXTnNkWE4wWlhJdFkyRWdkakF3Z2dJaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUNEd0F3Z2dJS0FvSUNBUUMyR2l6cTl0aEs2aHhqaS9PeDRlNHprVVdDK3hWQzVNTW5ybGlzSWUrWApyMXFVWllnNTJxYWxJc0NaeGlER1FiM3BMb3R4bTJGNEhOQm10NlJuVWQ3QkJwaHFkcys3MjFSanpRZ1BZbmYyCkVtU0doOHMrai8wWlMzMmQ0ZUtKKzMwMVlSa0ljSkNQa3dTOWN5SkNVMU95OWoyTG9FZ21PNnlSVmplVm9YaFEKYjJ3ZmpqY2d1bTFkYmNYanNSY1R1cHRxclk4MlJVbWVoL0pPbkh1d016TUFFUElBYU9WZEpvVytrRW5oTFF0Zwp3cEUvMHltNncxelhWZm5iSjlXZUNkcnUwaEdJZ2ZkaWZ1MUdJTnM4UDVCcmU2TGZhcjRvZmlYMXJLNENHMHZWCkxKTTJoS29DYm1zRGhpNXc3azgxZ1V4b3hicDBKMTZWQWs5UWtCdnZwVG5zZnBhTDh5ZWMwMTBRS1ZtS0VUQWgKMHFVZFhxRy9XaUE5bVZlaU1lY1owSlp1RTViY2hsZnVrdTFoT0dBRlVvdVVDaDJuV1lYcHg5azdueW01VlN5ZQpEb3puSU8rSkpLMWREVlZwbUVJaUJSbzRrZTBySVhBZ0ZMc3F1U2tLNStlWEZBUkJLVEdISUlEWWYzeFZKdVJ5CkVpWkQ4NGdrK0RpSC9kTThGWkFrTXpjRGdCaUdKTjA0QysralZNYXZqd0pNRzlVV2FhaGgzV3UraHBlY2g3VnMKT0VtaXRHRGxTWUszWHdMNktYSnUrRjMwZkNyYlBGc2xIaTFmTjR5OVRGWWo0UmFOSmREUG0rQmRtMDEvR0taUQpjakhNbWU0VzhTa3J4TlhxLzhwWTJhdVE4cEVYWmlWdlNPRHhDRmMwbEQzVisxUXo0WVVrZXhQanBuNTVUQ1RtCjd3SURBUUFCbzBVd1F6QWRCZ05WSFE0RUZnUVVsR0traFpZTTNUN0FyQmM1SGRLT2w1SWJlSDh3RWdZRFZSMFQKQVFIL0JBZ3dCZ0VCL3dJQkFEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0RRWUpLb1pJaHZjTkFRRU5CUUFEZ2dJQgpBSG9MM2I5V1JWZUc5dElPQzROcCtZanVWclBiaEtoaTY0UU8vVEw0OVdJVjNRYkxsV0tlRkhacUtQNkJKV29qCi9EYkFtUVNaeWxmbkNzejZubll0RDNJcnFjN2w5WENGNjAwZEZaRCtGNkp1dnpCdlFxR3VJYlAza0czOHlDdU4KQWdyWHBiaXlRQXhRMzhXSnRFSlc4UHFzVzZtdnFSSEY2YUt4VXg4eDF4dWZKSHY4aWcxcG9PT1VqNWZ1Y044QwpQUWY1NGEzWm5YajA2K0QxM1Q2dnh6dzRlMVQyVllYWEtCSEFDL0hIZ095akZpbUhJeDg3eU9Dczduc2Q3c0NwCnU4RE9aT21URWRlUEkxcW95R2w5VmNBdjdVdERjdHI2dmNmdHFHZ2dMNUtEaTVFd09odDdjL1VuNEd0OFQwQTUKNDRWejJvdnZvcDFkU1JuNWplVHFocmxxUTRCVENrb0tSUFIxeEduMmFid1oxOWJFMEVzOXhjMkhWVEk4cDBDYwpJT085SW1mcVptVlBqQjc3cmZSZVpNWmRtTmpqOVVmeHBJMGZvcXZKN21kTTFYNzR1S2tCbTVkbG1sdUFRRE5rCk9BUFdhMnVvVXJEOGhkSUgrT0MrTzFueE02SW9Odm85aU5sdDFUaGl1RC9Mb2EvMUw0YWljQk9pK2dKbjk0UWEKclpQVlo5d2RxNjYvT0VoNlpyZmtadU1iMmVaRUZmZDJzN1dPYW54V0s1YXlNTTZKbmJLUUhjQ25iUWJZUG1lVQpNenF5c090SWxqeWQ1VnVQbVlzLyt1RUd5S1V6U2RIN1NQYTloeXFGWWJKUmJTY09yVFdhNTlTazlCZ1dwREtqClJ0ZkdHdzRieUZsd2FNbW5kYk11L25hU0M2V005eVVsSWJFdmJzWk9hTDUxCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
      }]
      Obtain the passwords and certificate by running the following commands on the VMware Telco Cloud Service Assurance deployment VM:
      ssl_cafile:
      export CLUSTER_NAME=edge
      EDGENS=kafka-edge
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 --decode > ca.password
      
      sasl_plain_password:
      export USER_NAME=kafka-scram-sha-512-client-credentials
      export SCRAM_PASSWORD_FILE_PATH=user-scram.password
      kubectl get secret -n $EDGENS $USER_NAME -o jsonpath='{.data.password}' > $SCRAM_PASSWORD_FILE_PATH
      export SCRAM_PASSWORD=`cat $SCRAM_PASSWORD_FILE_PATH`
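      The variable can be set from the Airflow UI as described above. As an alternative, the following is a minimal sketch that sets the same variable from the Airflow CLI, assuming shell access to an Airflow component (for example, an Airflow worker pod) and that the JSON example above is saved in a file named kafka_brokers.json (both assumptions, not part of this procedure):
      # Assumption: kafka_brokers.json contains the JSON array shown in the example above.
      airflow variables set kafka_brokers "$(cat kafka_brokers.json)"
      # Confirm the value that Airflow stored.
      airflow variables get kafka_brokers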
    2. Run the following command to verify that the Airflow StatefulSets are ready:
      kubectl get statefulset -n airflow
      NAME                 READY   AGE
      airflow-postgresql   1/1     78d
      airflow-redis        1/1     78d
      airflow-worker       3/3     78d
  • If you are using Edge Kafka for ingesting data into VMware Telco Cloud Service Assurance, then perform the following steps from the VMware Telco Cloud Service Assurance Control Plane.
    1. Log in to the VMware Telco Cloud Service Assurance Control Plane and change to a temporary working directory.
    2. Create a shell script with the following commands and execute it:
      export CLUSTER_NAME=edge
      EDGENS=kafka-edge
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 --decode > ca.password
      export CERT_FILE_PATH=ca.crt
      export CERT_PASSWORD_FILE_PATH=ca.password
      export KEYSTORE_LOCATION=cacerts
      export PASSWORD=`cat $CERT_PASSWORD_FILE_PATH`
      export CA_CERT_ALIAS=strimzi-kafka-cert
      keytool -noprompt -importcert -alias $CA_CERT_ALIAS -file $CERT_FILE_PATH -keystore $KEYSTORE_LOCATION -keypass $PASSWORD -storepass $PASSWORD
      export USER_NAME=kafka-scram-sha-512-client-credentials
      export SCRAM_PASSWORD_FILE_PATH=user-scram.password
      kubectl get secret -n $EDGENS $USER_NAME -o jsonpath='{.data.password}' | base64 --decode > $SCRAM_PASSWORD_FILE_PATH
      export SCRAM_PASSWORD=`cat $SCRAM_PASSWORD_FILE_PATH`
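      Optionally, you can confirm that the Edge Kafka certificate was imported into the generated truststore. The following verification sketch reuses the variables exported by the script:
      # The strimzi-kafka-cert alias should be listed as a trustedCertEntry.
      keytool -list -keystore $KEYSTORE_LOCATION -storepass $PASSWORD | grep -i $CA_CERT_ALIAS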
      
    3. Executing the script in the previous step generates the following files:
      ca.crt: The TCSA Edge Kafka certificate.
      ca.password: The TCSA Edge Kafka certificate password.
      cacerts: The keystore/truststore file containing the Edge Kafka certificate.
      user-scram.password: The SASL password.
      
    4. Make note of, or send, the following information so that the Kafka Producer can be configured to send messages to the TCSA Edge Kafka:
      Hostname: kafka-edge
      TCP/IP Port: 32092
      Topic: tca-stages
      Security Protocol: SASL_SSL
      SASL Mechanism: SCRAM-SHA-512
      SASL Plain Username: kafka-scram-sha-512-client-credentials
      SASL Plain Password: <contents of file user-scram.password generated in step 2>
      SSL Certificate: <contents of file ca.crt generated in step 2>
      SSL Certificate Password: <contents of file ca.password generated in step 2>
      
      Note: Also send the cacerts file generated in step 2 to all Kafka Producers that will generate messages to the VMware Telco Cloud Service Assurance Edge Kafka.
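
      If a Kafka Producer is driven by a standard Kafka client properties file, the parameters above map to the configuration keys shown below. The following is a sketch of such a file's contents; the file name client.properties and the placeholder values are illustrative only:
      security.protocol=SASL_SSL
      sasl.mechanism=SCRAM-SHA-512
      sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="kafka-scram-sha-512-client-credentials" \
        password="<contents of user-scram.password>";
      ssl.truststore.location=<TRUSTSTORE_PATH>
      ssl.truststore.password=<contents of ca.password>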
    5. Log out from the VMware Telco Cloud Service Assurance Control Plane.
  • Perform the following steps for each Kafka Producer that will generate messages to the VMware Telco Cloud Service Assurance Edge Kafka:
    1. Log in to the Kafka Producer machine through a CLI (for example, SSH).
    2. Add the following entry to the machine's /etc/hosts file, where <TCSA-CLUSTER-VIP> is the external IP address of the VMware Telco Cloud Service Assurance cluster:
      <TCSA-CLUSTER-VIP>   kafka-edge
    3. Copy the cacerts file generated in step 2 into the directory where the truststores are located, or create a directory for it if needed, and make note of the path. In this procedure, this path is called <TRUSTSTORE_PATH>.
    4. Create a temporary file with a sample message to be sent to the VMware Telco Cloud Service Assurance Edge Kafka and make note of its path. In this procedure, this path is called <SAMPLE_DATA_PATH>.
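      For example, a minimal sample file can be created as follows; the path and message body here are illustrative only:
      echo '{"message": "tcsa-edge-kafka-connectivity-test"}' > /tmp/sample-message.json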
    5. Verify that messages can be sent from the Kafka Producer machine to the VMware Telco Cloud Service Assurance Edge Kafka by executing the following command:
      <KAFKA_PATH>/bin/kafka-console-producer.sh \
      --bootstrap-server <EDGE_KAFKA_HOST>:<EDGE_KAFKA_PORT> \
      --topic <EDGE_KAFKA_TOPIC> \
      --producer-property ssl.truststore.location=<TRUSTSTORE_PATH> \
      --producer-property ssl.truststore.password=<CA_PASSWORD> \
      --producer-property sasl.mechanism=<SASL_MECHANISM> \
      --producer-property security.protocol=<SECURITY_PROTOCOL> \
      --producer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="<SASL_USERNAME" \
      password="<SASL_PASSWORD>";' < <SAMPLE_DATA_PATH>

      where

      <KAFKA_PATH> is the path where Kafka has been installed in the Kafka Producer machine.

      <EDGE_KAFKA_HOST> is the Hostname shown in step 4.

      <EDGE_KAFKA_PORT> is the TCP/IP Port shown in step 4.

      <EDGE_KAFKA_TOPIC> is the Topic shown in step 4.

      <TRUSTSTORE_PATH> is the path chosen in step 3.

      <CA_PASSWORD> is the contents of the “ca.password” file generated after executing step 2.

      <SASL_MECHANISM> is the SASL Mechanism shown in step 4.

      <SECURITY_PROTOCOL> is the Security Protocol shown in step 4.

      <SASL_USERNAME> is the SASL Plain Username shown in step 4.

      <SASL_PASSWORD> is the SASL Plain Password shown in step 4.

      For example:
      /opt/kafka/bin/kafka-console-producer.sh \
      --bootstrap-server kafka-edge:32092 \
      --topic tca-stages \
      --producer-property ssl.truststore.location=$JAVA_HOME/lib/security/cacerts \
      --producer-property ssl.truststore.password=mypwd \
      --producer-property sasl.mechanism=SCRAM-SHA-512 \
      --producer-property security.protocol=SASL_SSL \
      --producer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="kafka-scram-sha-512-client-credentials" \
      password="mysaslpwd";' < ~/mysamplefile.txt
    6. Verify that the messages were sent to the VMware Telco Cloud Service Assurance Edge Kafka by executing the following command:
      <KAFKA_PATH>/bin/kafka-console-consumer.sh \
      --bootstrap-server <EDGE_KAFKA_HOST>:<EDGE_KAFKA_PORT> \
      --topic <EDGE_KAFKA_TOPIC> \
      --from-beginning \
      --consumer-property ssl.truststore.location=<TRUSTSTORE_PATH> \
      --consumer-property ssl.truststore.password=<CA_PASSWORD> \
      --consumer-property sasl.mechanism=<SASL_MECHANISM> \
      --consumer-property security.protocol=<SECURITY_PROTOCOL> \
      --consumer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="<SASL_USERNAME" \
      password="<SASL_PASSWORD>";'
      
      Note: Use the same variable values as described in step 5.
      For example:
      /opt/kafka/bin/kafka-console-consumer.sh \
      --bootstrap-server kafka-edge:32092 \
      --topic tca-stages \
      --from-beginning \
      --consumer-property ssl.truststore.location=$JAVA_HOME/lib/security/cacerts \
      --consumer-property ssl.truststore.password=mypwd \
      --consumer-property sasl.mechanism=SCRAM-SHA-512 \
      --consumer-property security.protocol=SASL_SSL  \
      --consumer-property sasl.jaas.config='org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="kafka-scram-sha-512-client-credentials" \
      password="mysaslpwd";'
      
    7. Finally, configure the desired Kafka Producer. If it is a custom application, use the values from step 4 of the previous bullet to establish connectivity. If it is a Kafka utility, you can use a configuration file populated with the values from step 4 of the previous bullet, according to the utility instructions, as in the sketch below.
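      The following sketch shows the configuration-file approach with the standard kafka-console-producer.sh utility, which accepts a properties file through its --producer.config option. It assumes the client.properties file sketched after step 4 of the previous bullet:
      <KAFKA_PATH>/bin/kafka-console-producer.sh \
      --bootstrap-server kafka-edge:32092 \
      --topic tca-stages \
      --producer.config client.properties < <SAMPLE_DATA_PATH>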
  • If you are using Edge Kafka for ingesting data into VMware Telco Cloud Service Assurance, then perform the following steps from the deployment host.
    1. Install Kafka.
    2. Run the following commands:
      export KUBECONFIG=<KUBECONFIG-file-location>
      export CLUSTER_NAME=edge
      EDGENS=kafka-edge
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
      kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 --decode > ca.password
      export CERT_FILE_PATH=ca.crt
      export CERT_PASSWORD_FILE_PATH=ca.password
      export KEYSTORE_LOCATION=cacerts
      export PASSWORD=`cat $CERT_PASSWORD_FILE_PATH`
      export CA_CERT_ALIAS=strimzi-kafka-cert
      keytool -noprompt -importcert -alias $CA_CERT_ALIAS -file $CERT_FILE_PATH -keystore $KEYSTORE_LOCATION -keypass $PASSWORD -storepass $PASSWORD
      export USER_NAME=kafka-scram-sha-512-client-credentials
      export SCRAM_PASSWORD_FILE_PATH=user-scram.password
      kubectl get secret -n $EDGENS $USER_NAME -o jsonpath='{.data.password}' | base64 --decode > $SCRAM_PASSWORD_FILE_PATH
      export SCRAM_PASSWORD=`cat $SCRAM_PASSWORD_FILE_PATH`
      <<KAFKALOCATION>>/bin/kafka-console-producer.sh \
      --bootstrap-server kafka-edge:32092 \
      --producer-property security.protocol=SASL_SSL \
      --producer-property sasl.mechanism=SCRAM-SHA-512 \
      --producer-property ssl.truststore.password=$PASSWORD \
      --producer-property ssl.truststore.location=$PWD/cacerts \
      --producer-property sasl.jaas.config="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$USER_NAME\" password=\"$SCRAM_PASSWORD\";" \
      --topic metrics < ${5Gdatadump}
  • If you are using external Kafka to ingest data into VMware Telco Cloud Service Assurance, then perform the following steps:
    1. Install Kafka on any RHEL host.
    2. Start Zookeeper.
      ${KafkaInstallLocation}/bin/zookeeper-server-start.sh -daemon ${KafkaInstallLocation}/config/zookeeper.properties
    3. Start Kafka Server.
      ${KafkaInstallLocation}/bin/kafka-server-start.sh -daemon ${KafkaInstallLocation}/config/server.properties
    4. Start Kafka Producer.
      ${KafkaInstallLocation}/bin/kafka-console-producer.sh --bootstrap-server ${kafkahost}:${kafkaport} --topic ${KafkaTopicname} < ${5Gdatadump}
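      If the topic does not exist and automatic topic creation is disabled on the broker, create it before producing. The following sketch uses illustrative single-broker settings for partitions and replication:
      ${KafkaInstallLocation}/bin/kafka-topics.sh --create \
      --bootstrap-server ${kafkahost}:${kafkaport} \
      --topic ${KafkaTopicname} --partitions 1 --replication-factor 1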

Procedure

  1. Use the default tca-pipeline Kafka Mapper that is configured for the VMware Telco Cloud Automation Pipeline reports.

    The following screenshot shows the default tca-pipeline Kafka Mapper for VMware Telco Cloud Automation Pipeline Reports.

  2. Configure the Kafka Collector. For more information, see the Configuring the Kafka Collector topic.