To start the data flow for 5G RAN reports, you must configure the VMware Telco Cloud Service Assurance Edge Kafka or an external Kafka instance to ingest data into VMware Telco Cloud Service Assurance using the Kafka Collector.
For information about how to view the 5G RAN reports, see View 5G RAN Reports in the VMware Telco Cloud Service Assurance User Guide.
Procedure
- Start the data ingestion.
If you are using Edge Kafka to ingest data into VMware Telco Cloud Service Assurance, perform the following steps from the deployment host.
- Install Kafka.
- export KUBECONFIG=<KUBECONFIG-file-location>
EDGENS=kafka-edge
CLUSTER_NAME=<edge-Kafka-cluster-name>   # Edge Kafka cluster name referenced by the secret names below; not set in the original snippet
kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
kubectl get secret -n $EDGENS $CLUSTER_NAME-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 --decode > ca.password
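# Optional check (not part of the documented procedure): confirm the extracted CA certificate is valid PEM and not expired.
openssl x509 -in ca.crt -noout -subject -enddate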
export CERT_FILE_PATH=ca.crt
export CERT_PASSWORD_FILE_PATH=ca.password
export KEYSTORE_LOCATION=cacerts
export PASSWORD=`cat $CERT_PASSWORD_FILE_PATH`
export CA_CERT_ALIAS=strimzi-kafka-cert
keytool -noprompt -importcert -alias $CA_CERT_ALIAS -file $CERT_FILE_PATH -keystore $KEYSTORE_LOCATION -keypass $PASSWORD -storepass $PASSWORD
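# Optional check (not part of the documented procedure): confirm the CA certificate was added to the truststore.
keytool -list -alias $CA_CERT_ALIAS -keystore $KEYSTORE_LOCATION -storepass $PASSWORD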
export USER_NAME=kafka-scram-sha-512-client-credentials
export SCRAM_PASSWORD_FILE_PATH=user-scram.password
kubectl get secret -n $EDGENS $USER_NAME -o jsonpath='{.data.password}' | base64 --decode > $SCRAM_PASSWORD_FILE_PATH
export SCRAM_PASSWORD=`cat $SCRAM_PASSWORD_FILE_PATH`
<<KAFKALOCATION>>/bin/kafka-console-producer.sh --broker-list kafka-edge:32092 --producer-property security.protocol=SASL_SSL --producer-property sasl.mechanism=SCRAM-SHA-512 --producer-property ssl.truststore.password=$PASSWORD --producer-property ssl.truststore.location=$PWD/cacerts --producer-property sasl.jaas.config="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$USER_NAME\" password=\"$SCRAM_PASSWORD\";" --topic metrics < ${5Gdatadump}
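To confirm that the records reached the metrics topic on Edge Kafka, you can optionally run a console consumer with the same SASL_SSL client settings. This verification step is not part of the documented procedure; it reuses the variables exported above.
<<KAFKALOCATION>>/bin/kafka-console-consumer.sh --bootstrap-server kafka-edge:32092 --consumer-property security.protocol=SASL_SSL --consumer-property sasl.mechanism=SCRAM-SHA-512 --consumer-property ssl.truststore.password=$PASSWORD --consumer-property ssl.truststore.location=$PWD/cacerts --consumer-property sasl.jaas.config="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$USER_NAME\" password=\"$SCRAM_PASSWORD\";" --topic metrics --from-beginning --max-messages 5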
If you are using an external Kafka instance to ingest data into VMware Telco Cloud Service Assurance, perform the following steps:
- Install Kafka on any RHEL host.
- Start ZooKeeper.
${KafkaInstallLocation}/bin/zookeeper-server-start.sh -daemon ${KafkaInstallLocation}/config/zookeeper.properties
- Start Kafka Server.
${KafkaInstallLocation}/bin/kafka-server-start.sh -daemon ${KafkaInstallLocation}/config/server.properties
- Start Kafka Producer.
${KafkaInstallLocation}/bin/kafka-console-producer.sh --bootstrap-server ${kafkahost}:${kafkaport} --topic ${KafkaTopicname} < ${5Gdatadump}
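If the target topic does not already exist and topic auto-creation is disabled on the broker, create it before running the producer, and optionally verify the ingested records with a console consumer. These commands are illustrative additions rather than part of the documented procedure.
${KafkaInstallLocation}/bin/kafka-topics.sh --create --bootstrap-server ${kafkahost}:${kafkaport} --topic ${KafkaTopicname} --partitions 1 --replication-factor 1
${KafkaInstallLocation}/bin/kafka-console-consumer.sh --bootstrap-server ${kafkahost}:${kafkaport} --topic ${KafkaTopicname} --from-beginning --max-messages 5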
- Use the default mavenir-metric Kafka Mapper for the 5G RAN reports.
- Configure the Kafka Collector. For information about configuring the Kafka Collector, see the Configuring the Kafka Collector topic.