To upgrade DCF collectors after applying a patch:

Procedure

  1. Upgrade VeloCloud discovery and monitoring collector instances in DCF.
    1. Update the VeloCloud collector instance, accepting the default values when prompted, by running the command:

      <DCF_INSTALL_DIRECTORY>/bin/manage-modules.sh update velocloud-sdwan-collect <instance-id>

      Refer to the Prerequisite section of the Patch Upgrade Procedure for version 10.0.0.1 to find all the VeloCloud instance IDs.
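
      For example, assuming a DCF installation at /opt/DCF and a VeloCloud collector instance ID of 1 (both values are illustrative; substitute your own), the command is:

      /opt/DCF/bin/manage-modules.sh update velocloud-sdwan-collect 1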

    2. After the update completes, stop the VeloCloud collector instance using the command:

      <DCF_INSTALL_DIRECTORY>/bin/manage-modules.sh service stop collector-manager <instance-id>

      To find all the VeloCloud instance IDs, refer to the Prerequisite section of the Patch Upgrade Procedure for version 10.0.0.1.
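
      For example, using the same illustrative values (/opt/DCF and instance ID 1):

      /opt/DCF/bin/manage-modules.sh service stop collector-manager 1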

  2. Delete and recreate the VeloCloud discovery and monitoring topics following these steps:
    1. Ensure that the <KAFKA_HOME>/config/server.properties file contains "delete.topic.enable=true".
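
      A quick way to confirm the setting, assuming an illustrative Kafka installation path of /opt/kafka:

      grep delete.topic.enable /opt/kafka/config/server.properties

      The command prints delete.topic.enable=true when the property is set.
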
    2. Execute the following commands:

      export KAFKA_OPTS="-Djava.security.auth.login.config=<KAFKA_HOME>/config/zookeeper_jaas.conf"

      <KAFKA_HOME>/bin/kafka-topics.sh --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --delete --topic <discovery topic name>
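
      For example, with KAFKA_OPTS exported as above, assuming an illustrative three-node cluster at 192.168.1.1, 192.168.1.2, and 192.168.1.3, a Kafka installation at /opt/kafka, and a discovery topic named velocloud-discovery-topic:

      /opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --delete --topic velocloud-discovery-topic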

    3. Wait for one minute.
    4. Run the command:

      <KAFKA_HOME>/bin/kafka-topics.sh --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --delete --topic <monitoring topic name>

    5. Wait for one minute.
    6. Execute the following command to verify that the topics deleted in steps 2 and 4 were removed. If the deletion was successful, the deleted topics do not appear in the output.

      <KAFKA_HOME>/bin/kafka-topics.sh --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --list
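
      To check only the VeloCloud topics, the listing can be filtered; for example, with the same illustrative hosts and topic names velocloud-discovery-topic and velocloud-monitoring-topic:

      /opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --list | grep -E 'velocloud-(discovery|monitoring)-topic'

      No output indicates that both topics were deleted.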

    7. Execute the command:

      export KAFKA_OPTS="-Djava.security.auth.login.config=<KAFKA_HOME>/config/kafka_server_jaas.conf"

    8. Run the command:

      <KAFKA_HOME>/bin/kafka-topics.sh --create --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --replication-factor 3 --partitions 1 --topic <Discovery Topic Name>

      • The value for --replication-factor must be equal to the number of nodes in the cluster.
      • At present, only 1 partition is supported per topic.
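
      For example, on the same illustrative three-node cluster, creating a discovery topic named velocloud-discovery-topic:

      /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --replication-factor 3 --partitions 1 --topic velocloud-discovery-topic
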
    9. Wait for 30 seconds.
    10. Execute the command:

      <KAFKA_HOME>/bin/kafka-topics.sh --create --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --replication-factor 3 --partitions 1 --topic <Monitoring Topic Name>

      • The value for --replication-factor must be equal to the number of nodes in the cluster.
      • At present, only 1 partition is supported per topic.
    11. Wait for 30 seconds.