1. Event Processing Manager Configuration: To use a SMARTS Event Listener instance, follow the Event Processing Manager configuration steps below.

    Configuration file example:
    File: <DCF-Install>/Event-Processing/Event-Processing-Manager/<INSTANCE>(smarts-notifs-events)/conf/processing.xml
    <processing-manager xmlns="http://www.watch4net.com/Events/DefaultProcessingManager" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.watch4net.com/Events/DefaultProcessingManager DefaultProcessingManager.xsd ">
            <processing-element name="KAFKA" enabled="true" type="Kafka-Event-Adapter" config="Kafka-Event-Adapter/smarts-notifs-events/conf/kafka-event-adapter.xml"/>
            <processing-element name="Smarts" enabled="true" config="Smarts-Listener/smarts-notifs-events/conf/smarts-listener.xml" data="KAFKA"/>
            <processing-element name="EVENT-SPY" enabled="true" type="EventSpy" config="Event-Processing-Utils/smarts-notifs-events/conf"/>
    </processing-manager>
    

    The KAFKA processing element publishes the event data from the Smarts processing element to Kafka.

    The EVENT-SPY processing element helps in debugging the event output from the logs. Event debugging can be enabled by altering the data attribute in the above configuration as follows:
    <processing-element name="Smarts" enabled="true" config="Smarts-Listener/smarts-notifs-events/conf/smarts-listener.xml" data="KAFKA EVENT-SPY"/>

    In the above configuration, the SMARTS Event Listener forwards events to both a Kafka broker and EVENT-SPY. Because it exclusively listens to events coming from SMARTS domain managers such as SAM, it accepts no input streams.

  2. SMARTS Event Listener Configuration: The following example is the default SMARTS Event Listener configuration:

    File: <DCF-Install>/Event-Processing/Smarts-Listener/<INSTANCE>(smarts-notifs-events)/conf/smarts-listener.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration xmlns="http://www.watch4net.com/Events/SmartsEventListener"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                               xsi:schemaLocation="http://www.watch4net.com/Events/SmartsEventListener smarts-listener.xsd ">
            <resync-period>24h</resync-period>
            <connection-check-period>30s</connection-check-period>
            <connection-timeout>60s</connection-timeout>
            <idle-timeout>240h</idle-timeout>
            <source id="INCHARGE-SA-PRES">
                    <primary id="primary">
                            <broker-hostname>localhost</broker-hostname>
                            <broker-port>4999</broker-port>
                            <domain-name>INCHARGE-SA-PRES</domain-name>
                            <username>admin</username>
                            <password>{1BD4D26A81F980A80617601D0EAC255B85C79E7B064E2672F0CBF9EE8BC251A6D2F68C2751691B568BF7D00DB41E7C61}</password>
                            <notification-list>ICS_NL-ALL_NOTIFICATIONS</notification-list>
                    </primary>
            </source>
    </configuration>
    

    resync-period: Period at which the SMARTS Event Listener will initiate a resync with the SMARTS SAM. A resync operation synchronizes data between the SMARTS Event Listener and the notifications that are currently displayed in the SMARTS SAM console. Setting this value to 0 will disable automatic resynchronization.

    connection-check-period: Period at which the SMARTS Event Listener will check to make sure that its connection with the SMARTS SAM is still valid.

    connection-timeout: Timeout value when trying to establish a connection with the SMARTS SAM.

    idle-timeout: If no new notification is received from the SMARTS SAM within that amount of time, the SMARTS Event Listener disconnects and then reconnects to the SMARTS SAM. This check is performed at the same time as the connection check; therefore, the value of this parameter should always be equal to or greater than the connection-check-period.

    source: A source represents a SMARTS SAM instance to monitor. Each source can be composed of a primary source and multiple failover sources, if needed. This is useful for specifying backup SMARTS SAMs in case the primary is down.

    1. primary/failover: Determines if the source is the primary source or just a failover source.
    2. broker-hostname: The broker hostname of the SMARTS SAM if connecting to the broker as your entry point.
    3. broker-port: The port on which the SMARTS SAM’s broker is accepting connections.
    4. broker-username (optional): The username used to establish the connection with the broker. If no authentication is required, omit this element.
    5. broker-password (optional): The password used to establish the connection with the broker. The password can be in encrypted form, generated by the crypt-password script. If no authentication is required, omit this element.
    6. hostname: The hostname of the SMARTS SAM if you need to directly connect to the manager.
    7. port: The port on which the SMARTS SAM is accepting connections.
    8. domain-name: The name of the domain from which notifications will be fetched.
    9. username: The username used to connect to the domain manager.
    10. password: The password used to connect to the domain manager. This password can be in the encrypted form, generated by the crypt-password script.
    11. notification-list: This tag can be repeated as many times as necessary (one line per monitored notification list). It must be present at least once for primary sources, but it is optional for failover sources. If it is omitted for a failover source, the notification lists of the corresponding primary source are used instead.

    Multiple Sources: The SMARTS Event Listener can be configured to listen to many sources simultaneously. This is done by adding more source tags in the configuration file. Each source must have its primary source and can have one or more failover sources.
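    The layout above can be sketched as follows. This is a minimal illustration, not a tested configuration: the hostnames, domain names, and credentials are placeholders, and the failover tag's exact shape is inferred from the primary/failover description above, so it should be validated against smarts-listener.xsd. The failover entry omits notification-list so that it inherits the lists of its primary.

    <source id="INCHARGE-SA-1">
            <primary id="primary">
                    <broker-hostname>sam-host-1</broker-hostname>
                    <broker-port>4999</broker-port>
                    <domain-name>INCHARGE-SA-1</domain-name>
                    <username>admin</username>
                    <password>changeme</password>
                    <notification-list>ICS_NL-ALL_NOTIFICATIONS</notification-list>
            </primary>
            <failover id="backup">
                    <broker-hostname>sam-host-2</broker-hostname>
                    <broker-port>4999</broker-port>
                    <domain-name>INCHARGE-SA-1</domain-name>
                    <username>admin</username>
                    <password>changeme</password>
            </failover>
    </source>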

  3. Kafka Event Adapter Configuration: The Kafka Event Adapter is a component that is able to both read events from Kafka and write events to Kafka.

    The following Kafka server configuration is used to publish events from SAM:

    File: <DCF-Install>/Event-Processing/Kafka-Event-Adapter/<INSTANCE>(smarts-notifs-events)/conf/kafka-event-adapter.xml
    <?xml version="1.0"?>
    <kafka-event-adapter-config xmlns="http://www.watch4net.com/KafkaEventAdapter" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.watch4net.com/KafkaEventAdapter ../kafka-event-adapter.xsd ">
      <cluster>
        <server host="localhost" port="9092"/>
        <additional-parameters key="security.protocol">SSL</additional-parameters>
            <additional-parameters key="ssl.truststore.location">../../../Tools/Webservice-Gateway/Default/conf/truststore</additional-parameters>
        <additional-parameters key="ssl.truststore.password">{F871B10293EEB1C941E2EA5466F817546662FD1314591713B73E73A7E39663A9960255C4B844F536409BD410490E007F}</additional-parameters>
        <additional-parameters key="ssl.keystore.location">../../../Tools/Webservice-Gateway/Default/conf/clientkeystore</additional-parameters>
        <additional-parameters key="ssl.keystore.password">{85DF870D632462AF411DB8164B9455741BCCCB1CE493C475B7C121E5CEFA2189A2CDE6CE65466BE4C2E99175CAEFA6F1}</additional-parameters>
        <additional-parameters key="ssl.key.password">{DB17AC06BD3C6420FAA350241DFC43BE4504E0173A8BC2FA0C7FC9D79892374392195CD3EB15BD3D2D914FD470BF7075}</additional-parameters>
      </cluster>
    
        <event-writer topic-name="default-topic" stream="data" isJson="true">
            <kafka-producer-settings>
                <producer compression-type="none" />
            </kafka-producer-settings>
        </event-writer>
    
    </kafka-event-adapter-config>
    

Kafka Event Adapter parameters

<cluster>: This tag must occur at least once.

<server>: This tag must occur at least once.
* host: The address of one of the Kafka bootstrap servers.
* port: The port on which the bootstrap server is listening.

<producer>: This tag is optional, but may be used for templating Kafka producers. Refer to the schema for more information.

<consumer>: This tag is optional, but may be used for templating Kafka consumers. Refer to the schema for more information.

<connection>: This tag is optional, but may be used for templating Kafka connections. Refer to the schema for more information.

<additional-parameters>: This tag is optional, but may be used for configuring Kafka options outside the purview of the previous tags. These options include, but are not limited to, SSL connection parameters.

Configuring Event Writer parameters

<event-writer>: This tag is used to define a component that writes to Kafka.
* topic-name: The Kafka topic to write to.
* stream: The event stream to consume from.
* isJson: When set to true, the output is written in JSON format.

<kafka-producer-settings>: This tag may be used to customize how the writer writes to Kafka. Refer to the schema for more information.

<connector-component-behavior>: This tag may be used to control how often data is flushed to Kafka.

<key-encoder>: This tag may be used to customize how the Kafka key is encoded.

<value-encoder>: This tag may be used to customize how the Kafka value is encoded.
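For example, a writer publishing JSON with compression enabled could look like the following sketch. The topic name is a placeholder, and compression-type takes the standard Kafka producer values such as none or gzip:

    <event-writer topic-name="smarts-events" stream="data" isJson="true">
        <kafka-producer-settings>
            <producer compression-type="gzip" />
        </kafka-producer-settings>
    </event-writer>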

Configuring Event Reader parameters

<event-reader>: This tag is used to define a component that consumes from Kafka.
* topic-name: This attribute is used to specify the Kafka topic to read from.
* stream: This attribute is used to specify the stream to push the events to once they have been read.

<kafka-consumer-task>: This tag may be used to customize the way events are read from Kafka.
* poll-timeout-ms: The poll timeout for Kafka.
* error-timeout-ms: The error timeout for connecting to Kafka.
* error-timout-double-n: The number of errors after which the timeout is doubled.
* error-timout-max-ms: The maximum error timeout.

<consumer>: This tag may be used to customize how this element consumes from Kafka. Note that the group-id attribute must be set here if you plan on joining a Kafka consumer group.

<connection>: This tag may be used to customize the Kafka connection for this element.

<additional-parameters>: This tag may be used to customize the additional parameters for this element.

<initial-topic-partition-offset-seeker>: This element is used to control how seeking is done in a Kafka topic.
* existing-reset-policy: The policy used when seeking on an existing partition whose requested offset no longer exists. Must be one of earliest or latest.
* new-reset-policy: The policy used when seeking on a new partition. Must be one of earliest or latest.

<topic-partition-offset-commit>: This element must be used to control how offsets are committed.
* location: Where to commit the topic partition offsets. Must be one of disk, kafka, or none.
* commit-interval: Offsets are committed no more frequently than this interval.

<topic-partition-listener>: Required if a Kafka consumer group is not set. Used to refresh the list of topic partitions periodically.
* new-reset-policy: Where to seek in new partitions (at the earliest or latest offset in the new partition; usually the earliest).
* existing-reset-policy: Where to seek in existing partitions when the current offset is out of range (usually the earliest).
* refresh-interval: How often to check for new partitions.
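Putting these tags together, a minimal event-reader sketch might look as follows. The nesting and attribute placement are inferred from the parameter descriptions above and should be checked against kafka-event-adapter.xsd; the topic and group names are placeholders:

    <event-reader topic-name="smarts-events" stream="data">
        <kafka-consumer-task poll-timeout-ms="1000" error-timeout-ms="5000">
            <consumer group-id="smarts-event-readers"/>
            <topic-partition-offset-commit location="kafka" commit-interval="30s"/>
        </kafka-consumer-task>
    </event-reader>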

Data Processing

If you want to alter default event data values before pushing them to Kafka, you can do so by using the event processors below. The usage of each of these processors is explained in its respective section:
  1. Event Log processor
  2. Event Property Tagger