Three-node Kafka cluster deployment. Execute all of the steps below on each node to install Kafka in cluster mode.

Procedure

  1. Edit the server.properties file under the <KAFKA_HOME>/config directory and set the following properties.
    broker.id=1
    delete.topic.enable=true
    auto.create.topics.enable=true
    default.replication.factor=3 
    security.inter.broker.protocol=SASL_PLAINTEXT
    sasl.mechanism.inter.broker.protocol=PLAIN
    sasl.enabled.mechanisms=PLAIN
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    allow.everyone.if.no.acl.found=true
    listeners=SASL_PLAINTEXT://<KAFKA_CLUSTER_HOST_IPADDRESS>:9092
    advertised.listeners=SASL_PLAINTEXT://<KAFKA_CLUSTER_HOST_IPADDRESS>:9092
    zookeeper.connect=<KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181
     
    Note: The broker.id value must be unique across all cluster nodes. Port 9092 is the Kafka broker/server port; any free port can be used.
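Because broker.id must differ on every node, the edit can be automated when the same setup script is pushed to all three hosts. A minimal sketch (the set_broker_id function name is illustrative, not part of Kafka):

```shell
# set_broker_id <properties-file> <id>
# Replaces an existing broker.id line, or appends one if absent.
set_broker_id() {
  local props="$1" id="$2"
  if grep -q '^broker.id=' "$props"; then
    # Rewrite the existing broker.id line in place.
    sed -i "s/^broker\.id=.*/broker.id=${id}/" "$props"
  else
    # No broker.id yet; append one.
    echo "broker.id=${id}" >> "$props"
  fi
}

# Example: on node 2 of the cluster.
# set_broker_id "${KAFKA_HOME}/config/server.properties" 2
```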
  2. Create a new kafka_server_jaas.conf file under the <KAFKA_HOME>/config directory with the following content.
    KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafkaadmin"
    password="kafka-pwd"
    user_kafkaadmin="kafka-pwd";
    };
    Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="zoo-pwd";
    };
    1. The kafka_server_jaas.conf file is used for authentication.
    2. username="kafkaadmin": kafkaadmin is the username and can be any username.
    3. password="kafka-pwd": kafka-pwd is the password and can be any password.
    4. Both username="kafkaadmin" and password="kafka-pwd" are used for inter-broker communication.
    5. user_kafkaadmin="kafka-pwd": kafkaadmin and kafka-pwd are the username and password used for server-client communication and can be anything.
    6. Under the Client section, username="kafka" and password="zoo-pwd" must match the username and password provided in the zookeeper_jaas.conf file.
  3. Edit the consumer.properties file under the <KAFKA_HOME>/config directory and add the following configurations.
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
  4. Edit the producer.properties file under the <KAFKA_HOME>/config directory and add the following configurations.
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    bootstrap.servers=<KAFKA_CLUSTER_HOST1_IPADDRESS>:9092,<KAFKA_CLUSTER_HOST2_IPADDRESS>:9092,<KAFKA_CLUSTER_HOST3_IPADDRESS>:9092
    compression.type=none
  5. Create a new kafka_client_jaas.conf file under the <KAFKA_HOME>/config directory with the following content.
    KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafkaadmin"
    password="kafka-pwd";
    };
    Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="zoo-pwd";
    };
    1. Under the KafkaClient section, username="kafkaadmin" and password="kafka-pwd" must match the username and password provided in the kafka_server_jaas.conf file.
    2. Under the Client section, username="kafka" and password="zoo-pwd" must match the username and password provided in the zookeeper_jaas.conf file.
  6. Start the Kafka broker by running the following command:
    KAFKA_OPTS="-Djava.security.auth.login.config=<KAFKA_HOME>/config/kafka_server_jaas.conf" <KAFKA_HOME>/bin/kafka-server-start.sh -daemon <KAFKA_HOME>/config/server.properties
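To have the broker start automatically at boot, the same command can be wrapped in a service manager. A sketch of a systemd unit, assuming Kafka is installed at /opt/kafka and runs as a dedicated kafka user (both are assumptions; substitute your own paths and account):

```ini
# /etc/systemd/system/kafka.service (illustrative path and values)
[Unit]
Description=Apache Kafka broker
After=network.target

[Service]
User=kafka
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
# Run in the foreground (no -daemon flag) so systemd can supervise the process.
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```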
  7. Check the status of the Kafka broker service by running any one of the following three commands:
    • ps -aef | grep -v zookeeper | grep kafka or ps -aef | grep server.properties
    • lsof -i:9092
    • netstat -tnlup | grep 9092
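The port check can also be scripted, which is handy when automating the rollout across nodes. A minimal sketch using bash's built-in /dev/tcp pseudo-device (the wait_for_port function name is illustrative):

```shell
# wait_for_port <host> <port> [timeout-seconds]
# Polls once per second until the TCP port accepts a connection.
# Returns 0 on success, 1 if the timeout expires.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}" elapsed=0
  # The connection is opened in a subshell, so it closes automatically.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Example: wait up to 30 s for the local broker to come up.
# wait_for_port 127.0.0.1 9092 30 && echo "broker is listening"
```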
  8. Create a topic after executing steps 1 to 7 on all three cluster nodes. The topic-creation command can be executed from any one cluster node.

    <KAFKA_HOME>/bin/kafka-topics.sh --create --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181 --replication-factor 3 --partitions 1 --topic <Topic Name>

    1. The value for --replication-factor should be equal to the number of nodes in the cluster.
    2. At present, only 1 partition is supported per topic.
  9. Command to list/verify topics:
    <KAFKA_HOME>/bin/kafka-topics.sh --list --zookeeper <KAFKA_CLUSTER_HOST1_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST2_IPADDRESS>:2181,<KAFKA_CLUSTER_HOST3_IPADDRESS>:2181
  10. Command to start a consumer and attach it to a topic. This command should be used only for debugging purposes, to read the content of the topic from the beginning. The output can be redirected to a file.
    <KAFKA_HOME>/bin/kafka-console-consumer.sh --bootstrap-server <KAFKA_CLUSTER_HOST1_IPADDRESS>:9092,<KAFKA_CLUSTER_HOST2_IPADDRESS>:9092,<KAFKA_CLUSTER_HOST3_IPADDRESS>:9092 --topic <Topic Name> --from-beginning --consumer.config <KAFKA_HOME>/config/consumer.properties
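The consumer above reads from the topic; to push a quick test message, the console producer can be wrapped the same way. A sketch (the produce_test_message function name is illustrative; note that for SASL authentication the client JAAS file is supplied through KAFKA_OPTS, just as the server JAAS file is for the broker):

```shell
# produce_test_message <kafka-home> <broker-list> <topic> <message>
# Pipes one message through kafka-console-producer.sh using the client
# JAAS file and producer.properties configured in the steps above.
produce_test_message() {
  local kafka_home="$1" brokers="$2" topic="$3" msg="$4"
  echo "$msg" | KAFKA_OPTS="-Djava.security.auth.login.config=${kafka_home}/config/kafka_client_jaas.conf" \
    "${kafka_home}/bin/kafka-console-producer.sh" \
      --broker-list "$brokers" \
      --topic "$topic" \
      --producer.config "${kafka_home}/config/producer.properties"
}

# Example (placeholder addresses and topic name):
# produce_test_message /opt/kafka 10.0.0.1:9092,10.0.0.2:9092,10.0.0.3:9092 test-topic "hello"
```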