Kafka Cluster Installation

Prerequisites

  • JDK installed
  • ZooKeeper installed, with the ZK service running normally

Download and Extract the Installation Package

Download the installation package from the address below. Run the following commands on hadoop01 to download and extract it:

cd /export/softwares
wget http://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/
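The `-C` flag tells `tar` to extract into the given directory rather than the current one. A minimal sketch of the same pattern, using a throwaway archive in a temp directory (the `kafka_demo` name is illustrative only):

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/src/kafka_demo" "$work/dest"
echo "hello" > "$work/src/kafka_demo/readme.txt"
tar -zcf "$work/demo.tgz" -C "$work/src" kafka_demo   # pack a demo archive
tar -zxf "$work/demo.tgz" -C "$work/dest"             # -C: extract into dest/, not $PWD
listing=$(ls "$work/dest")
content=$(cat "$work/dest/kafka_demo/readme.txt")
echo "$listing"    # kafka_demo
echo "$content"    # hello
rm -rf "$work"
```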

Edit the Kafka Configuration File on hadoop01

Run the following commands on hadoop01:

mkdir -p  /export/servers/kafka_2.11-1.0.0/logs 
cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
# broker.id must be unique for every broker in the cluster
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# the data/log directory created above
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# all three ZooKeeper nodes
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
# per-node: this broker's own hostname
host.name=hadoop01
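Rather than editing `server.properties` by hand in vim, the per-broker values can be patched with `sed`. A sketch against a throwaway copy (the property names are the ones listed above; the three-line file is a stand-in for the real config):

```shell
set -e
# Patch broker.id and host.name in a throwaway server.properties copy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
broker.id=0
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
host.name=hadoop01
EOF
# -i edits in place; the ^key= anchor ensures only the intended line changes
sed -i 's/^broker\.id=.*/broker.id=1/'        "$conf"
sed -i 's/^host\.name=.*/host.name=hadoop02/' "$conf"
patched=$(cat "$conf")
echo "$patched"
rm -f "$conf"
```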

Distribute Kafka to the Other Nodes

Run the following commands on hadoop01 to copy its Kafka installation directory to hadoop02 and hadoop03:

cd /export/servers/
scp -r kafka_2.11-1.0.0/ hadoop02:$PWD
scp -r kafka_2.11-1.0.0/ hadoop03:$PWD
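The two `scp` commands follow the same pattern, so they can be written as a loop. The sketch below only prints each command (a dry run), since actually copying assumes passwordless SSH from hadoop01 to the other nodes; drop the `echo` inside the loop to run it for real:

```shell
# Dry run: print the scp command for each target node instead of executing it.
dest=/export/servers
cmds=$(for host in hadoop02 hadoop03; do
  echo "scp -r $dest/kafka_2.11-1.0.0/ $host:$dest/"
done)
echo "$cmds"
```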

Edit the Configuration Files on hadoop02 and hadoop03

Run the following commands on hadoop02 to edit the Kafka configuration file:

cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
broker.id=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=hadoop02

Run the following commands on hadoop03 to edit the Kafka configuration file:

cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
broker.id=2
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=hadoop03
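Across the three nodes, only `broker.id` and `host.name` differ. Since the hosts follow a hadoopNN naming scheme, the broker id can be derived from the hostname, which avoids copy-paste mistakes; a sketch (the hadoopNN → NN−1 mapping is an assumption based on the ids used above):

```shell
# Map hadoop01 -> broker.id 0, hadoop02 -> 1, hadoop03 -> 2.
derive_id() {
  n=${1#hadoop}   # "hadoop02" -> "02"
  n=${n#0}        # drop a leading zero -> "2" (avoids octal surprises)
  echo $((n - 1))
}
echo "$(derive_id hadoop01) $(derive_id hadoop02) $(derive_id hadoop03)"   # 0 1 2
```

On each node this could feed the same `sed` edit, e.g. `sed -i "s/^broker\.id=.*/broker.id=$(derive_id "$(hostname)")/" server.properties`.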

Starting and Stopping the Kafka Cluster

Note: start ZooKeeper before starting Kafka.

Run the following commands on each of the three nodes to start the Kafka process in the background:

cd /export/servers/kafka_2.11-1.0.0
nohup bin/kafka-server-start.sh config/server.properties 2>&1 &
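In that start command, `nohup` detaches the broker from the terminal so it survives logout, `2>&1` merges stderr into stdout, and the trailing `&` backgrounds the process; `nohup` appends terminal output to `nohup.out` by default. The same pattern with a harmless command, redirecting to an explicit log file instead of relying on `nohup.out` (file names are illustrative):

```shell
work=$(mktemp -d)
# nohup + & : keep running after logout, in the background;
# 2>&1 folds stderr into the same log as stdout.
nohup sh -c 'echo started; echo oops >&2' > "$work/broker.log" 2>&1 &
wait $!                       # wait for the background job to finish
log=$(cat "$work/broker.log")
echo "$log"                   # both streams captured: started / oops
rm -rf "$work"
```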

Run the following commands on each of the three nodes to stop the Kafka cluster:

cd /export/servers/kafka_2.11-1.0.0
bin/kafka-server-stop.sh