
Kafka: Introduction, Basic Principles, and Use Cases


Apache Kafka is a distributed publish-subscribe messaging system. It was originally developed at LinkedIn and later became part of the Apache project. Kafka is a fast, scalable commit-log service that is distributed by design, partitioned, and replicated.

Kafka is often deployed alongside other open-source distributed systems, for example Flume (data collection and aggregation), Storm (real-time stream processing), Spark (in-memory data processing), and Elasticsearch (full-text search).

[Figure: a comparison table of these distributed systems appeared here in the original post]

Regarding the dynamic scaling mentioned in the comparison above: Kafka currently implements it through ZooKeeper. ZooKeeper is a cluster service that provides distributed state management, distributed configuration management, and distributed lock services.

The AMQP Protocol

Kafka's design borrows its core concepts from the AMQP protocol.
Basic Concepts

 
  • Consumer: a client application that fetches messages from the message queue.
  • Producer: a client application that publishes messages to the broker.
  • Broker (the AMQP server): receives messages sent by producers and routes them to queues within the server.
  • Topic: a stream of messages of a particular type. A message is a payload of bytes, and a topic is the category or feed name the messages are published under, much like the sports, entertainment, or education sections of a news site. In practice there is usually one topic per business domain.
  • Partition: the messages in a topic are organized into partitions, the smallest unit of organization in a Kafka message queue; each partition can be viewed as a FIFO queue. (These concepts are illustrated in the sketch after this list.)
  • Replication: to guarantee high availability, Kafka (since 0.8) replicates the data of each partition, so that one broker going down does not make the partition's data unavailable.
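These concepts map directly onto Kafka's topic-management CLI. A minimal sketch, assuming a running cluster reachable through ZooKeeper at localhost:2181 and a hypothetical topic named demo-events:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --partitions 3 --replication-factor 2 --topic demo-events
# each of the 3 partitions is an ordered FIFO queue, and each is stored on 2 brokers
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic demo-events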

ZooKeeper Configuration

Configuring ZooKeeper first requires setting JAVA_HOME. Note how JAVA_HOME is configured:

export JAVA_HOME=/usr/local/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH

The existing system PATH is appended at the end because the shell searches PATH from front to back: if some other JDK is already on the PATH, putting $JAVA_HOME/bin first ensures the JDK we configured is the one found.
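A quick way to confirm that the intended JDK wins the PATH lookup (a minimal check, assuming the exports above were added to /etc/profile):

eversilver@debian:~$ source /etc/profile
eversilver@debian:~$ which java
/usr/local/jdk1.8/bin/java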
With the Java environment variables in place, configure the zoo.cfg file under ZooKeeper's conf/ directory: copy zoo_sample.cfg to zoo.cfg, then edit it as follows:

# The number of milliseconds of each tick
# i.e. the basic time unit used by the tick-based settings below
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
# 10 ticks = 20 s; the machines in the ensemble must start and connect within this window
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
# heartbeat timeout, in ticks, between the leader and a follower
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# dataDir and dataLogDir are set below; note that the two should not point
# to the same directory, or ZooKeeper performance will suffer.
dataDir=/home/eversilver/silverTest/kafka/zkData
# the transaction logs are stored here; they accumulate quickly and need to be
# cleaned up regularly, or the leftover files will slow down responses.
# The official documentation gives the exact cleanup procedure.
dataLogDir=/home/eversilver/silverTest/kafka/zookeeper/zkLog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Configure the ZooKeeper server ensemble; an odd number of machines is
# recommended. The current machine's id is 1; add one line per machine
# (three machines here).
# server.<id> = <machine IP>:<leader-follower communication port>:<leader election port>
# Note: these ports should match across all machines in the ensemble.
server.1=192.168.142.133:12888:13888
server.2=192.168.142.134:12888:13888
server.3=192.168.142.135:12888:13888

After zoo.cfg is configured, create a myid file in the dataDir directory named in the cfg file. It holds the id of the current machine and must match the number after server. in the configuration:

eversilver@debian:/usr/local/zookeeper$ cat ~/silverTest/kafka/zookeeper/zkData/myid 
1
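On the other two nodes the file can be created the same way with a single echo (a sketch; the path is the dataDir configured in zoo.cfg, and the id must match that machine's server. entry):

echo 2 > /home/eversilver/silverTest/kafka/zkData/myid   # on server.2; write 3 on server.3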

Then start ZooKeeper on this machine; do the same on every machine in the ensemble:

eversilver@debian:/usr/local/zookeeper$ ./bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
eversilver@debian:/usr/local/zookeeper$ jps
25846 Jps
25822 QuorumPeerMain
eversilver@debian:/usr/local/zookeeper$

With ZooKeeper started on all three virtual machines, zkServer.sh status shows each node's role:

# debian
eversilver@debian:/usr/local/zookeeper$ ./bin/zkServer.sh status
/usr/local/jdk1.8/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

#debian2
eversilver@debian:/usr/local/zookeeper$ ./bin/zkServer.sh status
/usr/local/jdk1.8/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

#debian3
eversilver@debian:/usr/local/zookeeper$ ./bin/zkServer.sh status
/usr/local/jdk1.8/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

Of the three machines above, debian2 was elected leader.
The leader's main job is to accept requests from clients and respond to them.
The followers' main jobs are to sync data from the leader and, when the leader goes down, to vote to elect a new leader for the cluster.
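Each node's role can also be checked remotely with ZooKeeper's four-letter commands (a sketch; assumes the stat command is reachable, as it is by default on 3.4.x):

echo stat | nc 192.168.142.134 2181 | grep Mode
Mode: leader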

The important files in the cluster setup are:
myid: identifies each machine so the ZooKeeper nodes can recognize one another
zoo.cfg: the cluster configuration file
log4j.properties: the ZK cluster's log output configuration, also under conf/
zkEnv.sh and zkServer.sh: environment setup and server startup scripts, respectively

Note: the zkCleanup.sh script under bin/ can quickly clean up the logs ZooKeeper generates; here we run it periodically via crontab.
Use crontab -l to check whether the scheduled task exists and crontab -e to edit it, as follows:

eversilver@debian:/usr/local/zookeeper$ crontab -e
# fields: minute, hour, day of month, month, day of week, then the command
# run every Sunday at midnight; -n 3 keeps the three most recent snapshots
0 0 * * 0 /usr/local/zookeeper/bin/zkCleanup.sh -n 3
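Alternatively, since ZooKeeper 3.4 the same cleanup can be delegated to the built-in autopurge feature instead of cron; a sketch that simply enables the settings already present (commented out) in zoo.cfg above:

autopurge.snapRetainCount=3   # keep the 3 most recent snapshots and their logs
autopurge.purgeInterval=24    # run the purge task every 24 hours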

Kafka Cluster Setup

After unpacking Kafka, open the configuration file server.properties. The main items to configure are broker.id, port, host.name, log.dirs, num.partitions, message.max.bytes, default.replication.factor, replica.fetch.max.bytes, and zookeeper.connect. The fully configured file looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# analogous to ZooKeeper's myid
broker.id=1

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092


# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
port=19092
host.name=192.168.128.128
# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
# when this is increased, log.dirs below is usually given several directories
# (comma-separated) so that each thread serves its own directory, which
# improves performance considerably
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server; mainly a performance knob
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# this value must not exceed the JVM heap size
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/home/eversilver/silverTest/kafka/kafka/kafkaLogs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# default number of partitions per topic
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
# message retention period: 168 hours = 7 days
log.retention.hours=168

# maximum size of a single message Kafka will accept
message.max.bytes=5048576
# default number of replicas (copies) kept for each partition in the cluster
default.replication.factor=2
# maximum number of bytes fetched per replica request, roughly 5 MB
replica.fetch.max.bytes=5048576


# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
# once a segment file reaches this size, writing rolls over to a new file instead of appending
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
# how often, in ms, to check for messages past the retention period (168 hours above) and delete them
log.retention.check.interval.ms=300000
# whether to enable log compaction
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.128.128:2181,192.168.128.129:2181,192.168.128.130:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
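The same file is then copied to the other two machines, changing only the per-broker values. A sketch of what differs on each node, assuming the IPs follow the pattern above:

# on 192.168.128.129:
broker.id=2
host.name=192.168.128.129
# on 192.168.128.130:
broker.id=3
host.name=192.168.128.130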

Once all three machines are configured, Kafka can be started on each with ./bin/kafka-server-start.sh -daemon ./config/server.properties. After starting, jps shows:

eversilver@debian:/usr/local/kafka$ jps
10273 Jps
8994 QuorumPeerMain
10026 Kafka

A quick example verifies that the configuration works. Create a topic:

eversilver@debian:/usr/local/kafka$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test
Created topic "test".
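Note that the replication factor cannot exceed the number of live brokers registered under /brokers/ids; asking for more fails immediately. A sketch of the resulting error, assuming only two brokers are up:

eversilver@debian:/usr/local/kafka$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test3
Error while executing topic command : replication factor: 3 larger than available brokers: 2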

List the current topics:

eversilver@debian:/usr/local/kafka$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test # the test topic now exists

Start a Kafka console producer:

eversilver@debian:/usr/local/kafka$ bin/kafka-console-producer.sh --broker-list 192.168.128.128:19092 --topic test

Start a Kafka console consumer on another machine:

eversilver@debian:/usr/local/kafka$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.129:19092 --topic test --from-beginning

After typing "hello" on the producer side, "hello" appears on the consumer side as expected.
A topic's basic information can likewise be inspected:

eversilver@debian:/usr/local/kafka$ ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
Topic: test    Partition: 0    Leader: 1    Replicas: 1    Isr: 1

In Kafka's log directory, server.log is the broker's main log, state-change.log records state transitions such as partition leader switches between machines in the cluster, and controller.log holds the output of the cluster's controller.

Once ZooKeeper is running, ./bin/zkCli.sh -server 127.0.0.1:2181 opens the ZooKeeper client; the ls / command shows ZooKeeper's internal state:

[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, zookeeper]

Everything above except the zookeeper node was created by Kafka. The broker-related information can be inspected:

[zk: 127.0.0.1:2181(CONNECTED) 12] ls /brokers
[ids, seqid, topics]
[zk: 127.0.0.1:2181(CONNECTED) 13] ls /brokers/ids
[1, 2]
[zk: 127.0.0.1:2181(CONNECTED) 14] ls /brokers/ids/1
[]
[zk: 127.0.0.1:2181(CONNECTED) 15] get /brokers/ids/1
{"jmx_port":-1,"timestamp":"1494050585414","endpoints":["PLAINTEXT://192.168.128.128:19092"],"host":"192.168.128.128","version":3,"port":19092}

This shows broker 1's registration: its startup timestamp, endpoints and port, and version.

[zk: 127.0.0.1:2181(CONNECTED) 16] ls /brokers/topics
[__consumer_offsets, my-first-kafka-topic, test]
[zk: 127.0.0.1:2181(CONNECTED) 17] get /brokers/topics/test
{"version":1,"partitions":{"0":[1]}}

This lists all existing topics. The test topic, for example, has a single partition whose replica list is [1] (its only replica lives on broker 1), and its metadata version is 1.
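The controller and controller_epoch nodes seen in the earlier ls / output can be read the same way; a sketch, with the output shape reproduced from memory, so treat the exact values as illustrative:

[zk: 127.0.0.1:2181(CONNECTED) 18] get /controller
{"version":1,"brokerid":1,"timestamp":"1494050585500"}
[zk: 127.0.0.1:2181(CONNECTED) 19] get /controller_epoch
1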

The consumer.properties file under Kafka's config folder contains the following settings:

eversilver@debian:/usr/local/kafka$ cat ./config/consumer.properties 
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.consumer.ConsumerConfig for more details

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=127.0.0.1:2181

# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

# consumer group id
# a group id names one consumer group over a topic's partitions. If a topic is
# consumed under two different group ids, the two consumer groups consume the
# topic independently, i.e. every message is delivered once to each group.
group.id=test-consumer-group

#consumer timeout
#consumer.timeout.ms=5000
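The effect of group.id is easy to see with two console consumers. A sketch, assuming a Kafka version whose console consumer accepts --consumer-property (otherwise pass a properties file via --consumer.config):

# terminal 1: a consumer in group groupA
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.128:19092 --topic test --consumer-property group.id=groupA
# terminal 2: a consumer in group groupB also receives every message sent to test
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.128:19092 --topic test --consumer-property group.id=groupB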

The producer.properties file under Kafka's config folder deserves similar attention.
Note Kafka's configuration precedence: settings made programmatically rank highest, then settings passed on the command line at startup, and lowest are those in the configuration files.
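For example, a broker setting can be overridden at startup without editing server.properties; a sketch using kafka-server-start.sh's --override flag (available in recent Kafka versions):

./bin/kafka-server-start.sh -daemon ./config/server.properties --override num.partitions=4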

References:
http://kafka.apache.org/quickstart : Kafka quickstart
http://kafka.apache.org/documentation/#brokerconfigs : broker configuration options and other settings
