Setting up a 4-node ZooKeeper cluster

Download page (note: current releases ship a separate bin package, which is the one to use): https://zookeeper.apache.org/releases.html

Prepare four hosts; their hostnames and IP addresses are as follows (an /etc/hosts sketch follows the list):

  • server1 : 192.168.1.111

  • server2 : 192.168.1.112

  • server3 : 192.168.1.113

  • server4 : 192.168.1.114
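
So that these hostnames resolve on every machine, one option is to map them in /etc/hosts on each host (a sketch; adapt it to your own name resolution):

192.168.1.111 server1
192.168.1.112 server2
192.168.1.113 server3
192.168.1.114 server4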

ZooKeeper is written in Java, so before running it make sure a working Java runtime is installed and that the JAVA_HOME environment variable is set correctly.
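
A quick sanity check on each host:

java -version
echo $JAVA_HOME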

Then extract the downloaded ZooKeeper archive to the target path:

tar -zxvf apache-zookeeper-3.5.9-bin.tar.gz

Enter the extracted directory; ZooKeeper's layout looks like this:

  • bin: the various executable scripts

  • conf: ZooKeeper's configuration files

  • lib: the libraries ZooKeeper depends on at runtime

  • logs: ZooKeeper's runtime logs

Enter the conf folder and copy zoo_sample.cfg to zoo.cfg; zoo.cfg is the configuration file ZooKeeper reads by default on startup.
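
For example, from the extracted directory:

cd apache-zookeeper-3.5.9-bin/conf
cp zoo_sample.cfg zoo.cfg

The default contents of zoo.cfg: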

# The number of milliseconds of each tick
# (the interval between server heartbeats)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# (a follower connecting to the leader may lag by at most 10 * tickTime = 20s)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# (a follower that fails to acknowledge the leader within 5 * tickTime = 10s
# is considered unhealthy)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# (holds ZooKeeper snapshots and related data; here we use /var/zk
# instead of the default)
# dataDir=/tmp/zookeeper
dataDir=/var/zk
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Edit zoo.cfg and append the following lines, which define the node list of the cluster:

server.1=server1:2888:3888
server.2=server2:2888:3888
server.3=server3:2888:3888
server.4=server4:2888:3888

  • When a ZooKeeper server starts, it opens port 3888 first; the nodes communicate over 3888 to run leader election.

  • Once a leader is elected, the other nodes, the followers, actively connect to the leader's port 2888.

Then create a myid file under /var/zk; its content is the node's id within the cluster. For example, on the server4 machine you would create /var/zk/myid with content 4:

[root@server4 zk]# cat /var/zk/myid
4
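
A minimal way to create it (run on server4; use the matching id on each of the other nodes):

mkdir -p /var/zk
echo 4 > /var/zk/myid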

Once all four servers are set up this way, the ZooKeeper cluster is complete.
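
Each node is then started with the script under bin; zkServer.sh status reports whether the node came up as leader or follower:

bin/zkServer.sh start
bin/zkServer.sh status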

Setting up a local test cluster with docker-compose
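
The official zookeeper image configures each node from environment variables: ZOO_MY_ID sets the node's id and ZOO_SERVERS the server list, so a 3-node test cluster needs nothing more than a docker-compose.yml like the following: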

version: '3.1'

services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
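
To bring the cluster up and verify each node's role (a sketch, assuming zkServer.sh is on the image's PATH; docker-compose exec targets the service names above):

docker-compose up -d
docker-compose exec zoo1 zkServer.sh status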
