Kafka cluster deployment

Contents
  Zookeeper deployment preparation
  2. Kafka deployment preparation
  3. Write the docker-compose.yml file
  4. Start the services
  5. Test Kafka
  6. Web monitoring and management

Zookeeper deployment preparation

mkdir -p data/zookeeper-{1,2,3}/{data,datalog,logs,conf}

cat > data/zookeeper-1/conf/zoo.cfg <<'EOF'
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
# These entries match the docker containers created by docker-compose below
server.1=zookeeper-1:2888:3888
server.2=zookeeper-2:2888:3888
server.3=zookeeper-3:2888:3888
EOF

cat > data/zookeeper-1/conf/log4j.properties <<'EOF'
# Copyright 2012 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# License); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an AS IS BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/logs
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.log.maxfilesize=256MB
zookeeper.log.maxbackupindex=20
zookeeper.tracelog.dir=${zookeeper.log.dir}
zookeeper.tracelog.file=zookeeper_trace.log

log4j.rootLogger=${zookeeper.root.logger}

#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=${zookeeper.log.maxfilesize}
log4j.appender.ROLLINGFILE.MaxBackupIndex=${zookeeper.log.maxbackupindex}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
# Log TRACE level and above messages to a log file
#
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

#
# zk audit logging
#
zookeeper.auditlog.file=zookeeper_audit.log
zookeeper.auditlog.threshold=INFO
audit.logger=INFO, RFAAUDIT
log4j.logger.org.apache.zookeeper.audit.Log4jAuditLogger=${audit.logger}
log4j.additivity.org.apache.zookeeper.audit.Log4jAuditLogger=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${zookeeper.log.dir}/${zookeeper.auditlog.file}
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.Threshold=${zookeeper.auditlog.threshold}

# Max log file size of 10MB
log4j.appender.RFAAUDIT.MaxFileSize=10MB
log4j.appender.RFAAUDIT.MaxBackupIndex=10
EOF

The same zoo.cfg and log4j.properties can be copied into data/zookeeper-2/conf and data/zookeeper-3/conf: the per-node server id is supplied by the ZOO_MY_ID environment variable in docker-compose, so the files are identical across nodes.

2. Kafka deployment preparation
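The compose file in the next section mounts ./data/kafka1, ./data/kafka2, and ./data/kafka3 into the broker containers, so those host directories should exist first. A minimal sketch (directory names taken from the volumes entries below):

```shell
# Host directories mounted as /bitnami/kafka by the k1/k2/k3 services
mkdir -p data/kafka1 data/kafka2 data/kafka3
```

The compose file runs the broker containers as root (user: root), so no ownership change is needed here; without that setting, the bitnami images expect the data directory to be writable by uid 1001.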
3. Write the docker-compose.yml file
version: "3"
# Zookeeper cluster configuration.
# Each service below corresponds to one zk node's docker container.
# The cluster gets its own bridge network, named zookeeper-net.
networks:
  zookeeper-net:
    name: zookeeper-net
    driver: bridge
services:
  zookeeper-1:
    image: zookeeper
    container_name: zookeeper-1
    restart: always
    # Map container ports to the host
    ports:
      - 2181:2181
      - 8081:8080
    # Mount container paths onto the host so data is shared between them
    volumes:
      - ./data/zookeeper-1/data:/data
      - ./data/zookeeper-1/datalog:/datalog
      - ./data/zookeeper-1/logs:/logs
      - ./data/zookeeper-1/conf:/conf
    environment:
      # Id of this zk instance
      ZOO_MY_ID: 1
      # Host/port list for the whole zk ensemble
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888 server.2=zookeeper-2:2888:3888 server.3=zookeeper-3:2888:3888
    networks:
      - zookeeper-net
  zookeeper-2:
    image: zookeeper
    container_name: zookeeper-2
    restart: always
    ports:
      - 2182:2181
      - 8082:8080
    volumes:
      - ./data/zookeeper-2/data:/data
      - ./data/zookeeper-2/datalog:/datalog
      - ./data/zookeeper-2/logs:/logs
      - ./data/zookeeper-2/conf:/conf
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888 server.2=zookeeper-2:2888:3888 server.3=zookeeper-3:2888:3888
    networks:
      - zookeeper-net
  zookeeper-3:
    image: zookeeper
    container_name: zookeeper-3
    restart: always
    ports:
      - 2183:2181
      - 8083:8080
    volumes:
      - ./data/zookeeper-3/data:/data
      - ./data/zookeeper-3/datalog:/datalog
      - ./data/zookeeper-3/logs:/logs
      - ./data/zookeeper-3/conf:/conf
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888 server.2=zookeeper-2:2888:3888 server.3=zookeeper-3:2888:3888
    networks:
      - zookeeper-net
  k1:
    image: bitnami/kafka:3.2.0
    restart: always
    container_name: k1
    user: root
    ports:
      - 9092:9092
      - 9999:9999
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_BROKER_ID=0
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.10.111.33:9092
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_NUM_PARTITIONS=3
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
      - JMX_PORT=9999
    volumes:
      - ./data/kafka1:/bitnami/kafka:rw
    networks:
      - zookeeper-net
  k2:
    image: bitnami/kafka:3.2.0
    restart: always
    container_name: k2
    user: root
    ports:
      - 9093:9092
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_BROKER_ID=1
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.10.111.33:9093
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_NUM_PARTITIONS=3
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    volumes:
      - ./data/kafka2:/bitnami/kafka:rw
    networks:
      - zookeeper-net
  k3:
    image: bitnami/kafka:3.2.0
    restart: always
    container_name: k3
    user: root
    ports:
      - 9094:9092
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_BROKER_ID=2
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.10.111.33:9094
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_NUM_PARTITIONS=3
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    volumes:
      - ./data/kafka3:/bitnami/kafka:rw
    networks:
      - zookeeper-net
  kafka-manager:
    image: hlebalbau/kafka-manager
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    network_mode: zookeeper-net
    ports:
      - 9000:9000
    environment:
      ZK_HOSTS: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      KAFKA_BROKERS: k1:9092,k2:9092,k3:9092
      APPLICATION_SECRET: letmein
      KAFKA_MANAGER_AUTH_ENABLED: "true"  # enable authentication
      KAFKA_MANAGER_USERNAME: admin       # username
      KAFKA_MANAGER_PASSWORD: admin       # password
      KM_ARGS: -Djava.net.preferIPv4Stack=true
Kafka configuration explained

The environment block below is a KRaft-mode variant (Kafka without Zookeeper), annotated key by key:

environment:
  ### common settings
  # Allow KRaft, i.e. let Kafka run without Zookeeper
  - KAFKA_ENABLE_KRAFT=yes
  # This node acts both as broker and as controller
  - KAFKA_CFG_PROCESS_ROLES=broker,controller
  # Listener name used for controller (metadata quorum) requests
  - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
  # Socket listeners of the Kafka server
  - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
  # Security protocol for each listener
  - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
  # Cluster id; every node in the cluster must be initialized with the same id (any generated UUID works)
  - KAFKA_KRAFT_CLUSTER_ID=LelMdIFQkiUFvXCEcqRWA
  # Controller quorum voters (id@host:port)
  - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka11:9093,2@kafka22:9093,3@kafka33:9093
  # Allow the PLAINTEXT listener (defaults to false; not recommended for production)
  - ALLOW_PLAINTEXT_LISTENER=yes
  # Max and initial heap size of the broker
  - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
  # Do not auto-create topics
  - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=false
  ### broker settings
  # Externally advertised address (host ip and port)
  - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.54:9292
  # broker.id must be unique
  - KAFKA_BROKER_ID=

4. Start the services
docker compose up -d

5. Test Kafka
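The brokers take a few seconds to come up after docker compose up -d, so commands run against them immediately may fail. One way to avoid racing them is to poll the mapped ports until they accept TCP connections; a minimal stdlib sketch (the 10.10.111.33 address and 9092-9094 ports are the compose mappings used in this tutorial):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll host:port until a TCP connect succeeds, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the broker's listener is accepting
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(1.0)
    return False
```

Usage: check wait_for_port("10.10.111.33", p) for p in 9092, 9093, 9094 before running the topic commands below.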
# Enter a Kafka container, create a topic, and check that the newly created topic exists;
# if it does, the Kafka cluster is working.
docker exec -it k1 bash
# create a topic
kafka-topics.sh --create --bootstrap-server 10.10.111.33:9092 --replication-factor 1 --partitions 3 --topic ODSDataSync
# describe the topic
kafka-topics.sh --bootstrap-server 10.10.111.33:9092 --describe --topic ODSDataSync
# verify from another broker
docker exec -it k2 bash

Note: the ports mappings must stay consistent with the ports used under environment; each broker's KAFKA_ADVERTISED_LISTENERS must advertise the host-side port of its mapping.
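Concretely, k2 in the compose file above shows the pairing this note describes: the container listener stays on 9092, while both the host mapping and the advertised listener carry 9093:

```yaml
k2:
  ports:
    - 9093:9092                 # host port 9093 -> container listener 9092
  environment:
    - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092                  # bound inside the container
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.10.111.33:9093  # host ip + host port given to clients
```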
6. Web monitoring and management

Open http://192.168.1.36:9000 in a browser and log in with the username and password configured above to reach the monitoring page.

Then add the Kafka cluster in the Kafka Manager UI to enable monitoring of it.