Note: this document deploys Kafka with the chart's default SASL authentication removed and data persistence enabled.
1. Add and update the Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update bitnami

2. Download and extract the Kafka chart
helm pull bitnami/kafka
tar -xf kafka-29.3.14.tgz

3. Modify values.yaml
Below is my modified example. I have removed the redundant comments and default settings, keeping only the parts that were actually changed.
image:
  # Image repo; originally docker.io/bitnami/kafka:3.7.1-debian-12-r4
  registry: registry.cn-hangzhou.aliyuncs.com
  repository: zhaoll/kafka
  tag: 3.7.1-debian-12-r4
  pullPolicy: IfNotPresent
heapOpts: -Xmx1024m -Xms1024m
listeners:
  client:
    containerPort: 9092
    name: CLIENT
    protocol: PLAINTEXT # Originally SASL_PLAINTEXT; I do not need authentication when accessing Kafka, so this is PLAINTEXT
    sslClientAuth: none # Changed to none for the same reason
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: PLAINTEXT # Same as above
    sslClientAuth: none # Same as above
  interbroker:
    containerPort: 9094
    name: INTERNAL
    protocol: PLAINTEXT # Same as above
    sslClientAuth: none # Same as above
  external:
    containerPort: 9095
    name: EXTERNAL
    protocol: PLAINTEXT # Same as above
    sslClientAuth: none # Same as above
  extraListeners: []
  overrideListeners: ""
  advertisedListeners: ""
  securityProtocolMap: ""
sasl: {} # Comment out or delete the big sasl block below this key and replace it with {}
controller:
  replicaCount: 3
  controllerOnly: false
  minId: 0
  zookeeperMigrationMode: false
  heapOpts: -Xmx1024m -Xms1024m
  persistence:
    enabled: true # Set to true to enable persistent storage
    storageClass: nfs-client # Use your own storageClass here so PVs and PVCs are created automatically
    accessModes:
      - ReadWriteOnce
    size: 1Gi # Size is up to you
    mountPath: /bitnami/kafka
  logPersistence:
    enabled: true # Enable log persistence
    storageClass: nfs-client # Use your own storageClass here so PVs and PVCs are created automatically
    accessModes:
      - ReadWriteOnce
    size: 1Gi # Size is up to you
    mountPath: /opt/bitnami/kafka/logs
service:
  type: NodePort # Originally ClusterIP; changed to NodePort
  ports:
    client: 9092
    controller: 9093
    interbroker: 9094
    external: 9095
  nodePorts:
    client: ""
    external: ""
  externalTrafficPolicy: Cluster
  headless:
    controller:
      annotations: {}
      labels: {}
    broker:
      annotations: {}
      labels: {}
kraft:
  enabled: true # Enable KRaft so the cluster does not depend on ZooKeeper
  existingClusterIdSecret: ""
  clusterId: ""
  controllerQuorumVoters: ""
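Before installing, it is worth double-checking that no SASL listener protocols were left behind in the edited file. A minimal sketch of such a check; the here-doc below stands in for your real values.yaml (in practice, grep the actual file):

```shell
# Sketch: scan the edited values file for leftover SASL protocols.
# The here-doc is a stand-in for the real values.yaml.
cat > /tmp/values-check.yaml <<'EOF'
listeners:
  client:
    protocol: PLAINTEXT
  controller:
    protocol: PLAINTEXT
  interbroker:
    protocol: PLAINTEXT
  external:
    protocol: PLAINTEXT
EOF
if grep -q 'SASL' /tmp/values-check.yaml; then
  echo "SASL still configured"
else
  echo "all listeners PLAINTEXT"
fi
```

If any listener still says SASL_PLAINTEXT, unauthenticated clients will be rejected even though `sasl` was set to `{}`.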
4. Run the installation
Run the install command; below is the command together with a rough explanation of its output.
[root@master1 kafka]# helm install kafka -f values.yaml bitnami/kafka
# Install info: mainly the Kafka version and the chart version
NAME: kafka
LAST DEPLOYED: Tue Aug 6 16:30:13 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 29.3.14
APP VERSION: 3.7.1

** Please be patient while the chart is being deployed **

# Kafka can be reached from inside the k8s cluster via the name kafka.default.svc.cluster.local on port 9092
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.default.svc.cluster.local

# Producers can reach Kafka via the following three node names
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092

# Create a pod for accessing Kafka
To create a pod that you can use as a Kafka client run the following commands:

    # This creates the pod
    kubectl run kafka-client --restart='Never' --image registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4 --namespace default --command -- sleep infinity

    # This enters the pod
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        # Run this line inside the pod to start a producer for sending messages
        kafka-console-producer.sh \
            --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        # Run this line inside the pod to start a consumer for receiving messages
        kafka-console-consumer.sh \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
# Because the default image was substituted, a warning appears here
Substituted images detected:
  - registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4

Following the notes above, create a test pod:
kubectl run kafka-client --restart='Never' --image registry.cn-hangzhou.aliyuncs.com/zhaoll/kafka:3.7.1-debian-12-r4 --namespace default --command -- sleep infinity

5. After installation, check the Pods and Services
[root@master1 kafka]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/kafka-client 1/1 Running 0 3m
pod/kafka-controller-0 1/1 Running 0 4m
pod/kafka-controller-1 1/1 Running 0 4m
pod/kafka-controller-2 1/1 Running 0 4m
pod/nfs-client-provisioner-6f5897fd65-28qlw   1/1     Running   0          15d

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kafka                       NodePort    10.108.113.8   &lt;none&gt;        9092:32476/TCP               4m
service/kafka-controller-headless   ClusterIP   None           &lt;none&gt;        9094/TCP,9092/TCP,9093/TCP   4m
service/kubernetes                  ClusterIP   10.96.0.1      &lt;none&gt;        443/TCP                      18d
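Because the service type was changed to NodePort, clients outside the cluster can reach Kafka at any node IP plus the assigned node port (the second number in the PORT(S) column, 32476 above). A small sketch extracting it from the mapping shown:

```shell
# Extract the NodePort from the "9092:32476/TCP" mapping printed by kubectl
mapping="9092:32476/TCP"   # value from the service/kafka line above
node_port=${mapping#*:}    # drop the in-cluster port -> "32476/TCP"
node_port=${node_port%/*}  # drop the protocol       -> "32476"
echo "$node_port"
```

A producer running outside the cluster could then point `--broker-list` at `<nodeIP>:32476` (node IP is hypothetical here), assuming the advertised listeners resolve for external clients.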
6. Create a producer and a consumer to test
Producer:
[root@master1 kafka]# kubectl exec --tty -i kafka-client --namespace default -- bash
I have no name!@kafka-client:/$ kafka-console-producer.sh \
    --broker-list kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092 \
    --topic test
>I am super awesome

Consumer:
[root@master1 kafka]# kubectl exec --tty -i kafka-client --namespace default -- bash
I have no name!@kafka-client:/$ kafka-console-consumer.sh \
    --bootstrap-server kafka.default.svc.cluster.local:9092 \
    --topic test \
    --from-beginning
I am super awesome

The consumer receives the message, so the deployment above is complete.
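The `--broker-list` used in the test follows a fixed naming pattern: `<release>-controller-<i>.<release>-controller-headless.<namespace>.svc.cluster.local:9092`. A small sketch that builds the list for the three replicas, handy if you change the release name, namespace, or replica count (the values below match this document):

```shell
# Build the bootstrap list for a 3-replica KRaft controller StatefulSet
release=kafka
ns=default
port=9092
brokers=""
for i in 0 1 2; do
  addr="${release}-controller-${i}.${release}-controller-headless.${ns}.svc.cluster.local:${port}"
  brokers="${brokers:+${brokers},}${addr}"   # comma-join, no leading comma
done
echo "$brokers"
```

The loop mirrors the pod names a StatefulSet generates, so the same sketch works for any `controller.replicaCount` by extending the loop range.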