Kubernetes Binary Cluster Deployment

Table of Contents

- Resource List
- Basic Environment
- 1. Environment Preparation
  - 1.1 Configure host mappings
  - 1.2 Install Docker on all hosts
  - 1.3 Configure the iptables firewall on all hosts
- 2. Generate TLS Certificates
  - 2.1 Generate the CA certificate on the master
    - 2.1.1 Create the certificate directory and install the certificate tools
    - 2.1.2 Copy the certificate generation scripts
    - 2.1.3 Generate the CA certificate
  - 2.2 Generate the server certificate on the master
  - 2.3 Generate the admin certificate on the master
  - 2.4 Generate the kube-proxy certificate on the master
  - 2.5 View all certificates
- 3. Deploy the Etcd cluster on the master
  - 3.1 Prepare the etcd base environment
  - 3.2 Deploy the Etcd node on the master host
  - 3.3 Copy the certificates Etcd depends on
  - 3.4 Start the Etcd master node
- 4. Deploy Etcd nodes on node1 and node2
  - 4.1 Copy the Etcd configuration file to the nodes
  - 4.2 Copy the startup script
  - 4.3 Start Etcd on node1 and node2
  - 4.4 Check the Etcd cluster status from the master
- 5. Deploy the Flannel network
  - 5.1 Write the subnet allocation to Etcd
  - 5.2 Configure Flannel
  - 5.3 Configure the Flanneld startup script
  - 5.4 Configure Docker to use the Flannel subnet
  - 5.5 Start Flannel
  - 5.6 Verify that Flanneld is working
- 6. Deploy the Kubernetes master components (v1.18.20)
  - 6.1 Add the kubectl command environment
  - 6.2 Create the TLS Bootstrapping Token on the master
  - 6.3 Create the kubelet kubeconfig on the master
    - 6.3.1 Set the cluster parameters
    - 6.3.2 Set the client authentication parameters
    - 6.3.3 Set the context parameters
    - 6.3.4 Set the default context
  - 6.4 Create the kube-proxy kubeconfig on the master
  - 6.5 Deploy kube-apiserver on the master
  - 6.6 Deploy kube-controller-manager on the master
  - 6.7 Deploy kube-scheduler on the master
  - 6.8 Check that the master components are running
- 7. Deploy the Kubernetes node components
  - 7.1 Prepare the environment (on k8s-master)
  - 7.2 Deploy kubelet on node1 and node2
  - 7.3 Deploy kube-proxy on node1 and node2
  - 7.4 Check that the node1 and node2 components are running
- 8. Approve the automatically issued certificates

Resource List
| OS         | Spec | Hostname   | IP             | Software  | Role                                                                  |
|------------|------|------------|----------------|-----------|-----------------------------------------------------------------------|
| CentOS 7.9 | 2C4G | k8s-master | 192.168.93.101 | Docker CE | Master: kube-apiserver, kube-controller-manager, kube-scheduler, Etcd |
| CentOS 7.9 | 2C4G | k8s-node1  | 192.168.93.102 | Docker CE | Node: kubelet, kube-proxy, Flannel, Etcd                              |
| CentOS 7.9 | 2C4G | k8s-node2  | 192.168.93.103 | Docker CE | Node: kubelet, kube-proxy, Flannel, Etcd                              |
Basic Environment

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Set the hostnames
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

1. Environment Preparation
Perform the following steps on all three hosts; k8s-master is used as the example.
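The basic-environment steps above are identical on every host, so they can be collected into a small script and run once per machine. A minimal sketch (run as `sh prep.sh <hostname>`); the `CMD=echo` dry-run guard is an illustrative addition so the sketch can be exercised without root:

```shell
# Dry-run wrapper: with CMD=echo the commands are only printed; set CMD=""
# on a real host to actually execute them.
CMD=echo
NEW_HOSTNAME="${1:-k8s-master}"

$CMD systemctl stop firewalld
$CMD systemctl disable firewalld
$CMD setenforce 0
$CMD sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
$CMD hostnamectl set-hostname "$NEW_HOSTNAME"

DONE="prepared $NEW_HOSTNAME"
echo "$DONE"
```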
1.1 Configure host mappings

[root@k8s-master ~]# cat >> /etc/hosts <<EOF
192.168.93.101 k8s-master
192.168.93.102 k8s-node1
192.168.93.103 k8s-node2
EOF

1.2 Install Docker on all hosts
Install and configure Docker on all hosts; k8s-master is used as the example.

# Install dependencies and common tools
[root@k8s-master ~]# yum -y install iptable* wget telnet lsof vim rsync lrzsz net-tools unzip yum-utils device-mapper-persistent-data lvm2

# Add the Aliyun YUM repository
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Rebuild the yum cache
[root@k8s-master ~]# yum makecache fast

# Install the latest docker-ce (a specific version can be chosen, but keep it compatible with the Kubernetes version)
[root@k8s-master ~]# yum -y install docker-ce

# Configure a Docker registry mirror
[root@k8s-master ~]# cd /etc/docker/
[root@k8s-master docker]# cat > daemon.json <<EOF
{
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF
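A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth syntax-checking the file before restarting. A small sketch using python3's bundled json.tool; the temporary file keeps the example self-contained (on a real host, point CONF at /etc/docker/daemon.json):

```shell
# Write a sample daemon.json to a temp file, then syntax-check it the same
# way /etc/docker/daemon.json would be checked before restarting Docker.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
{
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF

if python3 -m json.tool "$CONF" > /dev/null 2>&1; then
    RESULT="valid"
else
    RESULT="invalid"
fi
echo "daemon.json is $RESULT"
rm -f "$CONF"
```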
[root@k8s-master docker]# systemctl restart docker

1.3 Configure the iptables firewall on all hosts
When K8S creates containers it needs to generate iptables rules, so the CentOS 7.9 default Firewalld (already stopped above) is replaced with iptables. Configure the firewall on all hosts; k8s-master is used as the example.

[root@k8s-master ~]# systemctl start iptables
[root@k8s-master ~]# systemctl enable iptables

# Flush all existing rules first
[root@k8s-master ~]# iptables -F

# Add a rule to accept traffic from the 192.168.93.0/24 network
[root@k8s-master ~]# iptables -I INPUT -s 192.168.93.0/24 -j ACCEPT

2. Generate TLS Certificates
The Kubernetes components encrypt their communication with TLS certificates. This lab uses CloudFlare's PKI toolkit, CFSSL, to generate a Certificate Authority (CA) and the other certificates.

2.1 Generate the CA certificate on the master

2.1.1 Create the certificate directory and install the certificate tools
[root@k8s-master ~]# mkdir -p /root/software/ssl
[root@k8s-master ~]# cd /root/software/ssl

# Download the certificate tool binaries
[root@k8s-master ssl]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ssl]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ssl]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

cfssl_linux-amd64: the main CFSSL program, which performs the various TLS certificate tasks
cfssljson_linux-amd64: a helper that parses and converts the JSON output produced by CFSSL
cfssl-certinfo_linux-amd64: a tool that displays information about TLS certificates

# Make the downloads executable
[root@k8s-master ssl]# chmod +x *

# Move the binaries into the PATH so the tools can be invoked directly
[root@k8s-master ssl]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ssl]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ssl]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

# If the help text is shown, the steps above worked
[root@k8s-master ssl]# cfssl --help
Usage:
Available commands:
	bundle serve ocspsign scan info gencert gencrl ocsprefresh print-defaults version genkey selfsign revoke certinfo sign ocspdump ocspserve
Top-level flags:
	-allow_verification_with_non_compliant_keys
		Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
	-loglevel int
		Log level (0 = DEBUG, 5 = FATAL) (default 1)

2.1.2 Copy the certificate generation scripts
# Note: do not copy any Chinese comments into the JSON files
[root@k8s-master ssl]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

# 87600h is a 10-year validity period. Create ca-csr.json:
[root@k8s-master ssl]# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF

2.1.3 Generate the CA certificate
[root@k8s-master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2024/06/17 08:44:06 [INFO] generating a new CA key and certificate from CSR
2024/06/17 08:44:06 [INFO] generate received request
2024/06/17 08:44:06 [INFO] received CSR
2024/06/17 08:44:06 [INFO] generating key: rsa-2048
2024/06/17 08:44:06 [INFO] encoded CSR
2024/06/17 08:44:06 [INFO] signed certificate with serial number 31773994617471314293338378600965746806312495772

# The following three files are generated
[root@k8s-master ssl]# ls ca.csr ca-key.pem ca.pem
ca.csr  ca-key.pem  ca.pem

2.2 Generate the server certificate on the master
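The CSR JSON files used in this section all share the same shape and differ only in CN, hosts, and O, so generating them from one template avoids quoting mistakes. A sketch; the make_csr helper is illustrative (not part of cfssl), and python3 stands in for cfssl here as a JSON syntax check:

```shell
# Emit a CSR request JSON for a given CN and organization; hosts is left
# empty, as for the admin and kube-proxy certificates below.
make_csr() {
    cn="$1"; org="$2"
    cat <<EOF
{
  "CN": "$cn",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "$org", "OU": "System" }
  ]
}
EOF
}

OUT="$(mktemp)"
make_csr admin system:masters > "$OUT"
# Parse the CN back out to confirm the JSON is well formed.
CN=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["CN"])' "$OUT")
echo "generated CSR for CN=$CN"
rm -f "$OUT"
```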
Create the server-csr.json file and generate the server certificate as follows. The IP addresses in the file are those of the hosts that will use the certificate; fill them in according to the actual lab environment. 10.10.10.1 is the address of the built-in Kubernetes Service.

[root@k8s-master ssl]# cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.93.101",
    "192.168.93.102",
    "192.168.93.103",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2024/06/17 08:51:05 [INFO] generate received request
2024/06/17 08:51:05 [INFO] received CSR
2024/06/17 08:51:05 [INFO] generating key: rsa-2048
2024/06/17 08:51:05 [INFO] encoded CSR
2024/06/17 08:51:05 [INFO] signed certificate with serial number 361919584713194846624395018455738888079285309498
2024/06/17 08:51:05 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).

# The following two files are generated
[root@k8s-master ssl]# ls server.pem server-key.pem
server-key.pem  server.pem

2.3 Generate the admin certificate on the master
Create the admin-csr.json file and generate the admin certificate, which administrators use to access the cluster:

[root@k8s-master ssl]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2024/06/17 08:56:36 [INFO] generate received request
2024/06/17 08:56:36 [INFO] received CSR
2024/06/17 08:56:36 [INFO] generating key: rsa-2048
2024/06/17 08:56:37 [INFO] encoded CSR
2024/06/17 08:56:37 [INFO] signed certificate with serial number 419960426771620973555812946181892852252644702353
2024/06/17 08:56:37 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).

2.4 Generate the kube-proxy certificate on the master

Create the kube-proxy-csr.json file and generate the certificate:

[root@k8s-master ssl]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2024/06/17 09:00:24 [INFO] generate received request
2024/06/17 09:00:24 [INFO] received CSR
2024/06/17 09:00:24 [INFO] generating key: rsa-2048
2024/06/17 09:00:24 [INFO] encoded CSR
2024/06/17 09:00:24 [INFO] signed certificate with serial number 697976605336178060740045394552232520913457109224
2024/06/17 09:00:24 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).

2.5 View all certificates

[root@k8s-master ssl]# ll
total 68
-rw-r--r-- 1 root root 1009 Jun 17 08:56 admin.csr
-rw-r--r-- 1 root root  229 Jun 17 08:53 admin-csr.json
-rw------- 1 root root 1675 Jun 17 08:56 admin-key.pem
-rw-r--r-- 1 root root 1399 Jun 17 08:56 admin.pem
-rw-r--r-- 1 root root  297 Jun 17 08:40 ca-config.json
-rw-r--r-- 1 root root 1001 Jun 17 08:44 ca.csr
-rw-r--r-- 1 root root  207 Jun 17 08:39 ca-csr.json
-rw------- 1 root root 1679 Jun 17 08:44 ca-key.pem
-rw-r--r-- 1 root root 1354 Jun 17 08:44 ca.pem
-rw-r--r-- 1 root root 1009 Jun 17 09:00 kube-proxy.csr
-rw-r--r-- 1 root root  230 Jun 17 08:58 kube-proxy-csr.json
-rw------- 1 root root 1675 Jun 17 09:00 kube-proxy-key.pem
-rw-r--r-- 1 root root 1399 Jun 17 09:00 kube-proxy.pem
-rw-r--r-- 1 root root 1261 Jun 17 08:51 server.csr
-rw-r--r-- 1 root root  490 Jun 17 08:49 server-csr.json
-rw------- 1 root root 1679 Jun 17 08:51 server-key.pem
-rw-r--r-- 1 root root 1627 Jun 17 08:51 server.pem

# Count the certificate files (the count includes the "total" line of ls -l)
[root@k8s-master ssl]# ls -l | wc -l
18

3. Deploy the Etcd cluster on the master
3.1 Prepare the etcd base environment

# Create the working directories
[root@k8s-master ssl]# mkdir /opt/kubernetes
[root@k8s-master ssl]# mkdir /opt/kubernetes/{bin,cfg,ssl}
[root@k8s-master ssl]# ls /opt/kubernetes/
bin  cfg  ssl

# Upload the etcd-v3.4.3-linux-amd64.tar.gz package, extract it, and copy the binaries into place
[root@k8s-master ~]# tar -zxvf etcd-v3.4.3-linux-amd64.tar.gz
[root@k8s-master ~]# cd etcd-v3.4.3-linux-amd64/
[root@k8s-master etcd-v3.4.3-linux-amd64]# mv etcd /opt/kubernetes/bin/
[root@k8s-master etcd-v3.4.3-linux-amd64]# mv etcdctl /opt/kubernetes/bin/

3.2 Deploy the Etcd node on the master host
# Create the Etcd configuration file
[root@k8s-master etcd-v3.4.3-linux-amd64]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.93.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.93.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.93.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.93.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.93.101:2380,etcd02=https://192.168.93.102:2380,etcd03=https://192.168.93.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"

# The four URL options above use the master's IP; option reference:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_CLUSTER: the list of nodes that form the cluster
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: "new" for a new cluster, "existing" to join an existing one
etcd uses two default ports: 2379 for client communication and 2380 for peer communication within the cluster

# Create the systemd service file
[root@k8s-master etcd-v3.4.3-linux-amd64]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
  --cert-file=/opt/kubernetes/ssl/server.pem \
  --key-file=/opt/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/opt/kubernetes/ssl/server.pem \
  --peer-key-file=/opt/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

3.3 Copy the certificates Etcd depends on

[root@k8s-master etcd-v3.4.3-linux-amd64]# cd /root/software/ssl/
[root@k8s-master ssl]# cp server*.pem ca*.pem /opt/kubernetes/ssl/

3.4 Start the Etcd master node
Start the Etcd node on the master. If startup appears to hang, press Ctrl+C; the process has actually started and is merely timing out while trying to connect to the other two nodes, which are not running yet.
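Rather than watching a hung `systemctl start etcd`, a health check can be polled in a loop. A sketch with a stub check so it runs anywhere; on a real master, check() would wrap the etcdctl endpoint health command shown in section 4.4:

```shell
# Poll a health check until it passes or a retry budget is exhausted.
# check() is a stand-in here; the FLAG file simulates the cluster coming up.
FLAG="$(mktemp -u)"
check() { [ -e "$FLAG" ]; }
touch "$FLAG"          # simulate the cluster reaching quorum

TRIES=0
until check || [ "$TRIES" -ge 10 ]; do
    TRIES=$((TRIES + 1))
    sleep 1
done
if check; then STATUS="healthy"; else STATUS="not up"; fi
echo "etcd is $STATUS"
rm -f "$FLAG"
```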
[root@k8s-master ssl]# systemctl daemon-reload
[root@k8s-master ssl]# systemctl start etcd

# Check that Etcd started
[root@k8s-master ssl]# ps -ef | grep etcd
root      10294      1  1 09:22 ?        00:00:00 /opt/kubernetes/bin/etcd --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root      10314   8206  0 09:23 pts/1    00:00:00 grep --color=auto etcd

4. Deploy Etcd nodes on node1 and node2
4.1 Copy the Etcd configuration file to the nodes

Copy the Etcd configuration to the node hosts, then edit the IP addresses to match each host.

# node1
[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.93.102:/opt/kubernetes/
The authenticity of host '192.168.93.102 (192.168.93.102)' can't be established.
ECDSA key fingerprint is SHA256:ulREvG0hrcgiCcK7Tcbvp0jxe7GDM8ZthK7bU3fMM.
ECDSA key fingerprint is MD5:4b:84:94:c0:62:22:76:ed:26:24:8e:46:c9:1e:03:85.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.93.102' (ECDSA) to the list of known hosts.
root@192.168.93.102's password:
sending incremental file list
created directory /opt/kubernetes
bin/
bin/etcd
bin/etcdctl
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
ssl/server-key.pem
ssl/server.pem

sent 14,575,642 bytes  received 199 bytes  2,650,152.91 bytes/sec
total size is 41,261,661  speedup is 2.83

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.93.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.93.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.93.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.93.102:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.93.101:2380,etcd02=https://192.168.93.102:2380,etcd03=https://192.168.93.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"

# The four URL options above use node1's IP (192.168.93.102).

# node2
[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.93.103:/opt/kubernetes/
The authenticity of host '192.168.93.103 (192.168.93.103)' can't be established.
ECDSA key fingerprint is SHA256:MX4r8MbdCPXnCrc8F/0Xlp5eL3B3zSGVdwumifPLV4.
ECDSA key fingerprint is MD5:c5:20:5c:c7:de:ab:51:79:a7:0c:e6:d9:36:60:6c:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.93.103' (ECDSA) to the list of known hosts.
root@192.168.93.103's password:
sending incremental file list
created directory /opt/kubernetes
bin/
bin/etcd
bin/etcdctl
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
ssl/server-key.pem
ssl/server.pem

sent 14,575,642 bytes  received 199 bytes  2,242,437.08 bytes/sec
total size is 41,261,661  speedup is 2.83
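Hand-editing the copied file on each node invites typos; the per-node fields can instead be rewritten with sed. A sketch demonstrated on a temporary copy (on a real node, CFG would be /opt/kubernetes/cfg/etcd, with NAME and IP set to that node's values, e.g. etcd03 / 192.168.93.103 for k8s-node2). The ETCD_INITIAL_CLUSTER line is deliberately excluded, since it must keep all three addresses:

```shell
NAME="etcd03"
IP="192.168.93.103"
CFG="$(mktemp)"
# Sample of the lines that differ per node, as rsynced from the master.
cat > "$CFG" <<'EOF'
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://192.168.93.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.93.101:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.93.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.93.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.93.101:2380,etcd02=https://192.168.93.102:2380,etcd03=https://192.168.93.103:2380"
EOF

# Rewrite ETCD_NAME, then swap the master IP for this node's IP on every
# line except ETCD_INITIAL_CLUSTER.
sed -i \
    -e "s/^ETCD_NAME=.*/ETCD_NAME=\"$NAME\"/" \
    -e "/^ETCD_INITIAL_CLUSTER=/!s|https://192.168.93.101|https://$IP|g" \
    "$CFG"

NEW_NAME=$(sed -n 's/^ETCD_NAME="\(.*\)"/\1/p' "$CFG")
echo "node config now uses $NEW_NAME at $IP"
rm -f "$CFG"
```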
[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.93.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.93.103:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.93.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.93.103:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.93.101:2380,etcd02=https://192.168.93.102:2380,etcd03=https://192.168.93.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ENABLE_V2="true"

# The four URL options above use node2's IP (192.168.93.103).

4.2 Copy the startup script
[root@k8s-master ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.93.102:/usr/lib/systemd/system/etcd.service
[root@k8s-master ~]# scp /usr/lib/systemd/system/etcd.service root@192.168.93.103:/usr/lib/systemd/system/etcd.service

4.3 Start Etcd on node1 and node2

[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl enable etcd

[root@k8s-node2 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl enable etcd

4.4 Check the Etcd cluster status from the master
# Add the Etcd binaries to the global PATH (run on every node)
[root@k8s-master ~]# echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
[root@k8s-master ~]# source /etc/profile

# Check the Etcd cluster health from the master
[root@k8s-master ssl]# etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379" endpoint health
https://192.168.93.101:2379 is healthy: successfully committed proposal: took = 6.553155ms
https://192.168.93.103:2379 is healthy: successfully committed proposal: took = 7.28756ms
https://192.168.93.102:2379 is healthy: successfully committed proposal: took = 8.022626ms

# Troubleshooting
less /var/log/messages
journalctl -u etcd

5. Deploy the Flannel network
Flannel is an overlay network: it encapsulates the original packet inside another network packet for routing and forwarding. It currently supports data-forwarding backends such as UDP, VXLAN, AWS VPC, and GCE routing. Other mainstream multi-host container networking options include tunnel solutions (Weave, Open vSwitch) and routing solutions (Calico).
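The same three-endpoint string appears in the etcdctl health check of the previous section and in the Flanneld options below; building it once from the host list keeps the copies consistent. A minimal sketch:

```shell
# Assemble https://<ip>:2379 for each cluster member into one
# comma-separated string, as expected by --endpoints / --etcd-endpoints.
HOSTS="192.168.93.101 192.168.93.102 192.168.93.103"
ENDPOINTS=""
for h in $HOSTS; do
    ENDPOINTS="${ENDPOINTS:+$ENDPOINTS,}https://$h:2379"
done
echo "$ENDPOINTS"
```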
5.1 Write the subnet allocation to Etcd

On the master, write the subnet configuration into Etcd for Flanneld to use.

# Switch etcdctl to the v2 API; the command set differs between API versions, and this step uses v2
[root@k8s-master ~]# export ETCDCTL_API=2
# If the set command shows up in the help output, the switch worked
[root@k8s-master ~]# etcdctl --help | grep set

# Write the subnet configuration
[root@k8s-master ssl]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"} }'
# The command echoes the value back:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"} }

# Upload the flannel-v0.12.0-linux-amd64.tar.gz package, extract the Flannel binaries, and copy them to the nodes
[root@k8s-master ~]# tar -zxvf flannel-v0.12.0-linux-amd64.tar.gz
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh root@192.168.93.102:/opt/kubernetes/bin/
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh root@192.168.93.103:/opt/kubernetes/bin/

5.2 Configure Flannel
Edit the flanneld configuration file on k8s-node1 and k8s-node2; k8s-node1 is shown below.

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

5.3 Configure the Flanneld startup script
Create the flanneld.service unit on k8s-node1 and k8s-node2 to manage Flanneld; k8s-node1 is shown below.

[root@k8s-node1 ~]# cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.4 Configure Docker to use the Flannel subnet
On k8s-node1 and k8s-node2, modify the Docker unit file so that Docker starts on the Flannel-assigned subnet; k8s-node1 is shown below.

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
# Add under [Service]: makes the addresses Docker assigns fall in the same subnet as the flannel bridge
EnvironmentFile=/run/flannel/subnet.env
# Modify the existing ExecStart line to reference the $DOCKER_NETWORK_OPTIONS variable, which carries the Flannel bridge options
ExecStart=/usr/bin/dockerd -D $DOCKER_NETWORK_OPTIONS

5.5 Start Flannel
Start the Flanneld service on k8s-node1:

[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker

# Check that the Flannel and Docker interfaces are in the same subnet
[root@k8s-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1450
        inet 172.17.76.1  netmask 255.255.255.0  broadcast 172.17.76.255
        ether 02:42:f2:eb:89:58  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.76.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::d82d:f8ff:fe69:3564  prefixlen 64  scopeid 0x20<link>
        ether da:2d:f8:69:35:64  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0

## some output omitted
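The DOCKER_NETWORK_OPTIONS variable consumed by the Docker unit above is produced by mk-docker-opts.sh, which converts flanneld's /run/flannel/subnet.env into daemon flags. A simplified sketch of that conversion (sample values mirror the 172.17.76.0/24 lease shown above; the real script handles a few more options):

```shell
# Sample subnet.env as flanneld would write it to /run/flannel/subnet.env.
SUBNET_ENV="$(mktemp)"
cat > "$SUBNET_ENV" <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.76.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source the lease and derive the Docker bridge options from it.
. "$SUBNET_ENV"
DOCKER_NETWORK_OPTIONS="--bip=$FLANNEL_SUBNET --mtu=$FLANNEL_MTU"
echo "$DOCKER_NETWORK_OPTIONS"
rm -f "$SUBNET_ENV"
```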
Start the Flanneld service on k8s-node2:

[root@k8s-node2 ~]# systemctl start flanneld
[root@k8s-node2 ~]# systemctl enable flanneld
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart docker

# Check that the Flannel and Docker interfaces are in the same subnet
[root@k8s-node2 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1450
        inet 172.17.9.1  netmask 255.255.255.0  broadcast 172.17.9.255
        ether 02:42:93:83:fa:20  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.9.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c0cb:aff:fe3d:e6df  prefixlen 64  scopeid 0x20<link>
        ether c2:cb:0a:3d:e6:df  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0

# some output omitted

5.6 Verify that Flanneld is working
From k8s-node2, test connectivity to the docker0 bridge address on node1. A result like the following means Flanneld is working:

# The docker0 address on k8s-node1
[root@k8s-node2 ~]# ping 172.17.76.1
PING 172.17.76.1 (172.17.76.1) 56(84) bytes of data.
64 bytes from 172.17.76.1: icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from 172.17.76.1: icmp_seq=2 ttl=64 time=0.560 ms

6. Deploy the Kubernetes master components (v1.18.20)

The binaries for a Kubernetes binary installation are published by Google at https://github.com/kubernetes/kubernetes/releases; pick the desired version and download the binaries from its CHANGELOG page. Because of network restrictions, the installers used here are distributed together with this document. Perform the following steps on the k8s-master host to deploy the Kubernetes master components.
6.1 Add the kubectl command environment

Upload the kubernetes-server-linux-amd64.tar.gz package, extract it, and install kubectl:

[root@k8s-master ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/

6.2 Create the TLS Bootstrapping Token on the master
Create the TLS Bootstrapping Token as follows:

[root@k8s-master ~]# cd /opt/kubernetes/
[root@k8s-master kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-master kubernetes]# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
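The token is 16 random bytes rendered as hex, so it should come out as exactly 32 lowercase hex characters; checking that catches a mangled od/tr pipeline before the token is baked into token.csv. A sketch:

```shell
# Same pipeline as above: 16 random bytes, hex-dumped, spaces stripped
# (command substitution also strips the trailing newline).
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
LEN=${#BOOTSTRAP_TOKEN}
case "$BOOTSTRAP_TOKEN" in
    *[!0-9a-f]*) VALID="no" ;;
    *)           VALID="yes" ;;
esac
echo "token length: $LEN, hex: $VALID"
```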
[root@k8s-master kubernetes]# cat token.csv
59ffb2ebbfcc006480d13549fa243c42,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

6.3 Create the kubelet kubeconfig on the master

Create the kubelet kubeconfig as follows:

[root@k8s-master kubernetes]# export KUBE_APISERVER="https://192.168.93.101:6443"

6.3.1 Set the cluster parameters
[root@k8s-master kubernetes]# cd /root/software/ssl/
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Output:
Cluster "kubernetes" set.

6.3.2 Set the client authentication parameters
[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Output:
User "kubelet-bootstrap" set.

# Inspect the file to confirm the server and token fields are correct
[root@k8s-master ssl]# tail -1 bootstrap.kubeconfig
    token: 59ffb2ebbfcc006480d13549fa243c42
[root@k8s-master ssl]# echo $BOOTSTRAP_TOKEN
59ffb2ebbfcc006480d13549fa243c42

6.3.3 Set the context parameters
[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Output:
Context "default" created.

6.3.4 Set the default context

[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Output:
Switched to context "default".

6.4 Create the kube-proxy kubeconfig on the master
Create the kube-proxy kubeconfig as follows:

[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
# Output:
Cluster "kubernetes" set.

[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Output:
User "kube-proxy" set.

[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Output:
Context "default" created.

[root@k8s-master ssl]# kubectl config use-context default \
  --kubeconfig=kube-proxy.kubeconfig
# Output:
Switched to context "default".

6.5 Deploy kube-apiserver on the master
Role: kube-apiserver exposes the Kubernetes API; every resource request and scheduling operation goes through its interface. It is the key process providing the HTTP REST service, the single entry point for create, delete, update, and query operations on all Kubernetes resources, and the control entry point for the cluster.

[root@k8s-master ~]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# cp kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/
[root@k8s-master bin]# cp /opt/kubernetes/token.csv /opt/kubernetes/cfg/
[root@k8s-master bin]# cd /opt/kubernetes/bin/

# Upload the master.zip archive
[root@k8s-master bin]# unzip master.zip
[root@k8s-master bin]# chmod +x *.sh
[root@k8s-master bin]# ./apiserver.sh 192.168.93.101 https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

# Check the service status
[root@k8s-master bin]# systemctl status kube-apiserver.service

6.6 Deploy kube-controller-manager on the master
Role: kube-controller-manager runs the controllers, the background processes that handle routine tasks; it is the automation control center for all resource objects in Kubernetes. Logically each controller is a separate process, but to reduce complexity they are compiled into a single binary and run in one process. The main controllers include the Node Controller, Replication Controller, Endpoints Controller, and the Service Account and Token Controllers.

[root@k8s-master bin]# sh controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

# Check the service status
[root@k8s-master bin]# systemctl status kube-controller-manager

6.7 Deploy kube-scheduler on the master
Role: kube-scheduler is the process responsible for resource scheduling; it watches for newly created Pods that have no Node assigned and selects a Node for each of them.

[root@k8s-master bin]# sh scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

# Check the service status
[root@k8s-master bin]# systemctl status kube-scheduler

6.8 Check that the master components are running
Check that the components are running correctly:

[root@k8s-master bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

7. Deploy the Kubernetes node components
With the master components deployed, the node components can be deployed next, step by step.

7.1 Prepare the environment (on k8s-master)

Prepare the deployment environment for the node components:

# Run on the k8s-master host
[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# scp *kubeconfig 192.168.93.102:/opt/kubernetes/cfg/
[root@k8s-master ssl]# scp *kubeconfig 192.168.93.103:/opt/kubernetes/cfg/
[root@k8s-master ssl]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.93.102:/opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.93.103:/opt/kubernetes/bin/

# Bind the kubelet-bootstrap user to the system cluster role
[root@k8s-master bin]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

# Inspect the kubelet-bootstrap role binding
[root@k8s-master bin]# kubectl describe clusterrolebinding kubelet-bootstrap
Name:         kubelet-bootstrap
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node-bootstrapper
Subjects:
  Kind  Name               Namespace
  ----  ----               ---------
  User  kubelet-bootstrap

7.2 Deploy kubelet on node1 and node2
Role: the kubelet handles Pod and container lifecycle tasks such as creation and start/stop, and works closely with the master node to provide the basic cluster-management functions.

# Run on both k8s-node1 and k8s-node2; node1 is shown below
[root@k8s-node1 ~]# cd /opt/kubernetes/bin/
[root@k8s-node1 bin]# unzip node.zip
[root@k8s-node1 bin]# chmod +x *.sh

# 192.168.93.100 is an arbitrary address used as the cluster DNS address; it only needs to be in the same subnet and unused by any host. node2 must be given the same 192.168.93.100 address.
[root@k8s-node1 bin]# sh kubelet.sh 192.168.93.102 192.168.93.100
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

7.3 Deploy kube-proxy on node1 and node2
Role: kube-proxy implements communication and load balancing for Kubernetes Services.

[root@k8s-node1 bin]# sh proxy.sh 192.168.93.102
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node2 bin]# sh proxy.sh 192.168.93.103
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

7.4 Check that the node1 and node2 components are running
# k8s-node1
[root@k8s-node1 ~]# ps -ef | grep kube
root      10323      1  1 09:37 ?        00:01:12 /opt/kubernetes/bin/etcd --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root      10614      1  0 10:09 ?        00:00:01 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem
root      15327      1  0 11:09 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.93.102 --hostname-override=192.168.93.102 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=192.168.93.100 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      15898      1  0 11:15 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.93.102 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      16072   8201  0 11:17 pts/1    00:00:00 grep --color=auto kube

# k8s-node2
[root@k8s-node2 ~]# ps -ef | grep kube
root      19154      1  1 09:37 ?        00:01:13 /opt/kubernetes/bin/etcd --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root      19309      1  0 10:12 ?        00:00:01 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.93.101:2379,https://192.168.93.102:2379,https://192.168.93.103:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem
root      23962      1  0 11:11 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.93.103 --hostname-override=192.168.93.103 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=192.168.93.100 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      24351      1  0 11:15 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.93.103 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      24562   8203  0 11:17 pts/1    00:00:00 grep --color=auto kube

8. Approve the automatically issued certificates
After the node components are deployed, the master receives certificate requests from the nodes; approve them to let the nodes join the cluster.

[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-9VdDpTGcQCRA-bBIpwUCSDvEloIDXSGCDm_WWS0uLqc   7m51s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-yBYxiM6KRKlRkA1uYb8gEfIBL_uLsULMHeg4pIzznoo   10m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the nodes (substitute your own CSR names)
[root@k8s-master ~]# kubectl certificate approve node-csr-9VdDpTGcQCRA-bBIpwUCSDvEloIDXSGCDm_WWS0uLqc
certificatesigningrequest.certificates.k8s.io/node-csr-9VdDpTGcQCRA-bBIpwUCSDvEloIDXSGCDm_WWS0uLqc approved
[root@k8s-master ~]# kubectl certificate approve node-csr-yBYxiM6KRKlRkA1uYb8gEfIBL_uLsULMHeg4pIzznoo
certificatesigningrequest.certificates.k8s.io/node-csr-yBYxiM6KRKlRkA1uYb8gEfIBL_uLsULMHeg4pIzznoo approved

# Check that the nodes joined successfully
[root@k8s-master ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.93.102   Ready    <none>   11s   v1.18.20
192.168.93.103   Ready    <none>   25s   v1.18.20

The K8S cluster deployment is now complete.
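As a final check, a short script can confirm that every node reports Ready. It is sketched here against the captured listing above so it runs anywhere; on a live cluster, NODES would instead be filled from `kubectl get nodes --no-headers`:

```shell
# Count nodes whose STATUS column (field 2) is not "Ready".
NODES="192.168.93.102   Ready    <none>   11s   v1.18.20
192.168.93.103   Ready    <none>   25s   v1.18.20"

NOT_READY=$(printf '%s\n' "$NODES" | awk '$2 != "Ready"' | wc -l)
TOTAL=$(printf '%s\n' "$NODES" | wc -l)
echo "ready nodes: $((TOTAL - NOT_READY))/$TOTAL"
```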