Operation flow
1. Set up the master node: coredns comes up after kubeadm init (Calico has not been installed on the master at this point).
2. The worker node joins the cluster using the join command generated on the master.
3. Install Calico on the master (kubectl apply); the pods are created, but the NIC setting in the YAML has not been adjusted yet.
4. The coredns and calico pods run successfully, but calico-node-cl8f2 fails — see the linked article for the fix.
Link to the previous article: master node initialization
Prerequisite: check that the Docker and K8S versions to be installed are compatible
In an earlier article Calico kept failing even though the steps were the same, so a version mismatch was suspected; with the versions below everything came up successfully.

Kubernetes version    Docker version
1.20                  19.03
1.21                  20.10
1.22                  20.10
1.23                  20.10
1.24                  20.10

0.1: Install a specific Docker version
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# If this fails with "sudo: yum-config-manager: command not found", install the utils first:
sudo yum install yum-utils

# List the available docker-ce versions:
yum list docker-ce --showduplicates | sort -r

[root@10 ~]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Installed Packages
Available Packages
 * updates: mirrors.ustc.edu.cn
Loading mirror speeds from cached hostfile
 * extras: mirrors.ustc.edu.cn
docker-ce.x86_64            3:26.0.0-1.el7            docker-ce-stable
...
docker-ce.x86_64            3:20.10.8-3.el7           docker-ce-stable
docker-ce.x86_64            3:20.10.5-3.el7           docker-ce-stable

# Install the pinned version:
yum install docker-ce-20.10.8-3.el7 -y
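A quick sanity check after pinning the Docker version (an optional step, not part of the original instructions):

systemctl enable --now docker
docker --version    # should print Docker version 20.10.8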
[ 1 — 8 ] [ These prerequisite steps are the same on the master and worker nodes ]

1: Master node operations — check the hostname, set the hostname, configure /etc/hosts
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostnamectl set-hostname adam-init-master
[root@localhost ~]# hostname
adam-init-master
[root@localhost ~]#

[root@vbox-master-01-vbox-01 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# Add the following entries
192.168.56.104 adam-init-master
192.168.56.105 adam-init-slaver-one
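An optional check that the /etc/hosts entries resolve on each node (not shown in the original steps):

ping -c 1 adam-init-master
ping -c 1 adam-init-slaver-one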
2: Install bash auto-completion (not installed by default in the VM)

[root@vbox-master-01-vbox-01 ~]# yum -y install bash-completion
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Determining fastest mirrors
 * base: ftp.sjtu.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.aliyun.com

3: Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

4: Disable swap
free -h
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
free -h

5: Disable SELinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
[root@nodemaster /]# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

6: Allow iptables to inspect bridged traffic

iptables -F
iptables -X
iptables -F -t nat
iptables -X -t nat
iptables -P FORWARD ACCEPT
7: Configure K8S-related system parameters

7.0: Docker registry mirror (image acceleration)
The exec-opts line below sets the Docker cgroup driver to systemd:
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hnkfbj7x.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
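Docker has to be restarted for the new daemon.json to take effect; a quick way to apply and verify the cgroup driver (an optional check, not part of the original steps):

systemctl daemon-reload
systemctl restart docker
docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd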
7.1: Use a domestic (Aliyun) mirror for the K8S yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# Whether to enable this repository
enabled=1
# Whether to verify GPG signatures of packages
gpgcheck=0
# Whether to verify GPG signatures of repository metadata
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
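To confirm the new repository is usable before installing anything from it (an optional sanity check, not in the original):

yum makecache
yum list kubelet --showduplicates | grep 1.23    # the 1.23.0 packages should be listed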
7.2: Configure sysctl parameters (bridging and IP forwarding); they persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

7.2.1: Apply the sysctl kernel parameters without rebooting
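If the two bridge parameters are reported as missing when they are applied, the br_netfilter kernel module is probably not loaded yet; loading it first is an extra step assumed here, not something stated in the original article:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load the module automatically on boot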
sysctl --system

8: Install the K8S core components — kubelet, kubeadm, kubectl
8.1: Installation command
[root@master local]# sudo yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0 --disableexcludes=kubernetes --nogpgcheck

Installed:
  kubeadm.x86_64 0:1.23.0-0    kubectl.x86_64 0:1.23.0-0    kubelet.x86_64 0:1.23.0-0

Installed as dependencies:
  conntrack-tools.x86_64 0:1.4.4-7.el7            cri-tools.x86_64 0:1.26.0-0
  kubernetes-cni.x86_64 0:1.2.0-0                 libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7     libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7

Complete!
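The join log in section 9.2 warns that the docker and kubelet services are not enabled; enabling them right after installation avoids those warnings (kubelet will keep restarting until kubeadm init/join runs, which is expected):

systemctl enable --now docker
systemctl enable kubelet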
9: kubeadm init — generating the node
9.1: The current machine is a worker node
9.1.1: Difference: a worker node only runs the join command generated on the master; it does not run kubeadm init.
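If the join command from the master has been lost or its token has expired (tokens are valid for 24 hours by default), a fresh one can be printed on the master:

# Run on the master node:
kubeadm token create --print-join-command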
9.2: A successful join produces log output like the following
kubeadm join 192.168.56.104:6443 --token zxnok7.i6i4b4id4y5q1nsa --discovery-token-ca-cert-hash sha256:7760cfca134b2df5ef7757e7a6756a13e66415665dd48ae94a20d98b812c277d

[root@10 ~]# kubeadm join 192.168.56.104:6443 --token zxnok7.i6i4b4id4y5q1nsa --discovery-token-ca-cert-hash sha256:7760cfca134b2df5ef7757e7a6756a13e66415665dd48ae94a20d98b812c277d
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
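Right after joining, it is worth checking the node state from the master; until the CNI plugin (Calico here) is fully in place, the new worker usually reports NotReady, which is exactly the issue described in section 10:

# Run on the master node:
kubectl get nodes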
10: Follow-up issue: Node NotReady
10.1: Command to inspect the error: journalctl -f -u kubelet
10.1.1: Checking the error with journalctl -f -u kubelet — the log below shows that no CNI config can be found under /etc/cni/net.d
[root@10 ~]# journalctl -f -u kubelet
-- Logs begin at Wed 2024-04-10 00:24:03 CST. --
Apr 10 02:33:33 adam-init-slaver-one kubelet[9282]: E0410 02:33:33.734603    9282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Apr 10 02:33:38 adam-init-slaver-one kubelet[9282]: I0410 02:33:38.160774    9282 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Apr 10 02:33:38 adam-init-slaver-one kubelet[9282]: E0410 02:33:38.742263    9282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Apr 10 02:33:40 adam-init-slaver-one kubelet[9282]: E0410 02:33:40.168399    9282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=install-cni pod=calico-node-bdbjk_kube-system(21340f35-c5e1-4e42-a006-3ad6ee4c8a09)\"" pod="kube-system/calico-node-bdbjk" podUID=21340f35-c5e1-4e42-a006-3ad6ee4c8a09
Apr 10 02:33:43 adam-init-slaver-one kubelet[9282]: I0410 02:33:43.161188    9282 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Apr 10 02:33:43 adam-init-slaver-one kubelet[9282]: E0410 02:33:43.750005    9282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

10.2: Analysis — the order in which Calico was installed during cluster setup
The current procedure was: install the master first, then install Calico (the master therefore has the files under /etc/cni/net.d); only after Calico was installed were the K8S-related setup steps run on the worker machine, so the net.d configuration was never propagated to the worker.
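A quick way to confirm the diagnosis is to compare the CNI config directory on both machines; with a default Calico install the master typically contains files such as 10-calico.conflist and calico-kubeconfig (the usual Calico defaults, not taken from the logs above), while the broken worker's directory is empty:

ls /etc/cni/net.d/    # run on both the master and the worker and compare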
10.3: Fix: adjust the installation order, or copy the master's CNI config to the worker node
Run on the master node:
[root@10 ~]# scp /etc/cni/net.d/* root@sss-slaver-two:/etc/cni/net.d/
# Note: in the output below, slaver-one is the master node.
[root@sv-slaver-one ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-64cc74d646-4z8zf 1/1 Running 2 (9h ago) 9h
kube-system calico-node-cl8f2 1/1 Running 0 9h
kube-system calico-node-pfxnd 1/1 Running 0 9h
kube-system coredns-6d8c4cb4d-8q7tb 1/1 Running 2 (9h ago) 9h
kube-system coredns-6d8c4cb4d-m2gz2 1/1 Running 2 (9h ago) 9h
kube-system etcd-sv-slaver-one 1/1 Running 3 (9h ago) 9h
kube-system kube-apiserver-sv-slaver-one 1/1 Running 3 (9h ago) 9h
kube-system kube-controller-manager-sv-slaver-one 1/1 Running 4 (3h59m ago) 9h
kube-system kube-proxy-6kfnf 1/1 Running 2 (9h ago) 9h
kube-system kube-proxy-s9pzm 1/1 Running 3 (9h ago) 9h
kube-system kube-scheduler-sv-slaver-one 1/1 Running 4 (3h59m ago) 9h
[root@sv-slaver-one ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
sv-master Ready <none> 9h v1.23.0
sv-slaver-one Ready control-plane,master 9h v1.23.0
[root@sv-slaver-one ~]#

Steps 10.2–10.3: common follow-up operations required in either case
Run:
[root@10 docker]# systemctl restart docker
[root@10 docker]# systemctl daemon-reload
[root@10 docker]# systemctl restart kubelet

Then edit the NIC setting in the Calico YAML and re-apply Calico.
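The NIC edit itself is not shown above; a common way to do it — assuming a VirtualBox host-only adapter named enp0s8, which is an assumption rather than something stated in this article — is to pin Calico's IP autodetection method in calico.yaml and re-apply it:

# In calico.yaml, add this env var to the calico-node container (interface name is an assumption):
#   - name: IP_AUTODETECTION_METHOD
#     value: "interface=enp0s8"
kubectl apply -f calico.yaml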
If it still does not work, reboot the VM and repeat steps 10.2–10.3.