1. Kubernetes Cluster Planning
| Host IP | Hostname | Specs | Role |
| --- | --- | --- | --- |
| 192.168.100.3 | master1 | 2C/4G | management node |
| 192.168.100.4 | node1 | 2C/4G | worker node |
| 192.168.100.5 | node2 | 2C/4G | worker node |
2. Preparing the Cluster Environment
(1) Initialization script
#!/bin/bash
echo "---- Disabling firewalld and SELinux ----"
sleep 3
systemctl disable firewalld --now &> /dev/null
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

echo "---- Setting up the Aliyun repositories ----"
sleep 3
mv /etc/yum.repos.d/* /tmp
curl -o /etc/yum.repos.d/centos.repo https://mirrors.aliyun.com/repo/Centos-7.repo &> /dev/null
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &> /dev/null

echo "---- Setting the time zone and syncing time ----"
sleep 3
timedatectl set-timezone Asia/Shanghai
yum install -y chrony &> /dev/null
systemctl enable chronyd --now &> /dev/null
sed -i '/^server/s/^/# /' /etc/chrony.conf
sed -i '/^# server 3.centos.pool.ntp.org iburst/a\server ntp1.aliyun.com iburst\nserver ntp2.aliyun.com iburst\nserver ntp3.aliyun.com iburst' /etc/chrony.conf
systemctl restart chronyd &> /dev/null
chronyc sources &> /dev/null

echo "---- Raising the open-file limit ----"
sleep 3
if ! grep "* soft nofile 65535" /etc/security/limits.conf &> /dev/null; then
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535        # soft limit
* hard nofile 65535        # hard limit
EOF
fi

echo "---- Tuning kernel parameters ----"
sleep 3
cat >> /etc/sysctl.conf << EOF
net.ipv4.tcp_syncookies = 1              # guard against SYN flood attacks (0 disables)
net.ipv4.tcp_max_tw_buckets = 20480      # cap TIME_WAIT sockets so a flood of them cannot drag the server down
net.ipv4.tcp_max_syn_backlog = 20480     # SYN queue length (default 1024); a larger queue holds more pending connections
net.core.netdev_max_backlog = 262144     # max packets queued when an interface receives faster than the kernel can process
net.ipv4.tcp_fin_timeout = 20            # timeout for FIN-WAIT-2 sockets so they do not pile up
EOF
sysctl -p &> /dev/null                   # apply the settings immediately

echo "---- Reducing swap usage ----"
sleep 3
echo 0 > /proc/sys/vm/swappiness

echo "---- Installing performance-analysis and other tools ----"
sleep 3
yum install -y vim net-tools lsof wget lrzsz &> /dev/null

(2) Configure host name mappings
cat >> /etc/hosts << EOF
192.168.100.3 k8s-master
192.168.100.4 k8s-node1
192.168.100.5 k8s-node2
EOF
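Before moving on, it can help to spot-check that the prep took effect on every machine. A minimal sketch, assuming the script was run on all three nodes and root SSH access is available (this loop is illustrative, not part of the original setup):

for host in 192.168.100.3 192.168.100.4 192.168.100.5; do
  echo "== $host =="
  ssh root@$host 'getenforce; systemctl is-active firewalld; timedatectl | grep "Time zone"'
done

Expected: getenforce prints Permissive (or Disabled after a reboot), firewalld reports inactive, and the time zone is Asia/Shanghai.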
3. Installing Docker
(1) Install Docker
[root@master ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum list docker-ce --showduplicates | sort -r
 * updates: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
Installed Packages
docker-ce.x86_64            3:26.1.4-1.el7            docker-ce-stable
docker-ce.x86_64            3:26.1.4-1.el7            docker-ce-stable
docker-ce.x86_64            3:26.1.3-1.el7            docker-ce-stable
docker-ce.x86_64            3:26.1.2-1.el7            docker-ce-stable
docker-ce.x86_64            3:26.1.1-1.el7            docker-ce-stable
docker-ce.x86_64            3:26.1.0-1.el7            docker-ce-stable
...
[root@master ~]# yum -y install docker-ce
[root@master ~]# systemctl enable docker --now
[root@master ~]# docker version
Client: Docker Engine - Community
 Version:           26.1.4
 API version:       1.45
 Go version:        go1.21.11
 Git commit:        5650f9b
 Built:             Wed Jun  5 11:32:04 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.1.4
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       de5c9cf
  Built:            Wed Jun  5 11:31:02 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.33
  GitCommit:        d2d58213f83a351ca8f528a95fbd145f5654e957
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
(2) Configure registry mirrors and the cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://docker.aityp.com",
    "https://docker.m.daocloud.io",
    "https://reg-mirror.qiniu.com",
    "https://k8s.m.daocloud.io",
    "https://elastic.m.daocloud.io",
    "https://gcr.m.daocloud.io",
    "https://ghcr.m.daocloud.io",
    "https://k8s-gcr.m.daocloud.io",
    "https://mcr.m.daocloud.io",
    "https://nvcr.m.daocloud.io",
    "https://quay.m.daocloud.io",
    "https://jujucharms.m.daocloud.io",
    "https://rocks-canonical.m.daocloud.io",
    "https://d3p1s1ji.mirror.aliyuncs.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
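A quick optional check that the daemon picked up both settings; docker info should report the systemd cgroup driver and list the configured mirrors:

docker info --format '{{.CgroupDriver}}'           # expected: systemd
docker info --format '{{.RegistryConfig.Mirrors}}'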
(3) Install cri-dockerd
cri-dockerd is the shim through which Kubernetes talks to Docker.
Kubernetes 1.24 removed dockershim, and Docker Engine does not natively implement the CRI specification, so the two can no longer integrate directly. To bridge the gap, Mirantis and Docker jointly created the cri-dockerd project, which gives Docker Engine a CRI-compliant shim so that Kubernetes can drive Docker through the CRI. Therefore, to use Docker with Kubernetes 1.24 or later, install cri-dockerd; the cluster then reaches Docker through it. Note: it must be installed on every node.
Project page: https://github.com/Mirantis/cri-dockerd
[root@master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2-3.el7.x86_64.rpm
[root@master ~]# rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
(4) Edit the cri-docker.service file
[root@master ~]# vim /usr/lib/systemd/system/cri-docker.service
...
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d
...
Restart the service:
systemctl daemon-reload
systemctl restart cri-docker.service
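If the restart succeeded, the CRI socket that kubeadm will be pointed at later should now exist; a quick sanity check:

systemctl is-active cri-docker.service     # expected: active
ls -l /var/run/cri-dockerd.sock            # the socket kubeadm will use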
4. Configuring the Aliyun YUM Repository
(1) Add the Kubernetes repo
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(2) Install the Kubernetes tools
[root@master ~]# yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0
kubeadm: initializes the cluster, configures the required components, and generates the necessary certificates and tokens.
kubelet: communicates with the master node, creates, updates, and deletes Pods according to the master's scheduling decisions, and maintains container state on the node.
kubectl: the command-line tool for managing the cluster.
Enable kubelet at boot. Do not start it manually; the initialization process will start it:
[root@master ~]# systemctl enable kubelet
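Optionally, the control-plane images can be pre-pulled so that kubeadm init does not stall on downloads. A sketch using the same repository, version, and CRI socket as the init command below:

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --cri-socket unix:///var/run/cri-dockerd.sock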
(3) Initialize the cluster
Command-line method:
kubeadm init \
  --apiserver-advertise-address=192.168.100.3 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
YAML file method:
[root@master ~]# kubeadm config print init-defaults > kubeadm-config.yaml
[root@master ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.3              ### local IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: master                                 ### set to this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   ### change to the mirror repository
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
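Before initializing, kubeadm can sanity-check the edited file. An optional step (the validate subcommand is available in recent kubeadm releases, including 1.28):

kubeadm config validate --config kubeadm-config.yaml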
Initialize the cluster:
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml --upload-certs
Option notes:
--upload-certs    // uploads the generated certificates to etcd during initialization; without it, additional control-plane nodes cannot join the cluster
If initialization fails, reset with the following command:
[root@master1 ~]# kubeadm reset --cri-socket /var/run/cri-dockerd.sock
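Note that kubeadm reset does not remove CNI configuration or iptables rules (it prints a reminder to that effect). After a failed attempt, it can help to clear that leftover state before retrying:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X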
(4) Configure the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, as the root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
Check node status with kubectl:
[root@master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   20s   v1.28.0
Note: the node stays NotReady until a network plugin has been deployed.
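The cause is visible in the kube-system namespace: the coredns Pods remain Pending until a CNI plugin (Calico, below) is installed:

kubectl get pod -n kube-system    # coredns shows Pending at this point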
(5) Join the worker nodes to the cluster
On node1 and node2, run the kubeadm join command printed at the end of kubeadm init, appending the --cri-socket flag so the join goes through cri-dockerd. The token below comes from the config file above; substitute the CA cert hash from your own init output:

kubeadm join 192.168.100.3:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash from the kubeadm init output> \
  --cri-socket unix:///var/run/cri-dockerd.sock
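If the original token has expired (the TTL above is 24h), a fresh join command can be generated on the master at any time:

kubeadm token create --print-join-command
# append --cri-socket unix:///var/run/cri-dockerd.sock to the printed command before running it on the node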
5. Deploying the Calico Network Plugin
(1) Download the manifests
wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
(2) Edit custom-resources.yaml
[root@master ~]# vim custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16            ### change to the --pod-network-cidr value
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
(3) Deploy Calico
Note: deploying tigera-operator.yaml with kubectl apply fails with the following error:
[root@master ~]# kubectl apply -f tigera-operator.yaml
...
The CustomResourceDefinition "installations.operator.tigera.io" is invalid: metadata.annotations:
Too long: must have at most 262144 bytes
Use kubectl create instead:
[root@master ~]# kubectl create -f tigera-operator.yaml
[root@master ~]# kubectl create -f custom-resources.yaml
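The Calico Pods take a minute or two to come up; their progress can be watched with:

watch kubectl get pods -n calico-system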
Check node and Pod status:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 18m v1.28.0
node Ready <none> 17m v1.28.0

[root@master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-7b9c9fb95d-7w666 1/1 Running 0 12s
calico-apiserver calico-apiserver-7b9c9fb95d-hfjjg 1/1 Running 0 12s
calico-system calico-kube-controllers-685f7c9b88-rpmvl 1/1 Running 0 29s
calico-system calico-node-d52d7 1/1 Running 0 30s
calico-system calico-node-t6qpr 1/1 Running 0 30s
calico-system calico-typha-589b7cd4b4-hvq7q 1/1 Running 0 30s
calico-system csi-node-driver-crmm9 2/2 Running 0 29s
calico-system csi-node-driver-kjnlc 2/2 Running 0 30s
kube-system coredns-66f779496c-6vpq8 1/1 Running 0 17m
kube-system coredns-66f779496c-khqb4 1/1 Running 0 17m
kube-system etcd-master 1/1 Running 0 18m
kube-system kube-apiserver-master 1/1 Running 0 18m
kube-system kube-controller-manager-master 1/1 Running 0 18m
kube-system kube-proxy-9ll4p 1/1 Running 0 16m
kube-system kube-proxy-wpgnh 1/1 Running 0 18m
kube-system kube-scheduler-master 1/1 Running 0 18m
tigera-operator tigera-operator-8547bd6cc6-vmjvq 1/1 Running 0 39s

(4) Test the deployment
[root@master ~]# kubectl create deployment --image nginx:1.20.2 nginx
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6f974c44c8-xzvwg 1/1 Running 0 65s
[root@master ~]# kubectl describe pod nginx-6f974c44c8-xzvwg
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  64s   default-scheduler  Successfully assigned default/nginx-6f974c44c8-xzvwg to node
  Normal  Pulling    63s   kubelet            Pulling image "nginx:1.20.2"
  Normal  Pulled     3s    kubelet            Successfully pulled image "nginx:1.20.2" in 1m0.406s (1m0.406s including waiting)
  Normal  Created    2s    kubelet            Created container nginx
  Normal  Started    2s    kubelet            Started container nginx
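To finish the test end to end, the deployment can be exposed and reached from outside the cluster. A sketch; the NodePort assigned by kubectl will vary, so substitute the one shown by kubectl get svc:

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                      # note the assigned NodePort (3xxxx)
curl -I http://192.168.100.4:<NodePort>    # any node IP works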