Installation Environment
The three CentOS machines used here were built as PVE virtual machines; for the build process see: Centos篇-Centos Minimal安装.

Hardware configuration for this install:
CPU: 2C   Memory: 2G   Storage: 64G

Environment details:
OS: CentOS 7.9
Kernel: 6.2.0-1.el7.elrepo.x86_64
master: 192.168.1.12
node1: 192.168.1.13
node2: 192.168.1.14

SSH public-key login has already been set up on all three target machines; for how, see: Centos篇-Centos ssh公钥登录.
Environment Preparation

Disable the firewall (firewalld) and SELinux

Commands to disable the firewall:

systemctl stop firewalld
systemctl disable firewalld
iptables -F

firewalld and iptables are two successive generations of the same mechanism. In general you can disable both up front; later, when deploying k8s, re-enable iptables if you decide to use it, and if you use some other mode just keep both firewalld and iptables disabled.

Commands to disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

SELinux is a kernel-level security module that controls the permissions of application services via security contexts; it acts as an ACL between applications and the OS. Not every program is adapted to it, and for those that are not it does nothing even when enabled; worse, its rules can interfere with normal services (permission problems and the like), so it is usually disabled.
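As a quick sanity check (both are standard CentOS commands), you can confirm the SELinux state after the change:

getenforce    # prints Permissive right after setenforce 0, Disabled after a reboot
sestatus      # more detailed status report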
Disable the swap partition

Command to disable it temporarily:

swapoff -a

Command to disable it permanently:

sed -ri 's/.*swap.*/#&/' /etc/fstab

You can disable it temporarily first and then permanently. Swap makes docker misbehave and degrades performance; it was a bug, and whether it has been fixed by now I don't know, but disabling swap has become the standard practice anyway.
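To verify swap is actually off:

free -h         # the Swap line should show 0B
swapon --show   # should print nothing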
Modify the hosts file

Many hosts end up with identical hostnames (either the install default or the result of VM cloning), so we set distinct hostnames here to tell them apart. If, like me, you already assigned different hostnames at install time, you can skip this.

master:
hostnamectl set-hostname centos-k8s-master.local

node1:
hostnamectl set-hostname centos-k8s-node1.local

node2:
hostnamectl set-hostname centos-k8s-node2.local

vim /etc/hosts and add the following:

192.168.1.12 centos-k8s-master.local
192.168.1.13 centos-k8s-node1.local
192.168.1.14 centos-k8s-node2.local

With this in place, each host's IP can be resolved from its hostname.
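A quick way to confirm the entries resolve (getent reads the same sources as normal name resolution):

getent hosts centos-k8s-node1.local   # should print 192.168.1.13
ping -c 1 centos-k8s-node2.local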
Modify kernel parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
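Note that the two net.bridge sysctls only exist while the br_netfilter kernel module is loaded, so if sysctl --system complains about them, load the module and re-check:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should print ... = 1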
Load the ip_vs kernel modules

These must be loaded if the kube-proxy mode is ip_vs; this article uses iptables.
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

Make them load at boot:
cat > /etc/modules-load.d/ip_vs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
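To confirm the modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4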
Update the kernel (optional)

I updated my system's kernel here, but it is not required; feel free to skip this step.

Enable the ELRepo repository:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

Once the ELRepo repository is enabled, you can list the available kernel packages with:

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Install the kernel
Install a stable kernel version:

yum --enablerepo=elrepo-kernel install kernel-ml -y

Set the default kernel

Set GRUB's default kernel version:

grub2-set-default 0

Reboot:

init 6

Verify

Check whether the kernel was updated successfully with:

uname -r

Set the new kernel as the default
Open /etc/default/grub for editing and set GRUB_DEFAULT=0, meaning the first kernel on the GRUB boot screen will be used as the default:

GRUB_TIMEOUT=5
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

Then reboot again to finish.
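On CentOS 7, edits to /etc/default/grub normally only take effect after regenerating the GRUB config, so before the reboot you may also need:

grub2-mkconfig -o /boot/grub2/grub.cfg   # BIOS systems; UEFI systems use /boot/efi/EFI/centos/grub.cfg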
Install docker

Configure the Aliyun yum source:

yum install wget -y
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Install a specific docker version

List all available docker versions:

yum list docker-ce.x86_64 --showduplicates | sort

Pick a version to install; here I installed 23.0.1-1.el7:

yum -y install docker-ce-23.0.1-1.el7 docker-ce-cli-23.0.1-1.el7

Edit the docker configuration file
Edit /etc/docker/daemon.json:

mkdir /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://gqs7xcfd.mirror.aliyuncs.com","https://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Start the docker service:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
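To confirm docker came up with the systemd cgroup driver configured above (kubelet expects the two to match):

docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd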
Install kubeadm, kubelet and kubectl

Configure the Aliyun k8s yum source:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

List the available kubelet versions
Because kubeadm, kubelet and kubectl must be installed at the same version, checking kubelet alone is usually enough; just use the same version for the other two.

yum list kubelet --showduplicates

Install specific versions of kubeadm, kubelet and kubectl:

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6

Set kubelet to start on boot:

systemctl enable kubelet
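A quick check that all three tools landed at the same version:

kubeadm version -o short            # expect v1.23.6
kubectl version --client --short
rpm -q kubelet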
Deploy the k8s master node

Initialize the master node:

kubeadm init \
  --kubernetes-version 1.23.6 \
  --apiserver-advertise-address=192.168.1.12 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.245.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers

--kubernetes-version must match the version installed above; mine is 1.23.6.
--service-cidr specifies the service network; it must not overlap the node network.
--pod-network-cidr specifies the pod network; it must not overlap the node network or the service network.
--image-repository registry.aliyuncs.com/google_containers specifies the image source; the default registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror repository is specified here.
Wait for the images to be pulled

You can list the required images ahead of time with the command below (and pre-pull them with kubeadm config images pull):

kubeadm config images list --kubernetes-version 1.23.6

With the Aliyun image source configured, the pull is usually reasonably fast. When initialization finishes, two important things are printed:
1. A line like "initialized successfully!"
2. A "kubeadm join xxx.xxx.xxx.xxx:6443 --token …" command
Line 1 confirms the master node initialized successfully; line 2 is the command used on the worker nodes to join the cluster, so I won't paste it here.
Configure kubectl

Three more commands are also printed after successful initialization; they create the cluster kubeconfig file:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
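A quick sanity check that kubectl can reach the API server with this kubeconfig:

kubectl cluster-info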
Worker nodes join the cluster

On each worker node, run through all of the steps above as well (except for initializing the master node).

Join the workers to the cluster

Use the join command printed after the master node initialized successfully to join the worker nodes to the cluster. Mine was:
kubeadm join 192.168.1.12:6443 --token 37xfed.jqnw7nsll8jhdfl8 \
  --discovery-token-ca-cert-hash sha256:27b08986844ca2306353287507370ce7ba0caef7d4608860db4c85c762149bf6

Run this command on every worker node.
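Join tokens expire after 24 hours by default; if yours has expired, a fresh join command can be generated on the master with:

kubeadm token create --print-join-command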
View the cluster nodes from the master node

kubectl get nodes

It should look roughly like this. Note, though, that mine shows the cluster after everything was already running successfully; before the network plugin is installed, all nodes should be in the NotReady state.
Install the network plugin: flannel

Install flannel

Download the flannel yaml file from the official site:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Or just use mine:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.245.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.21.2
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.21.2
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
Note that in the yaml downloaded from the official site, Network inside net-conf.json must match the pod-network-cidr used to initialize the master node. The official default is 10.244.0.0/16; I changed it to 10.245.0.0/16.
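If you downloaded the official file, a one-line edit makes the same change (assuming the file name saved by the wget above):

sed -i 's#10.244.0.0/16#10.245.0.0/16#' kube-flannel.yml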
Apply the yaml file:

kubectl apply -f kube-flannel.yaml

Check the flannel deployment (the manifest above places it in the kube-flannel namespace):

kubectl -n kube-flannel get pods -o wide

Wait for the flannel pods to become Ready.
Check the status of the worker nodes

Once flannel is deployed successfully, all nodes should be Ready.
Switch the kube-proxy mode to iptables

If your CentOS kernel is newer than 4.x, you can skip this.

kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "iptables"/' | kubectl apply -f -
kubectl -n kube-system rollout restart daemonsets.apps kube-proxy
kubectl -n kube-flannel rollout restart daemonsets.apps kube-flannel-ds
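To confirm the mode change landed in the ConfigMap:

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'   # expect: mode: "iptables"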
Deploy metrics-server

metrics-server.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        # Skip TLS verification; fixes the error "cannot validate certificate for 192.168.65.3 because it doesn't contain any IP SANs"
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /etc/localtime
          name: host-time
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: host-time
        hostPath:
          path: /etc/localtime
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Apply the metrics-server yaml:
kubectl apply -f metrics-server.yaml
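Once the metrics-server pod is running, resource metrics should become queryable (it can take a minute or two after startup):

kubectl top nodes
kubectl top pods -A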