Deploying a Single-Node K8S Environment on CentOS 7.9
Installing the K8S environment from the CentOS extras repository is quick and convenient, but the version is old: the installed version is 1.5.2.
1. Preparation
Disable SELinux
[root@localhost ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
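Note that setting SELINUX=disabled in /etc/selinux/config only takes effect after a reboot. To turn enforcement off immediately for the running system as well, the standard commands are (a supplementary step, not part of the original procedure):
# switch SELinux to permissive mode for the current boot
setenforce 0
# confirm the current mode
getenforce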
Disable the firewall
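If firewalld is still active on the host, it can be stopped and disabled with the usual systemd commands (added here for completeness; the status check below should then show it as inactive):
# stop the firewall now and prevent it from starting at boot
systemctl stop firewalld
systemctl disable firewalld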
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
2. Install kubernetes and etcd
Install with yum:
yum install etcd kubernetes -y
If a conflict with existing docker components is reported, the existing docker packages need to be removed first:
...
Error: docker-ce-cli conflicts with 2:docker-1.13.1-210.git7d71120.el7.centos.x86_64
Error: docker-ce conflicts with 2:docker-1.13.1-210.git7d71120.el7.centos.x86_64
You could try using --skip-broken to work around the problem
...
# Remove the existing docker packages
yum remove docker*
Modify the kube-apiserver configuration file
[root@localhost ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" # changed from 127.0.0.1 to 0.0.0.0

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota" # admission-control policy modified

# Add your own!
KUBE_API_ARGS=""
Start and configure the services
Start the services:
systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
Configure the services to start at boot:
systemctl enable etcd
systemctl enable docker
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
View environment information
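Before checking the version, it can be useful to confirm that all of the services above are actually running. A minimal check loop over the same service names (not part of the original write-up):
# print the active state of each component
for svc in etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    echo -n "$svc: "; systemctl is-active $svc
done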
# Check the k8s version; the version deployed this way is relatively old
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
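The health of the control-plane components can also be checked with componentstatuses, which is available in this kubectl version (an optional extra check, not in the original steps):
# shows the status of scheduler, controller-manager and etcd
[root@localhost ~]# kubectl get componentstatuses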
# List the nodes
[root@localhost ~]# kubectl get nodes
NAME STATUS AGE
127.0.0.1   Ready     3h
3. Deploy a sample application
Create two yaml files, one for the nginx Deployment and one for the Service.
[root@localhost ~]# cat nginx-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    web: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
# Create the pods
[root@localhost ~]# kubectl create -f nginx-deploy.yaml
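A quick way to confirm the Deployment was accepted and its replicas are coming up (an optional check, not in the original steps):
# show the Deployment and its desired/current replica counts
kubectl get deployment nginx-deployment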
[root@localhost ~]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: nginx
# Create the service
[root@localhost ~]# kubectl create -f nginx-svc.yaml
# View the created k8s resources
[root@localhost ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-3856710913-4wcvd 1/1 Running 1 1h
nginx-deployment-3856710913-nmf32 1/1 Running 1 1h
nginx-deployment-3856710913-v0mcz 1/1 Running 1 1h
[root@localhost ~]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes   10.254.0.1       <none>    443/TCP        3h
nginx-demo   10.254.146.200   <nodes>   80:30080/TCP   1h
Access test: use a browser or the curl command to access nginx.
[root@localhost ~]# curl http://192.168.226.133:30080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The pod access test succeeded.
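The service can also be reached from the node through its cluster IP (the CLUSTER-IP shown by kubectl get svc above), which is a handy sanity check when NodePort access fails (an optional extra test, not part of the original steps):
# 10.254.146.200 is the CLUSTER-IP of nginx-demo shown above
curl http://10.254.146.200:80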
4. Problems encountered during deployment
4.1 The docker service fails to start: Error starting daemon: layer does not exist
The docker service will not start; checking the docker service shows the error: Error starting daemon: layer does not exist
Solution:
Use the following script to clean out /var/lib/docker. The script contents are as follows:
#!/bin/sh
set -e

dir="$1"

if [ -z "$dir" ]; then
	{
		echo 'This script is for destroying old /var/lib/docker directories more safely than'
		echo '  "rm -rf", which can cause data loss or other serious issues.'
		echo
		echo "usage: $0 directory"
		echo "   ie: $0 /var/lib/docker"
	} >&2
	exit 1
fi

if [ "$(id -u)" != 0 ]; then
	echo >&2 "error: $0 must be run as root"
	exit 1
fi

if [ ! -d "$dir" ]; then
	echo >&2 "error: $dir is not a directory"
	exit 1
fi

dir="$(readlink -f "$dir")"

echo
echo "Nuking $dir ..."
echo '  (if this is wrong, press Ctrl+C NOW!)'
echo

( set -x; sleep 10 )
echo

dir_in_dir() {
	inner="$1"
	outer="$2"
	[ "${inner#$outer}" != "$inner" ]
}

# let's start by unmounting any submounts in $dir
#   (like -v /home:... for example - DON'T DELETE MY HOME DIRECTORY BRU!)
for mount in $(awk '{ print $5 }' /proc/self/mountinfo); do
	mount="$(readlink -f "$mount" || true)"
	if dir_in_dir "$mount" "$dir"; then
		( set -x; umount -f "$mount" )
	fi
done

# now, let's go destroy individual btrfs subvolumes, if any exist
if command -v btrfs > /dev/null 2>&1; then
	root="$(df "$dir" | awk 'NR>1 { print $NF }')"
	root="${root#/}" # if root is "/", we want it to become ""
	for subvol in $(btrfs subvolume list -o "$root/" 2>/dev/null | awk -F' path ' '{ print $2 }' | sort -r); do
		subvolDir="$root/$subvol"
		if dir_in_dir "$subvolDir" "$dir"; then
			( set -x; btrfs subvolume delete "$subvolDir" )
		fi
	done
fi

# finally, DESTROY ALL THINGS
( set -x; rm -rf "$dir" )
Save the script as docker-recovery.sh and run the command sh docker-recovery.sh /var/lib/docker to repair the environment.
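After the old directory has been cleaned out, docker has to be started again so that kubelet can create containers; a fresh /var/lib/docker is recreated automatically (a follow-up step, using the same systemd unit names as above):
# restart docker and verify it comes up cleanly
systemctl start docker
systemctl status docker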
4.2 Failed to pull the pod-infrastructure image
When pods are created, the base container image is pulled from Red Hat's official image registry, and the pod reports errors like the following:
Error syncing pod, skipping: failed to StartContainer for POD with ErrImagePull: image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)
Error syncing pod, skipping: failed to StartContainer for POD with ImagePullBackOff: Back-off pulling image "registry.access.redhat.com/rhel7/pod-infrastructure:latest"
Solution:
Perform the following operations on every node (including the master node).
Checking /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt shows that it is a symbolic link, but the target under /etc/rhsm does not actually exist. Install the missing packages with yum:
[root@localhost ~]# ls -alh /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
lrwxrwxrwx. 1 root root 27 Jun 10 05:34 /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt -> /etc/rhsm/ca/redhat-uep.pem
[root@localhost ~]# yum install *rhsm* -y
Download and install the certificate:
[root@localhost ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
[root@localhost ~]# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
Pull the image:
[root@localhost ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:47db25d46e39f338142553f899cedf6b0ad9f04c6c387a94b6b0964b7d1b7678
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest
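Once the pod-infrastructure image is present locally, pods that were stuck in ErrImagePull or ImagePullBackOff normally recover on their own as kubelet retries; this can be confirmed with the usual listing (an optional follow-up check):
# pods should move to Running once the base image can be pulled
kubectl get pod -o wide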