当前位置: 首页 > news >正文

中国建设银行网站分期通网站开发对显卡的要求

中国建设银行网站分期通,网站开发对显卡的要求,新乡网站建设官网,淘宝网官网【Kubernetes】centos安装Kubernetes集群 1、环境准备 系统centos7 配置yum源参考文章 Centos系统换yum源 yum -y update 步骤1-3是所有主机都要配置#xff0c;主机名和hosts配置完后可以使用工具命令同步 1.1 主机 一主二从 主机名ipk8smaster192.168.59.148k8snode11…【Kubernetes】centos安装Kubernetes集群 1、环境准备 系统centos7 配置yum源参考文章 Centos系统换yum源 yum -y update 步骤1-3是所有主机都要配置主机名和hosts配置完后可以使用工具命令同步 1.1 主机 一主二从 主机名ipk8smaster192.168.59.148k8snode1192.168.59.149k8snode2192.168.59.150 分别设置主机名并添加hosts映射 hostnamectl set-hostname k8smastervim /etc/hosts192.168.59.148 k8smaster 192.168.59.149 k8snode1 192.168.59.150 k8snode2配置参考127.0.0.1 也要加上当前主机名 测试 1.2 关闭selinux和firewalld systemctl stop firewalld systemctl disable firewalld sed -i s/enforcing/disabled/ /etc/selinux/config setenforce 01.3 禁止swap分区 swapoff -a1.4 将桥接的IPv4流量传递到iptables的链 cat /etc/sysctl.d/k8s.conf EOF net.ipv4.ip_forward 1 net.ipv4.tcp_tw_recycle 0 net.bridge.bridge-nf-call-ip6tables 1 net.bridge.bridge-nf-call-iptables 1 EOFsysctl --system2、安装部署docker 安装推荐文章 Linux环境下docker安装 简单的docker安装 yum install ca-certificates curl -y yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y配置参考 vim /etc/docker/daemon.json { registry-mirrors: [https://hub-mirror.c.163.com,https://registry.aliyuncs.com,https://registry.docker-cn.com,https://docker.mirrors.ustc.edu.cn], data-root: /data/docker, exec-opts: [native.cgroupdriversystemd], log-driver: json-file, log-opts: { max-size: 300m,max-file: 3 }, live-restore: true }#查看是否启动 service docker status service docker start #启动 #设置开机自启 systemctl enable docker systemctl restart docker systemctl status docker#基本信息 docker infodocker-compose安装 docker-compose版本要自己去github看 containerd 配置文件参考 vim /etc/containerd/conf.tomldisabled_plugins [] imports [] oom_score 0 plugin_dir required_plugins [] root /var/lib/containerd state /run/containerd temp version 2[cgroup]path [debug]address format gid 0level uid 0[grpc]address /run/containerd/containerd.sockgid 0max_recv_message_size 16777216max_send_message_size 16777216tcp_address tcp_tls_ca tcp_tls_cert tcp_tls_key uid 0[metrics]address grpc_histogram false[plugins][plugins.io.containerd.gc.v1.scheduler]deletion_threshold 0mutation_threshold 100pause_threshold 0.02schedule_delay 0sstartup_delay 100ms[plugins.io.containerd.grpc.v1.cri]device_ownership_from_security_context falsedisable_apparmor falsedisable_cgroup falsedisable_hugetlb_controller truedisable_proc_mount falsedisable_tcp_service trueenable_selinux falseenable_tls_streaming falseenable_unprivileged_icmp falseenable_unprivileged_ports falseignore_image_defined_volumes falsemax_concurrent_downloads 3max_container_log_line_size 16384netns_mounts_under_state_dir falserestrict_oom_score_adj falsesandbox_image registry.aliyuncs.com/google_containers/pause:3.6selinux_category_range 1024stats_collect_period 10stream_idle_timeout 4h0m0sstream_server_address 127.0.0.1stream_server_port 0systemd_cgroup falsetolerate_missing_hugetlb_controller trueunset_seccomp_profile [plugins.io.containerd.grpc.v1.cri.cni]bin_dir /opt/cni/binconf_dir /etc/cni/net.dconf_template ip_pref max_conf_num 1[plugins.io.containerd.grpc.v1.cri.containerd]default_runtime_name runcdisable_snapshot_annotations truediscard_unpacked_layers falseignore_rdt_not_enabled_errors falseno_pivot falsesnapshotter overlayfs[plugins.io.containerd.grpc.v1.cri.containerd.default_runtime]base_runtime_spec cni_conf_dir cni_max_conf_num 0container_annotations []pod_annotations []privileged_without_host_devices falseruntime_engine runtime_path runtime_root runtime_type [plugins.io.containerd.grpc.v1.cri.containerd.default_runtime.options][plugins.io.containerd.grpc.v1.cri.containerd.runtimes][plugins.io.containerd.grpc.v1.cri.containerd.runtimes.runc]base_runtime_spec cni_conf_dir cni_max_conf_num 0container_annotations []pod_annotations []privileged_without_host_devices falseruntime_engine runtime_path runtime_root runtime_type io.containerd.runc.v2[plugins.io.containerd.grpc.v1.cri.containerd.runtimes.runc.options]BinaryName CriuImagePath CriuPath CriuWorkPath IoGid 0IoUid 0NoNewKeyring falseNoPivotRoot falseRoot ShimCgroup SystemdCgroup true[plugins.io.containerd.grpc.v1.cri.containerd.untrusted_workload_runtime]base_runtime_spec cni_conf_dir cni_max_conf_num 0container_annotations []pod_annotations []privileged_without_host_devices falseruntime_engine runtime_path runtime_root runtime_type [plugins.io.containerd.grpc.v1.cri.containerd.untrusted_workload_runtime.options][plugins.io.containerd.grpc.v1.cri.image_decryption]key_model node[plugins.io.containerd.grpc.v1.cri.registry]config_path [plugins.io.containerd.grpc.v1.cri.registry.auths][plugins.io.containerd.grpc.v1.cri.registry.configs][plugins.io.containerd.grpc.v1.cri.registry.configs.k8smaster:5000.tls]insecure_skip_verify true[plugins.io.containerd.grpc.v1.cri.registry.headers][plugins.io.containerd.grpc.v1.cri.registry.mirrors][plugins.io.containerd.grpc.v1.cri.registry.mirrors.k8smaster:5000]endpoint [http://k8smaster:5000][plugins.io.containerd.grpc.v1.cri.x509_key_pair_streaming]tls_cert_file tls_key_file [plugins.io.containerd.internal.v1.opt]path /opt/containerd[plugins.io.containerd.internal.v1.restart]interval 10s[plugins.io.containerd.internal.v1.tracing]sampling_ratio 1.0service_name containerd[plugins.io.containerd.metadata.v1.bolt]content_sharing_policy shared[plugins.io.containerd.monitor.v1.cgroups]no_prometheus false[plugins.io.containerd.runtime.v1.linux]no_shim falseruntime runcruntime_root shim containerd-shimshim_debug false[plugins.io.containerd.runtime.v2.task]platforms [linux/amd64]sched_core false[plugins.io.containerd.service.v1.diff-service]default [walking][plugins.io.containerd.service.v1.tasks-service]rdt_config_file [plugins.io.containerd.snapshotter.v1.aufs]root_path [plugins.io.containerd.snapshotter.v1.btrfs]root_path [plugins.io.containerd.snapshotter.v1.devmapper]async_remove falsebase_image_size discard_blocks falsefs_options fs_type pool_name root_path [plugins.io.containerd.snapshotter.v1.native]root_path [plugins.io.containerd.snapshotter.v1.overlayfs]root_path upperdir_label false[plugins.io.containerd.snapshotter.v1.zfs]root_path [plugins.io.containerd.tracing.processor.v1.otlp]endpoint insecure falseprotocol [proxy_plugins][stream_processors][stream_processors.io.containerd.ocicrypt.decoder.v1.tar]accepts [application/vnd.oci.image.layer.v1.tarencrypted]args [--decryption-keys-path, /etc/containerd/ocicrypt/keys]env [OCICRYPT_KEYPROVIDER_CONFIG/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf]path ctd-decoderreturns application/vnd.oci.image.layer.v1.tar[stream_processors.io.containerd.ocicrypt.decoder.v1.tar.gzip]accepts [application/vnd.oci.image.layer.v1.targzipencrypted]args [--decryption-keys-path, /etc/containerd/ocicrypt/keys]env [OCICRYPT_KEYPROVIDER_CONFIG/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf]path ctd-decoderreturns application/vnd.oci.image.layer.v1.targzip[timeouts]io.containerd.timeout.bolt.open 0sio.containerd.timeout.shim.cleanup 5sio.containerd.timeout.shim.load 5sio.containerd.timeout.shim.shutdown 3sio.containerd.timeout.task.state 2s[ttrpc]address gid 0uid 03、部署k8s基础命令 3.1 添加k8s阿里云的yum源 cat /etc/yum.repos.d/kubernetes.repo EOF[kubernetes] nameKubernetes baseurlhttps://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled1 gpgcheck0 repo_gpgcheck0 gpgkeyhttps://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF或者用vim vim /etc/yum.repos.d/kubernetes.repo [kubernetes] nameKubernetes baseurlhttps://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled1 gpgcheck0 repo_gpgcheck0 gpgkeyhttps://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg3.2 查看最新可安装的软件 yum --disablerepo* --enablerepokubernetes list available3.3 安装kubeadm、kubectl、kubelet 我这里装的版本是1.28.2 yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 systemctl start kubelet systemctl enable kubelet#查看错误日志 journalctl -u kubelet4、部署集群 查询各个组件的版本 kubeadm config images list4.1 初始化master 只要在主节点上执行 kubeadm init --kubernetes-version1.28.13 \ --apiserver-advertise-address192.168.59.148 \ --image-repository registry.aliyuncs.com/google_containers \ --service-cidr10.140.0.0/16 \ --pod-network-cidr10.244.0.0/16参数注释 –apiserver-advertise-address 指明用Master的哪个interface与Cluster 的其他节点通信。 如果Master有多个interface 建议明确指定 如果 不指定 kubeadm会自动选择有默认网关的interface。 –pod-network-cidr 选择一个Pod网络插件并检查它是否需要在初始化Master时指定一些参数它的值取决于你在下一步选择的哪个网络网络插件这里选择Flannel的网络插件参数为 10.244.0.0/16。Calico网络为192.168.0.0/16。参考Installing a pod network add-on -service-cidr ​ 选择service网络 –image-repository 使用kubeadm config images pull来预先拉取初始化需要用到的镜像用来检查是否能连接到Kubenetes的Registries。Kubenetes默认Registries地址是k8s.gcr.io很明显在国内并不能访问gcr.io因此在kubeadm v1.13之前的版本安装起来非常麻烦但是在1.13版本中终于解决了国内的痛点其增加了一个–image-repository参数默认值是k8s.gcr.io我们将其指定为国内镜像地址registry.aliyuncs.com/google_containers。 –kubernetes-version 默认值是stable-1会导致从https://dl.k8s.io/release/stable-1.txt下载最新的版本号我们可以将其指定为固定版本来跳过网络请求。 4.2 报错以及问题处理 查看报错命令 journalctl -xeu kubelet 问题一 node节点也要注释掉 [init] Using Kubernetes version: v1.28.13 [preflight] Running pre-flight checks error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR CRI]: container runtime is not running: output: time“2024-09-12T14:01:0308:00” levelfatal msg“validate service connection: CRI v1 runtime API is not implemented for endpoint “unix:///var/run/containerd/containerd.sock”: rpc error: code Unimplemented desc unknown service runtime.v1.RuntimeService” , error: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors... To see the stack trace of this error execute with --v5 or higher 查看版本没问题看看有没有启动 [rootlocalhost home]# containerd -v containerd containerd.io 1.6.33 d2d58213f83a351ca8f528a95fbd145f5654e957 [rootlocalhost home]# docker -v Docker version 26.1.4, build 5650f9b 编辑以下文件将下面一行内容注释掉 vim /etc/containerd/config.toml #disabled_plugins [“cri”] 原因containerd安装的默认禁用重点 使用安装包安装的containerd会默认禁用作为容器运行时的功能即安装包安装containerd后默认禁用containerd作为容器运行时 这个时候使用k8s就会报错了因为没有容器运行时可以用 开启方法就是将/etc/containerd/config.toml文件中的disabled_plugins的值的列表中不包含cri 修改后重启containerd才会生效 systemctl restart containerd 问题二 如果kubernets初始化时失败后第二次再次执行会初始化命令会报错这时需要进行重置 [rootlocalhost home]# kubeadm init --kubernetes-version1.28.13 --apiserver-advertise-address192.168.59.148 --image-repository registry.aliyuncs.com/google_containers --service-cidr10.140.0.0/16 --pod-network-cidr10.244.0.0/16 [init] Using Kubernetes version: v1.28.13 [preflight] Running pre-flight checks error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Port-6443]: Port 6443 is in use [ERROR Port-10259]: Port 10259 is in use [ERROR Port-10257]: Port 10257 is in use [ERROR FileAvailable–etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable–etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable–etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable–etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [ERROR Port-10250]: Port 10250 is in use [ERROR Port-2379]: Port 2379 is in use [ERROR Port-2380]: Port 2380 is in use [ERROR DirAvailable–var-lib-etcd]: /var/lib/etcd is not empty [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors... To see the stack trace of this error execute with --v5 or higher 解决方法 kubeadm reset问题三 驱动加载 这个问题我没遇到 执行下面这两个命令 modprobe br_netfilter bridge问题四 Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - ‘systemctl status kubelet’ - ‘journalctl -xeu kubelet’ Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl: - ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause’ Once you have found the failing container, you can inspect its logs with: - ‘crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID’ error execution phase wait-control-plane: couldn’t initialize a Kubernetes cluster To see the stack trace of this error execute with --v5 or higher 使用 journalctl -xeu kubelet 查看报错 failed to resolve reference \registry.k8s.io/pause:3.6 解决方法 #生成 containerd 的默认配置文件 containerd config default /etc/containerd/config.toml #查看 sandbox 的默认镜像仓库在文件中的第几行 cat /etc/containerd/config.toml | grep -n “sandbox_image” #使用 vim 编辑器 定位到 sandbox_image将 仓库地址修改成 registry.aliyuncs.com/google_containers/pause:3.6 vim /etc/containerd/config.toml sandbox_image “registry.aliyuncs.com/google_containers/pause:3.6” #重启 containerd 服务 systemctl daemon-reload systemctl restart containerd.service 记得要 kubeadm reset 4.3执行成功 Your Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user:mkdir -p $HOME/.kubesudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configsudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run:export KUBECONFIG/etc/kubernetes/admin.confYou should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork].yaml with one of the options listed at:https://kubernetes.io/docs/concepts/cluster-administration/addons/Then you can join any number of worker nodes by running the following on each as root:kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 \--discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1初始化完成后会生成一串命令用于node节点的加入 4.4关于token token一般24小时候就会过期 查看当前token [rootlocalhost home]# kubeadm token list TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS 3otopj.v2r7x7gcpa4j1tv3 23h 2024-09-13T06:41:42Z authentication,signing The default bootstrap token generated by kubeadm init. system:bootstrappers:kubeadm:default-node-token 查看本机sha256值 openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der重新生成token kubeadm token create重新生成token并打印出join命令 kubeadm token create --print-join-command如果要加入master节点需要先生成certificate-key1.16版本前参数为–experimental-upload-certs1.16及1.16版本以后为–upload-certs kubeadm init phase upload-certs --upload-certs结合上面join和certs的同样1.16版本前参数为–experimental-control-plane --certificate-key 1.16及1.16版本以后为–control-plane --certificate-key kubeadm join 192.168.59.148:6443 --token fpjwdf.p9bnbqf7cpvf1amc --discovery-token-ca-cert-hash sha256:dd3cb5208a4ca032e85a5a30b9b02f963aff2fece13045cf8c74d7b9ed7f6098 --control-plane --certificate-key 820908fa5d83b9a7314a58147b80d0dc81b4f7469c9c8f72fb49b4fba2652c29 4.5配置kubectl 执行上面返回的命令 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/configroot用户执行永久生效 echo export KUBECONFIG/etc/kubernetes/admin.conf /etc/profile.d/kubeconfig.sh source /etc/profile.d/kubeconfig.sh不然就临时生效 export KUBECONFIG/etc/kubernetes/admin.conf将admin.conf拷贝到其他需要使用kunectl命令的node节点 scp /etc/kubernetes/admin.conf root192.168.59.149:/etc/kubernetes/ scp /etc/kubernetes/admin.conf root192.168.59.150:/etc/kubernetes/一样执行生效 export KUBECONFIG/etc/kubernetes/admin.conf 或者 echo “export KUBECONFIG/etc/kubernetes/admin.conf” /etc/profile.d/kubeconfig.sh source /etc/profile.d/kubeconfig.sh 4.6加入节点 在除master外其他node节点执行上面的join命令加入k8s集群 kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 --discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1加入成功 [rootlocalhost home]# kubeadm join 192.168.59.148:6443 --token 3otopj.v2r7x7gcpa4j1tv3 --discovery-token-ca-cert-hash sha256:b881ce5117a2ed28cb4f86963b462cc77976194c33c9314dbf4647f011354dc1 [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml [kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml [kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details.Run kubectl get nodes on the control-plane to see this node join the cluster. 查看节点 [rootlocalhost home]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8smaster NotReady control-plane 32m v1.28.2 k8snode1 NotReady none 13s v1.28.2 k8snode2 NotReady none 5s v1.28.24.7移除节点node 不移除的可以直接下一步 [rootlocalhost flanneld]# kubectl drain k8snode2 --delete-local-data --force --ignore-daemonsets Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data. node/k8snode2 cordoned Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-p8cxh evicting pod tigera-operator/tigera-operator-748c69cf45-9clh2pod/tigera-operator-748c69cf45-9clh2 evicted node/k8snode2 drained [rootlocalhost flanneld]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8smaster Ready control-plane 3h13m v1.28.2 k8snode1 NotReady none 161m v1.28.2 k8snode2 NotReady,SchedulingDisabled none 161m v1.28.2 [rootlocalhost flanneld]# [rootlocalhost flanneld]# kubectl delete node k8snode2 node k8snode2 deleted [rootlocalhost flanneld]# pwd /data/flanneld [rootlocalhost flanneld]# cd /etc/kubernetes/ [rootlocalhost kubernetes]# ll 总用量 32 -rw-------. 1 root root 5650 9月 12 14:41 admin.conf -rw-------. 1 root root 5682 9月 12 14:41 controller-manager.conf -rw-------. 1 root root 1982 9月 12 14:41 kubelet.conf drwxr-xr-x. 2 root root 113 9月 12 14:41 manifests drwxr-xr-x. 3 root root 4096 9月 12 14:41 pki -rw-------. 1 root root 5626 9月 12 14:41 scheduler.conf [rootlocalhost kubernetes]# kubeadm reset -f [reset] Reading configuration from the cluster... [reset] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml [preflight] Running pre-flight checks [reset] Deleted contents of the etcd data directory: /var/lib/etcd [reset] Stopping the kubelet service [reset] Unmounting mounted directories in /var/lib/kubelet [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.dThe reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the iptables command.If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your systems IPVS tables.The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file. [rootlocalhost kubernetes]# ls manifests pki# 重新加入 上面的 kubeadm join 5、安装CNI网络插件 master上执行安装flannel 网络插件 下载yaml文件网咯会有波动可以多wget几次 wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml注意net-conf.json的Network配置 要是上面init pod-network-cidr的网段地址 vim kube-flannel.yml 安装插件 kubectl apply -f kube-flannel.ymlkubectl get pods -n kube-flannel kubectl get nodes出现问题 网络实际没连上 k8s flannel网络插件国内镜像docker拉取不到 docker.io/flannel/flannel:v0.25.6 解决方案手动到github下载然后docker构建 下载这两个 根据kube-flannel.yml里面的版本去找 flannel:v0.25.6 flannel-cni-plugin:v1.5.1-flannel2 [rootlocalhost flanneld]# docker import flannel-v0.25.6-linux-amd64.tar.gz flannel/flannel:v0.25.6 sha256:5c76b00ff15dfc6d452f1dcce31d7508e13363c9ab9beeddd90dd1a6204fcab8 [rootlocalhost flanneld]# docker import cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz flannel/flannel-cni-plugin:v1.5.1-flannel2 sha256:fd42d9ebb5885a5889bb0211e560b04b18dab401e3b63e777d4d1f358a847df6 构建成功会有两个包 将这两个打成tar包 [rootlocalhost flanneld]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE flannel/flannel-cni-plugin v1.5.1-flannel2 fd42d9ebb588 12 minutes ago 2.54MB flannel/flannel v0.25.6 5c76b00ff15d 12 minutes ago 42.8MB [rootlocalhost flanneld]# docker save flannel/flannel:v0.25.6 cowardly refusing to save to a terminal. Use the -o flag or redirect [rootlocalhost flanneld]# docker save flannel/flannel:v0.25.6 -o flannel-v0.25.6.tar [rootlocalhost flanneld]# ll 总用量 55832 -rw-r--r--. 1 root root 1080975 9月 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -rw-r--r--. 1 root root 13305488 9月 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz -rw-------. 1 root root 42772992 9月 12 16:55 flannel-v0.25.6.tar -rw-r--r--. 1 root root 4345 9月 12 15:41 kube-flannel.yml [rootlocalhost flanneld]# docker save flannel/flannel-cni-plugin:v1.5.1-flannel2 -o cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar [rootlocalhost flanneld]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE flannel/flannel-cni-plugin v1.5.1-flannel2 fd42d9ebb588 14 minutes ago 2.54MB flannel/flannel v0.25.6 5c76b00ff15d 15 minutes ago 42.8MB [rootlocalhost flanneld]# ll 总用量 58336 -rw-------. 1 root root 2560512 9月 12 16:56 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar -rw-r--r--. 1 root root 1080975 9月 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -rw-r--r--. 1 root root 13305488 9月 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz -rw-------. 1 root root 42772992 9月 12 16:55 flannel-v0.25.6.tar -rw-r--r--. 1 root root 4345 9月 12 15:41 kube-flannel.yml [rootlocalhost flanneld]# 将tar镜像压缩包导入到containerd的k8s.io命名空间中 [rootlocalhost flanneld]# ll 总用量 58336 -rw-------. 1 root root 2560512 9月 12 16:56 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar -rw-r--r--. 1 root root 1080975 9月 12 16:30 cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -rw-r--r--. 1 root root 13305488 9月 12 16:15 flannel-v0.25.6-linux-amd64.tar.gz -rw-------. 1 root root 42772992 9月 12 16:55 flannel-v0.25.6.tar -rw-r--r--. 1 root root 4345 9月 12 15:41 kube-flannel.yml [rootlocalhost flanneld]# sudo ctr -n k8s.io images import cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tar unpacking docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2 (sha256:2e67e1ceda143a11deca57c0bd3145c9a1998d78d1084e3028c26ae6ceea233f)...done [rootlocalhost flanneld]# sudo ctr -n k8s.io images import flannel-v0.25.6.tar unpacking docker.io/flannel/flannel:v0.25.6 (sha256:7dcf8fbbc9e9acbe2e5e3e7321b74aa357a5f4246152f6539da903370fc3f999)...done [rootlocalhost flanneld]# 检查是否成功 sudo ctr -n k8s.io i check | grep flannel然后修改 kube-flannel.yml 文件 --- kind: Namespace apiVersion: v1 metadata:name: kube-flannellabels:k8s-app: flannelpod-security.kubernetes.io/enforce: privileged --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata:labels:k8s-app: flannelname: flannel rules: - apiGroups:- resources:- podsverbs:- get - apiGroups:- resources:- nodesverbs:- get- list- watch - apiGroups:- resources:- nodes/statusverbs:- patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata:labels:k8s-app: flannelname: flannel roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: flannel subjects: - kind: ServiceAccountname: flannelnamespace: kube-flannel --- apiVersion: v1 kind: ServiceAccount metadata:labels:k8s-app: flannelname: flannelnamespace: kube-flannel --- kind: ConfigMap apiVersion: v1 metadata:name: kube-flannel-cfgnamespace: kube-flannellabels:tier: nodek8s-app: flannelapp: flannel data:cni-conf.json: |{name: cbr0,cniVersion: 0.3.1,plugins: [{type: flannel,delegate: {hairpinMode: true,isDefaultGateway: true}},{type: portmap,capabilities: {portMappings: true}}]}net-conf.json: |{Network: 10.244.0.0/16,EnableNFTables: false,Backend: {Type: vxlan}} --- apiVersion: apps/v1 kind: DaemonSet metadata:name: kube-flannel-dsnamespace: kube-flannellabels:tier: nodeapp: flannelk8s-app: flannel spec:selector:matchLabels:app: flanneltemplate:metadata:labels:tier: nodeapp: flannelspec:affinity:nodeAffinity:requiredDuringSchedulingIgnoredDuringExecution:nodeSelectorTerms:- matchExpressions:- key: kubernetes.io/osoperator: Invalues:- linuxhostNetwork: truepriorityClassName: system-node-criticaltolerations:- operator: Existseffect: NoScheduleserviceAccountName: flannelinitContainers:- name: install-cni-pluginimage: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2imagePullPolicy: Nevercommand:- cpargs:- -f- /flannel- /opt/cni/bin/flannelvolumeMounts:- name: cni-pluginmountPath: /opt/cni/bin- name: install-cniimage: docker.io/flannel/flannel:v0.25.6imagePullPolicy: Nevercommand:- cpargs:- -f- /etc/kube-flannel/cni-conf.json- /etc/cni/net.d/10-flannel.conflistvolumeMounts:- name: cnimountPath: /etc/cni/net.d- name: flannel-cfgmountPath: /etc/kube-flannel/containers:- name: kube-flannelimage: docker.io/flannel/flannel:v0.25.6imagePullPolicy: Nevercommand:- /opt/bin/flanneldargs:- --ip-masq- --kube-subnet-mgrresources:requests:cpu: 100mmemory: 50MisecurityContext:privileged: falsecapabilities:add: [NET_ADMIN, NET_RAW]env:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: EVENT_QUEUE_DEPTHvalue: 5000volumeMounts:- name: runmountPath: /run/flannel- name: flannel-cfgmountPath: /etc/kube-flannel/- name: xtables-lockmountPath: /run/xtables.lockvolumes:- name: runhostPath:path: /run/flannel- name: cni-pluginhostPath:path: /opt/cni/bin- name: cnihostPath:path: /etc/cni/net.d- name: flannel-cfgconfigMap:name: kube-flannel-cfg- name: xtables-lockhostPath:path: /run/xtables.locktype: FileOrCreate先删除 kubectl delete -f kube-flannel.yml再构建 kubectl apply -f kube-flannel.yml还是失败了最后找了个大佬的github 编辑kube-flannel.yml 加上 m.daocloud.io/ 前缀 [rootk8smaster flanneld]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8smaster Ready control-plane 19h v1.28.2 [rootk8smaster flanneld]# kubectl get pods -n kube-flannel NAME READY STATUS RESTARTS AGE kube-flannel-ds-g8mng 1/1 Running 0 8m52s 卸载命令 kubectl delete -f kube-flannel.yml 安装calico 我这里直接 kubeadm reset 了 重新来一遍master和node节点都resetinit的时候–pod-network-cidr192.168.0.0/16calico是192.168.0.0 calico官方地址 kubectl create -f https://raw.gitmirror.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yamlwget https://raw.gitmirror.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml vim custom-resources.yaml #把里边cidrip 更换根据你pod-network-cidr的参数更换 cidr: 10.244.0.0/16构建 kubectl create -f custom-resources.yaml 查看 kubectl get pod -A最后还是不行启动不了也是网络问题
http://www.dnsts.com.cn/news/118852.html

相关文章:

  • 网站制作 合同网站制作和如何推广
  • 郑州专业做网站的制作网络游戏
  • 电竞网站建设方案建网站资料
  • 衡水做外贸网站win7系统可以做网站吗
  • 福州市做公司网站哪家好网易公司邮箱
  • 西安php网站开发培训班wordpress搭建discuz
  • 学生做网站教程金融互助网站建设
  • php做网站登陆验证下载58同城网招聘找工作
  • 个人游戏网站备案河北省住房和城乡建设部网站
  • 网站制作毕业设计本地的南通网站建设
  • 360网站提交网页设计总结报告500字
  • 知名网站服务器wordpress字体更改
  • 2017年网站推广怎么做2024年小微企业100至300万
  • 向网站上传文件怎么做seo和sem的区别是什么
  • 宛城区微网站开发wordpress登陆函数
  • python 做的网站有哪些辽宁省网站制作公司排名
  • iis6cgi php网站缓存asp.net 价格查询网站
  • 甘肃省建设工程网站南昌vr网站开发
  • 自助建站免费自助建站网站免费外网服务器ip地址
  • 电子商务网站建设的总体目标策划方案模板
  • 太空为什么要建站网站建设愿景
  • 购物网站开发周期北京专业做网站公司哪家好
  • 如何做网站电话建站公司不给源码
  • 做外贸的网站要多少钱删除 感谢使用wordpress进行创作
  • 佛山网站建设公司如何组建网站建设价格差别
  • 营销型网站建设公司推荐小微企业2022年税收优惠政策
  • 无锡网站建设推广公司爱玖货源站
  • 佛山外贸网站建设公司如何做网站的映射
  • 阳春市住房规划建设局网站制作html网页的软件
  • 聚美优品的pc网站建设潍坊住房公积金贷款额度