
Deploying a Kubernetes Cluster from Binaries (CentOS 7)

Contents

1. Environment Preparation
1.1 Host Configuration
1.2 Install Docker
1.3 Generate TLS Certificates
1.3.1 Generate the CA Certificate (all hosts)
1.3.2 Generate the Server Certificate (all hosts)
1.3.3 Generate the admin Certificate (all hosts)
1.3.4 Generate the kube-proxy Certificate
2. Deploy the Etcd Cluster
2.1 Deploy the Etcd Node on the k8s-master Host
2.2 Deploy the Etcd Nodes on the k8s-node1 and k8s-node2 Hosts
2.3 Check the Etcd Cluster Status
3. Deploy the Flannel Network
3.1 Write the Pod Subnet to Etcd
3.2 Configure Flannel
3.3 Start Flannel
3.4 Verify That Flanneld Works
4. Deploy the Kubernetes Master Components
4.1 Add the kubectl Command Environment
4.2 Create the TLS Bootstrapping Token
4.3 Create the kubelet kubeconfig
4.4 Create the kube-proxy kubeconfig
4.5 Deploy kube-apiserver
4.6 Deploy kube-controller-manager
4.7 Deploy kube-scheduler
4.8 Check That the Components Run Normally
5. Deploy the Kubernetes Node Components
5.1 Prepare the Environment
5.2 Deploy kubelet
5.3 Deploy kube-proxy
5.4 Check That the Node Components Are Installed
5.5 Approve the Automatically Issued Certificates
6. Create an Nginx Service with a Deployment

1. Environment Preparation

Binary packages required for this deployment: https://pan.baidu.com/s/1LHnJjn4mbG0dRoDzChVIfg?pwd=uz4m (extraction code: uz4m)

Operating system   IP address      Hostname
CentOS 7.x         192.168.2.116   k8s-master
CentOS 7.x         192.168.2.117   k8s-node1
CentOS 7.x         192.168.2.118   k8s-node2

Note: every host should have at least 2 CPU cores and 2 GB of memory.

1.1 Host Configuration

Set the hostname on each of the three hosts:

[root@localhost ~]# hostname k8s-master
[root@localhost ~]# bash
[root@k8s-master ~]#

[root@localhost ~]# hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]#

[root@localhost ~]# hostname k8s-node2
[root@localhost ~]# bash
[root@k8s-node2 ~]#

Add name-resolution records to the hosts file on all three hosts:

[root@k8s-master ~]# cat <<EOF >> /etc/hosts
192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2
EOF
[root@k8s-master ~]# scp /etc/hosts 192.168.2.117:/etc/
[root@k8s-master ~]# scp /etc/hosts 192.168.2.118:/etc/

1.2 Install Docker

Install and configure Docker on all hosts:

[root@k8s-master ~]# yum -y install iptable* wget telnet lsof vim rsync lrzsz net-tools unzip
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dockerhub.azk8s.cn", "https://hub-mirror.c.163.com"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

Kubernetes generates iptables rules when it creates containers, so replace the CentOS default firewalld with iptables. Configure the firewall on all hosts:

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# systemctl start iptables
[root@k8s-master ~]# iptables -F
[root@k8s-master ~]# iptables -I INPUT -s 192.168.2.0/24 -j ACCEPT

Disable SELinux:

[root@k8s-master ~]# sed -i '/^SELINUX/s/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
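If image pulls are slow or fail later on, it is worth confirming that Docker actually picked up the mirror configuration. A quick check on any host (output abbreviated and illustrative; it should echo the two mirrors from the daemon.json above):

[root@k8s-master ~]# docker info | grep -A 2 "Registry Mirrors"
 Registry Mirrors:
  https://dockerhub.azk8s.cn/
  https://hub-mirror.c.163.com/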
1.3 Generate TLS Certificates

The Kubernetes components encrypt their communication with TLS certificates. This lab uses CFSSL, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) and the other certificates. Perform these steps on all hosts.

Kubernetes tools: https://pan.baidu.com/s/16GaKmbCBjWr8ZIAf3QCYNQ?pwd=62fn (extraction code: 62fn)

[root@k8s-master ~]# tar xzf kubernetes-server-linux-amd64.tar.gz

1.3.1 Generate the CA Certificate (all hosts)

CA certificate tools: https://pan.baidu.com/s/1HY_5YXpyFO9OKagyjeq2NA?pwd=zvi3 (extraction code: zvi3)

Create a location for the certificate tools and install them:

[root@k8s-master ~]# cd /usr/local/bin/
[root@k8s-master bin]# rz    # upload the tools
[root@k8s-master bin]# mv cfssl_linux-amd64 ./cfssl
[root@k8s-master bin]# mv cfssljson_linux-amd64 ./cfssljson
[root@k8s-master bin]# mv cfssl-certinfo_linux-amd64 ./cfssl-certinfo
[root@k8s-master bin]# chmod +x ./*
[root@k8s-master bin]# ll
total 18808
-rwxr-xr-x. 1 root root 10376657 Jul  9  2020 cfssl
-rwxr-xr-x. 1 root root  6595195 Jul  9  2020 cfssl-certinfo
-rwxr-xr-x. 1 root root  2277873 Jul  9  2020 cfssljson
[root@k8s-master ~]# cfssl --help
Usage:
Available commands:
        ocsprefresh scan genkey ocspdump ocspsign ocspserve sign serve gencert
        selfsign revoke certinfo version info print-defaults bundle gencrl
Top-level flags:
  -allow_verification_with_non_compliant_keys
        Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
  -loglevel int
        Log level (0 = DEBUG, 5 = FATAL) (default 1)

Create the certificate-generation configuration files:

[root@k8s-master ~]# cat <<EOF > ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

[root@k8s-master ~]# cat <<EOF > ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the CA certificate:

[root@k8s-master ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2023/08/10 19:44:09 [INFO] generating a new CA key and certificate from CSR
2023/08/10 19:44:09 [INFO] generate received request
2023/08/10 19:44:09 [INFO] received CSR
2023/08/10 19:44:09 [INFO] generating key: rsa-2048
2023/08/10 19:44:09 [INFO] encoded CSR
2023/08/10 19:44:09 [INFO] signed certificate with serial number 232408171082706122668724082483527707664314357277

1.3.2 Generate the Server Certificate (all hosts)

Create the server-csr.json file and generate the server certificate. The IP addresses configured in the file belong to the hosts that will use the certificate; fill them in according to your own environment. 10.10.10.1 is the address of Kubernetes' built-in Service.

[root@k8s-master ~]# vim server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.2.116",
    "192.168.2.117",
    "192.168.2.118",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2023/08/10 19:57:50 [INFO] generate received request
2023/08/10 19:57:50 [INFO] received CSR
2023/08/10 19:57:50 [INFO] generating key: rsa-2048
2023/08/10 19:57:50 [INFO] encoded CSR
2023/08/10 19:57:50 [INFO] signed certificate with serial number 424188719705968634905526760201201991499922096108
2023/08/10 19:57:50 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

1.3.3 Generate the admin Certificate (all hosts)

Create the admin-csr.json file and generate the admin certificate. The admin certificate is used by administrators to access the cluster.

[root@k8s-master ~]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2023/08/10 20:03:12 [INFO] generate received request
2023/08/10 20:03:12 [INFO] received CSR
2023/08/10 20:03:12 [INFO] generating key: rsa-2048
2023/08/10 20:03:12 [INFO] encoded CSR
2023/08/10 20:03:12 [INFO] signed certificate with serial number 159836210599051633906118237113258532670720286284
2023/08/10 20:03:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
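Before generating the remaining certificates, it can help to confirm what was actually signed. cfssl-certinfo, installed alongside cfssl above, decodes a certificate; a sketch against the server certificate (output trimmed to the interesting fields, values illustrative):

[root@k8s-master ~]# cfssl-certinfo -cert server.pem
{
  "subject": {
    "common_name": "kubernetes",
    "organization": "k8s",
    ...
  },
  "sans": [
    "kubernetes",
    "kubernetes.default",
    "127.0.0.1",
    "192.168.2.116",
    ...
  ],
  "not_after": "2033-08-07T11:57:50Z",
  ...
}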
1.3.4 Generate the kube-proxy Certificate

Create the kube-proxy-csr.json file and generate the certificate:

[root@k8s-master ~]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2023/08/10 20:05:09 [INFO] generate received request
2023/08/10 20:05:09 [INFO] received CSR
2023/08/10 20:05:09 [INFO] generating key: rsa-2048
2023/08/10 20:05:10 [INFO] encoded CSR
2023/08/10 20:05:10 [INFO] signed certificate with serial number 59446791205648555156331506972188557314618920013
2023/08/10 20:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Delete the json files, keeping only the pem certificates:

[root@k8s-master ~]# ls | grep -v pem | xargs -i rm {}    # keep only the pem certificates
[root@k8s-master ~]# ll
total 32
-rw------- 1 root root 1679 Aug 10 20:03 admin-key.pem
-rw-r--r-- 1 root root 1399 Aug 10 20:03 admin.pem
-rw------- 1 root root 1679 Aug 10 19:44 ca-key.pem
-rw-r--r-- 1 root root 1359 Aug 10 19:44 ca.pem
-rw------- 1 root root 1679 Aug 10 20:05 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Aug 10 20:05 kube-proxy.pem
drwxr-xr-x 4 root root   79 Feb 12  2020 kubernetes
-rw------- 1 root root 1679 Aug 10 19:57 server-key.pem
-rw-r--r-- 1 root root 1627 Aug 10 19:57 server.pem
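As an optional sanity check before these files are used everywhere, each certificate can be verified against the CA. openssl ships with CentOS 7, so a sketch like this needs nothing extra:

[root@k8s-master ~]# for cert in server admin kube-proxy; do openssl verify -CAfile ca.pem ${cert}.pem; done
server.pem: OK
admin.pem: OK
kube-proxy.pem: OK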
2. Deploy the Etcd Cluster

Create the configuration directories:

[root@k8s-master ~]# mkdir /opt/kubernetes
[root@k8s-master ~]# mkdir /opt/kubernetes/{bin,cfg,ssl}

Upload the etcd-v3.3.18-linux-amd64.tar.gz package, unpack it, and copy the binaries:

[root@k8s-master ~]# tar xf etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master ~]# cd etcd-v3.3.18-linux-amd64
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcd /opt/kubernetes/bin/
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcdctl /opt/kubernetes/bin/

With the configuration directories created and the Etcd packages in place, configure the Etcd cluster as follows.

2.1 Deploy the Etcd Node on the k8s-master Host

Create the Etcd configuration file:

[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.116:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.116:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.116:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.116:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the systemd unit file:

[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Copy the certificates that Etcd depends on at startup:

[root@k8s-master ~]# ls
admin-key.pem  ca-key.pem  etcd-v3.3.18-linux-amd64         kube-proxy-key.pem  kubernetes      server.pem
admin.pem      ca.pem      etcd-v3.3.18-linux-amd64.tar.gz  kube-proxy.pem      server-key.pem
[root@k8s-master ~]# cp ca*.pem server*.pem /opt/kubernetes/ssl/

Start the Etcd master node. If the start command appears to hang, press Ctrl+C; the etcd process is in fact already running, but it times out while connecting to the other two members, which have not been started yet. (It is easier to prepare the node members below first, then start.)

[root@k8s-master software]# systemctl start etcd
[root@k8s-master software]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Check that Etcd started:

[root@k8s-master software]# ps aux | grep etcd
root 10755 1.0 1.1 10610764 46032 ? Ssl 14:50 0:01 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.116:2380 --listen-client-urls=https://192.168.2.116:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.116:2379 --initial-advertise-peer-urls=https://192.168.2.116:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 10798 0.0 0.0 112828 980 pts/1 S+ 14:53 0:00 grep --color=auto etcd
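If the start step does seem to hang, etcd is usually just waiting for a quorum. Watching the unit's journal confirms that it is retrying its peers rather than failing outright; a quick way to watch, assuming default systemd-journald logging (exact messages vary by version):

[root@k8s-master ~]# journalctl -u etcd -f
# expect repeated connection errors for 192.168.2.117 and 192.168.2.118
# until those members are started in the next step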
2.2 Deploy the Etcd Nodes on the k8s-node1 and k8s-node2 Hosts

Copy the Etcd files to the compute nodes, then change the IP addresses in each node's configuration to match the host:

[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.117:/opt/kubernetes/
root@192.168.2.117's password:
sending incremental file list
bin/
bin/etcd
bin/etcdctl
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
ssl/server-key.pem
ssl/server.pem

sent 14,065,864 bytes  received 200 bytes  1,339,625.14 bytes/sec
total size is 168,388,923  speedup is 11.97
[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.118:/opt/kubernetes/
root@192.168.2.118's password:
sending incremental file list
bin/
bin/etcd
bin/etcdctl
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
ssl/server-key.pem
ssl/server.pem

sent 14,065,864 bytes  received 200 bytes  1,654,831.06 bytes/sec
total size is 168,388,923  speedup is 11.97

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.117:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.117:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.117:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.117:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.118:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.118:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.118:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.118:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Copy the unit file to both nodes:

[root@k8s-master software]# scp /usr/lib/systemd/system/etcd.service 192.168.2.117:/usr/lib/systemd/system/
root@192.168.2.117's password:
etcd.service                         100%  994     1.8MB/s   00:00
[root@k8s-master software]# scp /usr/lib/systemd/system/etcd.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password:
etcd.service                         100%  994     1.8MB/s   00:00

Start Etcd on the node hosts:

[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

[root@k8s-node2 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

2.3 Check the Etcd Cluster Status

Add the Etcd binaries to the global PATH. Do this on all nodes:

[root@k8s-master ~]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin
[root@k8s-master ~]# source /etc/profile

Check the cluster status:

[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" cluster-health
member 2e77788f6268c28d is healthy: got healthy result from https://192.168.2.117:2379
member 60b0a20770468ca4 is healthy: got healthy result from https://192.168.2.116:2379
member 980d2d199a3b6f16 is healthy: got healthy result from https://192.168.2.118:2379
cluster is healthy

The Etcd cluster deployment is complete.
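cluster-health only reports liveness; member list additionally shows each member's name, URLs, and which member currently leads. A sketch using the same TLS flags (member IDs match the cluster-health output above; the isLeader values are illustrative):

[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379" member list
2e77788f6268c28d: name=etcd02 peerURLs=https://192.168.2.117:2380 clientURLs=https://192.168.2.117:2379 isLeader=false
60b0a20770468ca4: name=etcd01 peerURLs=https://192.168.2.116:2380 clientURLs=https://192.168.2.116:2379 isLeader=true
980d2d199a3b6f16: name=etcd03 peerURLs=https://192.168.2.118:2380 clientURLs=https://192.168.2.118:2379 isLeader=false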
3. Deploy the Flannel Network

Flannel is an overlay network: it encapsulates the source packet inside another network packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, and GCE route-based data forwarding. Other mainstream solutions for multi-host container networking include tunnel schemes (Weave, Open vSwitch) and routing schemes (Calico).

3.1 Write the Pod Subnet to Etcd

On the master node, write the subnet configuration to Etcd for Flanneld to use:

[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}

Upload the flannel-v0.12.0-linux-amd64.tar.gz package, unpack the Flannel binaries, and copy them to the node hosts:

[root@k8s-master ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.117:/opt/kubernetes/bin/
root@192.168.2.117's password:
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.118:/opt/kubernetes/bin/
root@192.168.2.118's password:

3.2 Configure Flannel

Edit the flanneld configuration file on both k8s-node1 and k8s-node2; k8s-node1 is shown below as the example:

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

[root@k8s-node1 ~]# scp /opt/kubernetes/cfg/flanneld 192.168.2.118:/opt/kubernetes/cfg/flanneld
The authenticity of host '192.168.2.118 (192.168.2.118)' can't be established.
ECDSA key fingerprint is SHA256:Xw4oZiqfBLevo6o1blQqSAQlde5FbnrawBscx/dh0.
ECDSA key fingerprint is MD5:fd:e9:93:a2:fe:a1:f1:15:8d:f2:d8:c9:31:35:8c:85.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.118' (ECDSA) to the list of known hosts.
root@192.168.2.118's password:
flanneld                             100%  251   443.9KB/s   00:00

On both k8s-node1 and k8s-node2, create the flanneld.service unit to manage Flanneld:

[root@k8s-node1 ~]# cat <<EOF > /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

[root@k8s-node1 ~]# scp /usr/lib/systemd/system/flanneld.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password:
flanneld.service                     100%  398   708.4KB/s   00:00

On both k8s-node1 and k8s-node2, configure Docker to start with the Flannel-assigned network segment by editing the Docker unit file:

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env    # added in the [Service] section so the Docker bridge hands out IPs in the same segment as the Flannel bridge
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS    # $DOCKER_NETWORK_OPTIONS replaces the original ExecStart options so dockerd picks up the Flannel bridge address
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
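For reference, mk-docker-opts.sh renders the subnet that Flanneld leases into /run/flannel/subnet.env, which the EnvironmentFile line above consumes. Once Flanneld has started (next step), the file will look roughly like the following; the exact /24 differs per node because each node leases its own segment out of 172.17.0.0/16:

[root@k8s-node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.84.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.84.1/24 --ip-masq=false --mtu=1450"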
3.3 Start Flannel

Start the Flannel service on k8s-node1 and k8s-node2:

[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.84.1  netmask 255.255.255.0  broadcast 172.17.84.255
        ether 02:42:76:ad:ac:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.84.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::3058:cff:fe3f:fe1a  prefixlen 64  scopeid 0x20<link>
        ether 32:58:0c:3f:fe:1a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0

3.4 Verify That Flanneld Works

From k8s-node2, test connectivity to the Flannel bridge address on k8s-node1; output like the following means Flanneld is working:

[root@k8s-node2 ~]# ping 172.17.84.0
PING 172.17.84.0 (172.17.84.0) 56(84) bytes of data.
64 bytes from 172.17.84.0: icmp_seq=1 ttl=64 time=0.515 ms
64 bytes from 172.17.84.0: icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from 172.17.84.0: icmp_seq=3 ttl=64 time=0.226 ms
^C
--- 172.17.84.0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.206/0.315/0.515/0.142 ms

The Flannel configuration on the node hosts is now complete.

4. Deploy the Kubernetes Master Components

The binaries needed for a binary Kubernetes installation are published by Google at https://github.com/kubernetes/kubernetes/releases; pick the desired version and download the binaries from the linked CHANGELOG page.

Perform the following steps in order on the k8s-master host to deploy the master components.

4.1 Add the kubectl Command Environment

Upload the kubernetes-server-linux-amd64.tar.gz package, unpack it, and add kubectl to the command environment:

[root@k8s-master ~]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# cd kubernetes/server/bin/
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/

4.2 Create the TLS Bootstrapping Token

[root@k8s-master bin]# cd /opt/kubernetes/
[root@k8s-master kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-master kubernetes]# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

4.3 Create the kubelet kubeconfig

[root@k8s-master kubernetes]# export KUBE_APISERVER="https://192.168.2.116:6443"

(1) Set the cluster parameters:

[root@k8s-master kubernetes]# cd /root/software/ssl/
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

(2) Set the client authentication parameters:

[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.

(3) Set the context parameters:

[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
Context "default" created.

(4) Set the default context:

[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".

4.4 Create the kube-proxy kubeconfig

[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-master ssl]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
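Both kubeconfig files can be sanity-checked before they are shipped to the nodes; kubectl prints them with the embedded certificate data redacted. A sketch (output abbreviated):

[root@k8s-master ssl]# kubectl config view --kubeconfig=bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.116:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
...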
4.5 Deploy kube-apiserver

[root@k8s-master ssl]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# cp kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/
[root@k8s-master bin]# cp /opt/kubernetes/token.csv /opt/kubernetes/cfg/
[root@k8s-master bin]# cd /opt/kubernetes/bin/

Upload master.zip to the current directory, then run the apiserver deployment script:

[root@k8s-master bin]# ./apiserver.sh 192.168.2.116 https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

4.6 Deploy kube-controller-manager

[root@k8s-master bin]# sh controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

4.7 Deploy kube-scheduler

[root@k8s-master bin]# sh scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

4.8 Check That the Components Run Normally

[root@k8s-master bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

5. Deploy the Kubernetes Node Components

With the master components deployed, the node components can be deployed next, step by step.

5.1 Prepare the Environment

On the k8s-master host, ship the kubeconfig files and the node binaries to the nodes, then authorize the bootstrap user:

[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# scp *kubeconfig 192.168.2.117:/opt/kubernetes/cfg/
root@192.168.2.117's password:
bootstrap.kubeconfig                 100% 2167     2.6MB/s   00:00
kube-proxy.kubeconfig                100% 6269     8.6MB/s   00:00
[root@k8s-master ssl]# scp *kubeconfig 192.168.2.118:/opt/kubernetes/cfg/
root@192.168.2.118's password:
bootstrap.kubeconfig                 100% 2167     3.1MB/s   00:00
kube-proxy.kubeconfig                100% 6269     7.5MB/s   00:00
[root@k8s-master ssl]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.2.117:/opt/kubernetes/bin
root@192.168.2.117's password:
kubelet                              100%  106MB 129.4MB/s   00:00
kube-proxy                           100%   36MB 134.3MB/s   00:00
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.2.118:/opt/kubernetes/bin
root@192.168.2.118's password:
kubelet                              100%  106MB 120.3MB/s   00:00
kube-proxy                           100%   36MB 119.5MB/s   00:00
[root@k8s-master bin]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-master bin]# kubectl describe clusterrolebinding kubelet-bootstrap
Name:         kubelet-bootstrap
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node-bootstrapper
Subjects:
  Kind  Name               Namespace
  ----  ----               ---------
  User  kubelet-bootstrap
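The same binding can also be dumped as YAML; the roleRef and subject should match the describe output above (metadata trimmed for brevity):

[root@k8s-master bin]# kubectl get clusterrolebinding kubelet-bootstrap -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap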
5.2 Deploy kubelet

Deploy the kubelet on both k8s-node1 and k8s-node2. Upload node.zip to /opt/kubernetes/bin/ first:

[root@k8s-node1 ~]# cd /opt/kubernetes/bin/
[root@k8s-node1 bin]# unzip node.zip
Archive:  node.zip
  inflating: kubelet.sh
  inflating: proxy.sh
[root@k8s-node1 bin]# chmod +x *.sh
[root@k8s-node1 bin]# sh kubelet.sh 192.168.2.117 192.168.2.254
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@k8s-node2 bin]# unzip node.zip
Archive:  node.zip
  inflating: kubelet.sh
  inflating: proxy.sh
[root@k8s-node2 bin]# chmod +x *.sh
[root@k8s-node2 bin]# sh kubelet.sh 192.168.2.118 192.168.2.254
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

5.3 Deploy kube-proxy

Deploy kube-proxy on both k8s-node1 and k8s-node2:

[root@k8s-node1 bin]# sh proxy.sh 192.168.2.117
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node2 bin]# sh proxy.sh 192.168.2.118
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

5.4 Check That the Node Components Are Installed

[root@k8s-node2 bin]# ps -ef | grep kube
root 4859 1 1 14:51 ? 00:01:31 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.118:2380 --listen-client-urls=https://192.168.2.118:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.118:2379 --initial-advertise-peer-urls=https://192.168.2.118:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 5190 1 0 15:59 ? 00:00:01 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem
root 9001 1 0 16:45 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.2.118 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=192.168.2.254 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 9236 1 0 16:47 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root 9365 2753 0 16:48 pts/0 00:00:00 grep --color=auto kube

5.5 Approve the Automatically Issued Certificates

Once the components are deployed, the master receives the certificate requests from the nodes; approving them lets the nodes join the cluster:

[root@k8s-master bin]# kubectl get csr    # list the pending certificate requests
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs   8m26s   kubelet-bootstrap   Pending
node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c   8m27s   kubelet-bootstrap   Pending
node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk   5m21s   kubelet-bootstrap   Pending
[root@k8s-master bin]# kubectl certificate approve node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs    # allow the node to join the cluster; substitute your own CSR name
certificatesigningrequest.certificates.k8s.io/node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs approved
[root@k8s-master bin]# kubectl certificate approve node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c
certificatesigningrequest.certificates.k8s.io/node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c approved
[root@k8s-master bin]# kubectl certificate approve node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk
certificatesigningrequest.certificates.k8s.io/node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk approved
[root@k8s-master bin]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.2.117   Ready    <none>   2m41s   v1.17.3
192.168.2.118   Ready    <none>   39s     v1.17.3
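Approving each request by name gets tedious on larger clusters. Assuming every pending CSR really does come from one of your own kubelets, they can all be approved in one pass:

[root@k8s-master bin]# kubectl get csr -o name | xargs kubectl certificate approve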
6. Create an Nginx Service with a Deployment

Create the Deployment manifest:

[root@k8s-master ~]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80

Create the nginx-deployment application:

[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

View the Deployment:

[root@k8s-master ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           4m49s

View the Pods:

[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          4m52s
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          4m52s
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          4m52s

View the Deployment details:

[root@k8s-master ~]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 18 Aug 2023 16:54:56 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-fc75999cc (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m42s  deployment-controller  Scaled up replica set nginx-deployment-fc75999cc to 3

Check the Pod status again:

[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          6m8s
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          6m8s
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          6m8s

View the state of a specific Pod:

[root@k8s-master ~]# kubectl describe pod nginx-deployment-fc75999cc-f5lvg
Name:         nginx-deployment-fc75999cc-f5lvg
Namespace:    default
Node:         192.168.2.117/192.168.2.117
Start Time:   Fri, 18 Aug 2023 16:54:56 +0800
Labels:       app=nginx
              pod-template-hash=fc75999cc
Annotations:  <none>
Status:       Running
IP:           172.17.84.2
IPs:
  IP:           172.17.84.2
Controlled By:  ReplicaSet/nginx-deployment-fc75999cc
Containers:
  nginx:
    Container ID:   docker://f36134e89b059ebeb214d8ebc0ed3625af9e2a4ba8aaf27542fe1f122e832cef
    Image:          nginx:1.19.4
    Image ID:       docker-pullable://nginx@sha256:c3a1592d2b6d275bef4087573355827b200b00ffc2d9849890a4f3aa2128c4ae
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 18 Aug 2023 16:59:34 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frzl2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-frzl2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-frzl2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason     Age                    From                    Message
  ----     ------     ----                   ----                    -------
  Normal   Scheduled  <unknown>              default-scheduler       Successfully assigned default/nginx-deployment-fc75999cc-f5lvg to 192.168.2.117
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Failed to pull image "nginx:1.19.4": rpc error: code = Unknown desc = context canceled
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Error: ErrImagePull
  Normal   BackOff    4m25s                  kubelet, 192.168.2.117  Back-off pulling image "nginx:1.19.4"
  Warning  Failed     4m25s                  kubelet, 192.168.2.117  Error: ImagePullBackOff
  Normal   Pulling    4m14s (x2 over 6m47s)  kubelet, 192.168.2.117  Pulling image "nginx:1.19.4"
  Normal   Pulled     2m12s                  kubelet, 192.168.2.117  Successfully pulled image "nginx:1.19.4"
  Normal   Created    2m12s                  kubelet, 192.168.2.117  Created container nginx
  Normal   Started    2m12s                  kubelet, 192.168.2.117  Started container nginx

[root@k8s-master ~]# kubectl get pod -o wide    # created successfully; status is Running
NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-fc75999cc-f5lvg   1/1     Running   0          7m30s   172.17.84.2   192.168.2.117   <none>           <none>
nginx-deployment-fc75999cc-fdpsm   1/1     Running   0          7m30s   172.17.34.2   192.168.2.118   <none>           <none>
nginx-deployment-fc75999cc-rmblk   1/1     Running   0          7m30s   172.17.84.3   192.168.2.117   <none>           <none>

Test access to a Pod:
[root@k8s-node1 bin]# elinks --dump http://172.17.84.3
                               Welcome to nginx!

   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.

   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.

   Thank you for using nginx.

References

   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
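Pod IPs such as 172.17.84.3 are ephemeral: they change whenever a Pod is rescheduled. A Service gives the Deployment a stable virtual IP inside the cluster. As a closing sketch (the service name is arbitrary, and the assigned CLUSTER-IP, shown here as 10.10.10.x, is assumed to come from the same service range as the 10.10.10.1 address in the server certificate):

[root@k8s-master ~]# kubectl expose deployment nginx-deployment --port=80 --name=nginx-service
service/nginx-service exposed
[root@k8s-master ~]# kubectl get svc nginx-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx-service   ClusterIP   10.10.10.x   <none>        80/TCP    6s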