Kubernetes storage: GlusterFS

1. GlusterFS overview

1.1 What GlusterFS is

GlusterFS is an open-source, scalable distributed file system. It aggregates disk storage from multiple servers into a single global namespace to provide shared file storage, and it is designed for large, distributed applications that need access to large amounts of data. It runs on inexpensive commodity hardware while providing high availability and fault tolerance, which keeps the system stable and the data reliable. A GlusterFS deployment consists of storage servers, clients, and NFS/Samba storage gateways; there is no metadata server component, which helps improve the performance, reliability, and stability of the whole system.

1.2 GlusterFS features

- Scales to several petabytes of capacity
- Handles thousands of clients
- POSIX-compatible
- Built from commodity hardware; ordinary servers are enough
- Works on top of any backing filesystem that supports extended attributes, such as ext4 or XFS
- Supports industry-standard protocols such as NFS and SMB
- Provides advanced features such as replicas, quotas, geo-replication, snapshots, and bitrot detection
- Can be tuned for different workloads

1.3 GlusterFS volume modes

GlusterFS supports several volume modes, including the following:

- Distributed volume (the default, DHT): each file is placed on one server node, chosen by hash.
- Replicated volume (AFR): create the volume with "replica x"; every file is copied to x nodes.
- Striped volume: create the volume with "stripe x"; files are split into chunks spread across x nodes (similar to RAID 0).
- Distributed striped volume: needs at least 4 servers; created with "stripe 2" over 4 nodes; a combination of DHT and striping.
- Distributed replicated volume: needs at least 4 servers; created with "replica 2" over 4 nodes; a combination of DHT and AFR.
- Striped replicated volume: needs at least 4 servers; created with "stripe 2 replica 2" over 4 nodes; a combination of striping and AFR.
- All three combined: needs at least 8 servers; "stripe 2 replica 2", with every 4 nodes forming one group.

2. Heketi overview

Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes day-to-day GlusterFS operations easier for administrators. In a Kubernetes cluster, a pod's storage request is sent to heketi, and heketi then drives the GlusterFS cluster to create the corresponding volume. Heketi dynamically chooses bricks within the cluster to build the requested volumes, making sure replicas end up in different failure domains. It can also manage any number of GlusterFS clusters, so consumers are not tied to a single one.

3. Deploying heketi + GlusterFS

Environment: the latest kubeadm-installed Kubernetes, version 1.16.2, with one master and two nodes, using flannel as the network plugin. kubeadm taints the master by default; to let the master take part in the GlusterFS cluster, the taint is removed manually first.

The GlusterFS volume mode used in this article is the replicated mode.

In addition, GlusterFS has to run privileged inside the Kubernetes cluster, which requires the --allow-privileged=true flag on kube-apiserver; this version of kubeadm already enables it by default.

[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8s-master-01 ~]# kubectl taint node k8s-master-01 node-role.kubernetes.io/master-
node/k8s-master-01 untainted
[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 | grep Taint
Taints:             <none>

3.1 Preparation

To make sure pods can use GlusterFS as their backend storage, the GlusterFS client tools have to be installed in advance on every node that will run pods; other storage backends work in a similar way.

3.1.1 Install the GlusterFS client on all nodes

$ yum install -y glusterfs glusterfs-fuse

3.1.2 Label the nodes

Label the Kubernetes nodes that should run GlusterFS, because GlusterFS is installed through a DaemonSet. A DaemonSet normally runs on every node, unless a node selector is configured so that only nodes carrying the matching label are used. The install manifest schedules the DaemonSet onto nodes labeled storagenode=glusterfs, so that label has to be applied to the nodes beforehand.

[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   5d      v1.16.2
k8s-node-01     Ready    <none>   4d23h   v1.16.2
k8s-node-02     Ready    <none>   4d23h   v1.16.2
[root@k8s-master-01 ~]# kubectl label node k8s-master-01 storagenode=glusterfs
node/k8s-master-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-01 storagenode=glusterfs
node/k8s-node-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-02 storagenode=glusterfs
node/k8s-node-02 labeled
[root@k8s-master-01 ~]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE     VERSION   LABELS
k8s-master-01   Ready    master   5d      v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs
k8s-node-01     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-01,kubernetes.io/os=linux,storagenode=glusterfs
k8s-node-02     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,storagenode=glusterfs
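How the label is consumed is worth a quick look: the DaemonSet in glusterfs-daemonset.json schedules its pods through a nodeSelector. The fragment below is only a sketch of the scheduling-related part of such a DaemonSet; the upstream file is JSON and contains many more fields, and the image name here is an assumption (the commonly used gluster/gluster-centos), so check your own copy of glusterfs-daemonset.json.

# Sketch of the part of the GlusterFS DaemonSet that keys off the node label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: daemonset
  template:
    metadata:
      labels:
        glusterfs-node: daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs                 # only nodes carrying this label get a glusterfs pod
      containers:
      - name: glusterfs
        image: gluster/gluster-centos:latest   # image name assumed; verify against your manifest
        securityContext:
          privileged: true                     # gluster needs privileged mode, as noted above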
3.1.3 Load the required kernel modules on all nodes

$ modprobe dm_snapshot
$ modprobe dm_mirror
$ modprobe dm_thin_pool

Check that they are loaded:

$ lsmod | grep dm_snapshot
$ lsmod | grep dm_mirror
$ lsmod | grep dm_thin_pool

3.2 Create the GlusterFS cluster

The GlusterFS cluster is deployed here in containers; it can just as well be deployed the traditional way. In production it is usually better to run the GlusterFS cluster outside of Kubernetes and only create the corresponding Endpoints inside the cluster. Here a DaemonSet is used, which guarantees that every labeled node runs one GlusterFS pod and that each of those nodes has a disk to contribute.

3.2.1 Download the installation files

[root@k8s-master-01 glusterfs]# pwd
/root/manifests/glusterfs
[root@k8s-master-01 glusterfs]# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# cd heketi-client/share/heketi/kubernetes/
[root@k8s-master-01 kubernetes]# pwd
/root/manifests/glusterfs/heketi-client/share/heketi/kubernetes

On this cluster the DaemonSet controller used below and the Deployment controllers used later both live under the apps/v1 API version, so the downloaded JSON files have to be edited by hand before they are applied, and a selector declaration has to be added. Otherwise you run into errors like these:

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: unable to recognize "glusterfs-daemonset.json": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Change the API version from "apiVersion": "extensions/v1beta1" to "apiVersion": "apps/v1", then declare the selector:

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: error validating "glusterfs-daemonset.json": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

The selector is tied to the pod template labels further down through matchLabels:

"spec": {
    "selector": {
        "matchLabels": {
            "glusterfs-node": "daemonset"
        }
    },
...

3.2.2 Create the cluster

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
daemonset.apps/glusterfs created

Notes:

- The default mount layout is used here; another disk can be used as the GlusterFS working directory.
- The namespace used here is default; a different namespace can be specified instead.

3.2.3 Check the GlusterFS pods

[root@k8s-master-01 kubernetes]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-9tttf   1/1     Running   0          1m10s
glusterfs-gnrnr   1/1     Running   0          1m10s
glusterfs-v92j5   1/1     Running   0          1m10s

3.3 Create the heketi service

3.3.1 Create the heketi service account

[root@k8s-master-01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master-01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         71m
heketi-service-account   1         5s

3.3.2 Create heketi's permissions and secret

[root@k8s-master-01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master-01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
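The heketi.json packaged in this same kubernetes/ directory is what goes into heketi-config-secret, and it is also where the admin key used later with heketi-cli comes from. The excerpt below is an abridged sketch of the relevant part of that file, not a verbatim copy; check the file shipped with your heketi release for the full contents.

{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "My Secret" },
    "user":  { "key": "My Secret" }
  },
  "glusterfs": {
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db"
  }
}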
3.3.3 Bootstrap heketi

As before, the API version has to be changed and a selector declaration added.

[root@k8s-master-01 kubernetes]# vim heketi-bootstrap.json
...
    "kind": "Deployment",
    "apiVersion": "apps/v1",
...
    "spec": {
        "selector": {
            "matchLabels": {
                "name": "deploy-heketi"
            }
        },
...
[root@k8s-master-01 kubernetes]# kubectl create -f heketi-bootstrap.json
service/deploy-heketi created
deployment.apps/deploy-heketi created
[root@k8s-master-01 kubernetes]# vim heketi-deployment.json
...
    "kind": "Deployment",
    "apiVersion": "apps/v1",
...
    "spec": {
        "selector": {
            "matchLabels": {
                "name": "heketi"
            }
        },
        "replicas": 1,
...
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                             READY   STATUS              RESTARTS   AGE
deploy-heketi-6c687b4b84-p7mcr   1/1     Running             0          72s
heketi-68795ccd8-9726s           0/1     ContainerCreating   0          50s
glusterfs-9tttf                  1/1     Running             0          48m
glusterfs-gnrnr                  1/1     Running             0          48m
glusterfs-v92j5                  1/1     Running             0          48m

3.4 Create the GlusterFS cluster

3.4.1 Copy the binary

Copy heketi-cli to /usr/local/bin:

[root@k8s-master-01 heketi-client]# pwd
/root/manifests/glusterfs/heketi-client
[root@k8s-master-01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master-01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0

3.4.2 Configure topology-sample

Edit topology-sample.json: manage is the hostname of the node that runs the GlusterFS management service, storage is the node's IP address, and device is a raw block device on the node, i.e. the disk that provides the storage. Raw, unpartitioned devices are preferred, so a new disk has to be prepared on every GlusterFS node in advance. Here a 10G /dev/sdb disk was added to each of the three nodes.

[root@k8s-master-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-02 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom

Configure topology-sample.json:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-master-01"],
              "storage": ["192.168.2.10"]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-node-01"],
              "storage": ["192.168.2.11"]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["k8s-node-02"],
              "storage": ["192.168.2.12"]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}

3.4.3 Get the current heketi ClusterIP

Look up heketi's ClusterIP and export it as an environment variable:

[root@k8s-master-01 kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.1.241.99   <none>   8080/TCP   3m18s
[root@k8s-master-01 kubernetes]# curl http://10.1.241.99:8080/hello
Hello from Heketi
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.241.99:8080
[root@k8s-master-01 kubernetes]# echo $HEKETI_CLI_SERVER
http://10.1.241.99:8080

3.4.4 Create the GlusterFS cluster with heketi

Running the following command to load the topology fails with "Invalid JWT token: Token missing iss claim":

[root@k8s-master-01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Error: Unable to get topology information: Invalid JWT token: Token missing iss claim

This happens because newer heketi versions require the username and secret to be passed explicitly; the corresponding values are configured in heketi.json. So:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret "My Secret" topology load --json=topology-sample.json
Creating cluster ... ID: 1c5ffbd86847e5fc1562ef70c033292e
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node k8s-master-01 ... ID: b6100a5af9b47d8c1f19be0b2b4d8276
                Adding device /dev/sdb ... OK
        Creating node k8s-node-01 ... ID: 04740cac8d42f56e354c94bdbb7b8e34
                Adding device /dev/sdb ... OK
        Creating node k8s-node-02 ... ID: 1b33ad0dba20eaf23b5e3a4845e7cdb4
                Adding device /dev/sdb ... OK
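To avoid repeating --user and --secret on every call, heketi-cli can also pick up its credentials from the HEKETI_CLI_USER and HEKETI_CLI_KEY environment variables (alongside HEKETI_CLI_SERVER used above). A small convenience sketch, assuming your heketi-cli build honours these variables, with the same values configured above:

# Export the heketi credentials once instead of passing them as flags each time.
export HEKETI_CLI_SERVER=http://10.1.241.99:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY='My Secret'
heketi-cli topology load --json=topology-sample.json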
After heketi-cli topology load has run, heketi has roughly done the following on the servers (each point can be checked with the sketch after this list):

- Entering any glusterfs pod and running gluster peer status shows that the peers have all been added to the trusted storage pool (TSP).
- On every node running a gluster pod, a VG has been created automatically; this VG is built from the raw disk device listed in topology-sample.json.
- Each disk device yields one VG, and the PVCs created later are LVs carved out of that VG.
- heketi-cli topology info shows the topology, including each device's ID, the ID of its VG, and its total, used, and free space.
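A quick way to confirm the first two points from the command line; the pod name below is one of this article's glusterfs pods, so substitute one of your own:

# Verification sketch (pod name glusterfs-lrdz7 is specific to this cluster).
kubectl exec -it glusterfs-lrdz7 -- gluster peer status    # peers joined to the trusted pool
kubectl exec -it glusterfs-lrdz7 -- pvs /dev/sdb           # the raw disk is now an LVM PV
kubectl exec -it glusterfs-lrdz7 -- vgs                    # one VG created from /dev/sdb
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret "My Secret" topology info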
Part of the heketi log shows this happening:

[root@k8s-master-01 manifests]# kubectl logs -f deploy-heketi-6c687b4b84-l5b6j
...
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ {"report": [{"pv": [{"pv_name":"/dev/sdb", "pv_uuid":"1UkSIV-RYt1-QBNw-KyAR-Drm5-T9NG-UmO313", "vg_name":"vg_398329cc70361dfd4baa011d811de94a"}]}]} ]: Stderr [  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/centos/root not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/centos/swap not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [ ]: Stderr []
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:44Z | 200 | 93.868µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [  vg_398329cc70361dfd4baa011d811de94a:r/w:772:-1:0:0:0:-1:0:1:1:10350592:4096:2527:0:2527:YCPG9X-b270-1jf2-VwKX-ycpZ-OI9u-7ZidOc
]: Stderr []
[cmdexec] DEBUG 2019/10/23 02:17:44 heketi/executors/cmdexec/device.go:273:cmdexec.(*CmdExecutor).getVgSizeFromNode: /dev/sdb in k8s-node-01 has TotalSize:10350592, FreeSize:10350592, UsedSize:0
[heketi] INFO 2019/10/23 02:17:44 Added device /dev/sdb
[asynchttp] INFO 2019/10/23 02:17:44 Completed job 3d0b6edb0faa67e8efd752397f314a6f in 3m2.694238221s
[negroni] 2019-10-23T02:17:45Z | 204 | 105.23µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[cmdexec] INFO 2019/10/23 02:17:45 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 02:17:45 Adding node k8s-node-02
[negroni] 2019-10-23T02:17:45Z | 202 | 146.998544ms | 10.1.241.99:8080 | POST /nodes
[asynchttp] INFO 2019/10/23 02:17:45 Started job 8da70b6fd6fec1d61c4ba1cd0fe27fe5
[cmdexec] INFO 2019/10/23 02:17:45 Probing: k8s-node-01 -> 192.168.2.12
[negroni] 2019-10-23T02:17:45Z | 200 | 74.577µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:46Z | 200 | 79.893µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [peer probe: success.
]: Stderr []
[cmdexec] INFO 2019/10/23 02:17:46 Setting snapshot limit
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [snapshot config: snap-max-hard-limit for System set successfully
]: Stderr []
[heketi] INFO 2019/10/23 02:17:46 Added node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[asynchttp] INFO 2019/10/23 02:17:46 Completed job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 in 488.404011ms
[negroni] 2019-10-23T02:17:46Z | 303 | 80.712µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[negroni] 2019-10-23T02:17:46Z | 200 | 242.595µs | 10.1.241.99:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[heketi] INFO 2019/10/23 02:17:46 Adding device /dev/sdb to node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T02:17:46Z | 202 | 696.018µs | 10.1.241.99:8080 | POST /devices
[asynchttp] INFO 2019/10/23 02:17:46 Started job 21af2069b74762a5521a46e2b52e7d6a
[negroni] 2019-10-23T02:17:46Z | 200 | 82.354µs | 10.1.241.99:8080 | GET /queue/21af2069b74762a5521a46e2b52e7d6a
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [pvcreate -qq --metadatasize=128M --dataalignment=256K /dev/sdb] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
...
3.4.5 Persist the heketi configuration

The heketi deployed above has no persistent volume configured, so if the heketi pod restarts, its configuration may be lost. Now create a persistent volume for heketi to keep its data. The persistence here uses the dynamic storage provided by GlusterFS itself; other persistence methods could be used instead.

Install device-mapper* on all nodes:

yum install -y device-mapper*

Save the storage configuration to a file and create the resources needed for persistence:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret "My Secret" setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created

Delete the intermediate bootstrap resources:

[root@k8s-master-01 kubernetes]# kubectl delete all,svc,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-6c687b4b84-l5b6j" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-6c687b4b84" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted

Create the persistent heketi:

[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          41m
glusterfs-l2lsv          1/1     Running   0          41m
glusterfs-lrdz7          1/1     Running   0          41m
heketi-68795ccd8-m8x55   1/1     Running   0          32s
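What makes this second heketi persistent is that heketi-deployment.json mounts heketi's database from the heketidbstorage GlusterFS volume that setup-openshift-heketi-storage just created. The fragment below is an abridged sketch of the relevant part of that template as I understand the heketi Kubernetes templates; check the copy in your release for the exact contents.

{
  "spec": {
    "template": {
      "spec": {
        "volumes": [
          {
            "name": "db",
            "glusterfs": {
              "endpoints": "heketi-storage-endpoints",
              "path": "heketidbstorage"
            }
          }
        ]
      }
    }
  }
}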
Check the service of the persistent heketi and re-export the environment variable:

[root@k8s-master-01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.1.45.61   <none>        8080/TCP   2m9s
heketi-storage-endpoints   ClusterIP   10.1.26.73   <none>        1/TCP      4m58s
kubernetes                 ClusterIP   10.1.0.1     <none>        443/TCP    14h
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.45.61:8080
[root@k8s-master-01 kubernetes]# curl http://10.1.45.61:8080/hello
Hello from Heketi

Check the GlusterFS cluster information (see the official documentation for more operations):

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret "My Secret" topology info

Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: b25f4b627cf66279bfe19e8a01e9e85d
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Mount: 192.168.2.11:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.2.12,192.168.2.10
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

        Bricks:
            Id: 3ab6c19b8fe0112575ba04d58573a404
            Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
            Size (GiB): 2
            Node: b6100a5af9b47d8c1f19be0b2b4d8276
            Device: 703e3662cbd8ffb24a6401bb3c3c41fa

            Id: d1fa386f2ec9954f4517431163f67dea
            Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick
            Size (GiB): 2
            Node: 04740cac8d42f56e354c94bdbb7b8e34
            Device: 398329cc70361dfd4baa011d811de94a

            Id: d2b0ae26fa3f0eafba407b637ca0d06b
            Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick
            Size (GiB): 2
            Node: 1b33ad0dba20eaf23b5e3a4845e7cdb4
            Device: 7c791bbb90f710123ba431a7cdde8d0b

    Nodes:

        Node Id: 04740cac8d42f56e354c94bdbb7b8e34
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-01
        Storage Hostnames: 192.168.2.11
        Devices:
            Id:398329cc70361dfd4baa011d811de94a   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:d1fa386f2ec9954f4517431163f67dea   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick

        Node Id: 1b33ad0dba20eaf23b5e3a4845e7cdb4
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-02
        Storage Hostnames: 192.168.2.12
        Devices:
            Id:7c791bbb90f710123ba431a7cdde8d0b   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:d2b0ae26fa3f0eafba407b637ca0d06b   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick

        Node Id: b6100a5af9b47d8c1f19be0b2b4d8276
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-master-01
        Storage Hostnames: 192.168.2.10
        Devices:
            Id:703e3662cbd8ffb24a6401bb3c3c41fa   Name:/dev/sdb   State:online   Size (GiB):9   Used (GiB):2   Free (GiB):7
                Bricks:
                    Id:3ab6c19b8fe0112575ba04d58573a404   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick

4. Create the StorageClass

[root@k8s-master-01 kubernetes]# vim storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true
[root@k8s-master-01 kubernetes]# kubectl apply -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created

Parameter notes:

- reclaimPolicy: Retain — the reclaim policy; the default is Delete. With Retain, deleting the PVC does not delete the PV or the volume and bricks (LVs) created on the backend.
- gidMin / gidMax — the smallest and largest GIDs that may be used.
- volumetype — the volume type and count; a replicated volume is used here, and the count must be greater than 1.
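Putting the admin key in plain text in the StorageClass works, but the kubernetes.io/glusterfs provisioner also accepts a reference to a Kubernetes Secret via secretNamespace/secretName instead of restuserkey. A hedged variant is sketched below; the names gluster-heketi-secretref and heketi-admin-secret are made up for this sketch.

# Sketch: keep the heketi admin key in a Secret rather than in the StorageClass.
apiVersion: v1
kind: Secret
metadata:
  name: heketi-admin-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  key: TXkgU2VjcmV0            # base64 of "My Secret"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-secretref
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true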
5. Test dynamic provisioning through GlusterFS

Create a pod that uses a dynamically provisioned PV; in storageClassName, reference the StorageClass created above, i.e. gluster-heketi:

[root@k8s-master-01 kubernetes]# vim pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: /pv-data
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi

Create the pod and check the resulting PV and PVC:

[root@k8s-master-01 kubernetes]# kubectl apply -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master-01 kubernetes]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
persistentvolume/pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            Retain           Bound    default/pvc-gluster-heketi   gluster-heketi            57s

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/pvc-gluster-heketi   Bound    pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            gluster-heketi   62s
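It can be instructive to look at what the provisioner actually wrote into the PV. The sketch below is hedged: the command uses the PV name from the output above, and the commented fields show the kind of content to expect rather than literal output from this cluster.

# Inspect the dynamically provisioned PV (name taken from kubectl get pv above).
kubectl get pv pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c -o yaml
# Expect a spec.glusterfs section roughly like:
#   glusterfs:
#     endpoints: glusterfs-dynamic-...   # Endpoints object created for this PVC
#     path: vol_...                      # the heketi-created GlusterFS volume
# along with persistentVolumeReclaimPolicy: Retain and storageClassName: gluster-heketi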
6. How Kubernetes creates the PV and PVC through heketi

The PVC asks the StorageClass to create a matching PV; the details can be followed in the logs of the heketi pod.

First, after heketi receives the request it runs an async job and creates three bricks, creating the corresponding directories on the three GlusterFS nodes:

[heketi] INFO 2019/10/23 03:08:36 Allocating brick set #0
[negroni] 2019-10-23T03:08:36Z | 202 | 56.193603ms | 10.1.45.61:8080 | POST /volumes
[asynchttp] INFO 2019/10/23 03:08:36 Started job 3ec932315085609bc54ead6e3f6851e8
[heketi] INFO 2019/10/23 03:08:36 Started async operation: Create Volume
[heketi] INFO 2019/10/23 03:08:36 Trying Create Volume (attempt #1/5)
[heketi] INFO 2019/10/23 03:08:36 Creating brick 289fe032c1f4f9f211480e24c5d74a44
[heketi] INFO 2019/10/23 03:08:36 Creating brick a3172661ba1b849d67b500c93c3dd652
[heketi] INFO 2019/10/23 03:08:36 Creating brick 917e27a9dbc5395ebf08dff8d3401b43
[negroni] 2019-10-23T03:08:36Z | 200 | 72.083µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 1
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2

Then it creates the LVs and adds the automatic mounts:

[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout [meta-data=/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
]: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [awk "BEGIN {print \"/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]

Next it creates the brick directories and sets their ownership and permissions:

[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chown :40000 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[negroni] 2019-10-23T03:08:38Z | 200 | 83.159µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43/brick] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[cmdexec] INFO 2019/10/23 03:08:38 Creating volume vol_08e8447256de2598952dcb240e615d0f replica 3
Finally it creates the corresponding volume:

[asynchttp] INFO 2019/10/23 03:08:41 Completed job 3ec932315085609bc54ead6e3f6851e8 in 5.007631648s
[negroni] 2019-10-23T03:08:41Z | 303 | 78.335µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[negroni] 2019-10-23T03:08:41Z | 200 | 5.751689ms | 10.1.45.61:8080 | GET /volumes/08e8447256de2598952dcb240e615d0f
[negroni] 2019-10-23T03:08:41Z | 200 | 139.05µs | 10.1.45.61:8080 | GET /clusters/1c5ffbd86847e5fc1562ef70c033292e
[negroni] 2019-10-23T03:08:41Z | 200 | 660.249µs | 10.1.45.61:8080 | GET /nodes/04740cac8d42f56e354c94bdbb7b8e34
[negroni] 2019-10-23T03:08:41Z | 200 | 270.334µs | 10.1.45.61:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T03:08:41Z | 200 | 345.528µs | 10.1.45.61:8080 | GET /nodes/b6100a5af9b47d8c1f19be0b2b4d8276
[heketi] INFO 2019/10/23 03:09:39 Starting Node Health Status refresh
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 04740cac8d42f56e354c94bdbb7b8e34 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-02
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 1b33ad0dba20eaf23b5e3a4845e7cdb4 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-master-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node b6100a5af9b47d8c1f19be0b2b4d8276 up=true
[heketi] INFO 2019/10/23 03:09:39 Cleaned 0 nodes from health cache
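The log excerpt shows heketi appending brick mounts to /var/lib/heketi/fstab and carving LVs out of the VG. A hedged way to see the result from inside one of the glusterfs pods; the pod name is one of this article's pods, so replace it with one of yours:

# Verification sketch: what heketi left behind on a node.
kubectl exec -it glusterfs-cqw5d -- cat /var/lib/heketi/fstab       # brick mount entries written by heketi
kubectl exec -it glusterfs-cqw5d -- lvs                             # LVs carved out of the vg_... volume group
kubectl exec -it glusterfs-cqw5d -- sh -c 'df -h | grep heketi'     # mounted brick filesystems
kubectl exec -it glusterfs-cqw5d -- gluster volume info vol_08e8447256de2598952dcb240e615d0f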
7. Test the data

Test whether pods that use this PV can share data: exec into the pod and create a file by hand.

[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          90m
glusterfs-l2lsv          1/1     Running   0          90m
glusterfs-lrdz7          1/1     Running   0          90m
heketi-68795ccd8-m8x55   1/1     Running   0          49m
pod-use-pvc              1/1     Running   0          20m
[root@k8s-master-01 kubernetes]# kubectl exec -it pod-use-pvc /bin/sh
/ # cd /pv-data/
/pv-data # echo "hello world" > a.txt
/pv-data # cat a.txt
hello world

Check the created volumes:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret "My Secret" volume list
Id:08e8447256de2598952dcb240e615d0f    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:vol_08e8447256de2598952dcb240e615d0f
Id:b25f4b627cf66279bfe19e8a01e9e85d    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:heketidbstorage

Mount the volume on the host and look at the data in it (vol_08e8447256de2598952dcb240e615d0f is the volume name):

[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt
[root@k8s-master-01 kubernetes]# ll /mnt/
total 1
-rw-r--r-- 1 root 40000 12 Oct 23 11:29 a.txt
[root@k8s-master-01 kubernetes]# cat /mnt/a.txt
hello world
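When mounting a replicated volume by hand like this, it can be worth pointing the client at the other replicas too, so the mount does not depend on a single server being up. A hedged sketch using the fuse client's backup-volfile-servers option (the same option heketi reported in the Mount Options earlier):

# Optional: a hand mount that can fall back to the other two nodes if 192.168.2.10 is down.
umount /mnt 2>/dev/null
mount -t glusterfs -o backup-volfile-servers=192.168.2.11:192.168.2.12 \
  192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt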
8. Test with a Deployment

Test whether workloads created through a Deployment controller can use the StorageClass normally; create an nginx Deployment:

[root@k8s-master-01 kubernetes]# vim nginx-deployment-gluster.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  selector:
    matchLabels:
      name: nginx
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: /usr/share/nginx/html
        - name: nginx-gfs-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi

Check the resources:

[root@k8s-master-01 kubernetes]# kubectl get pod,pv,pvc | grep nginx
pod/nginx-gfs-7d66cccf76-mkc76   1/1   Running   0   2m45s
pod/nginx-gfs-7d66cccf76-zc8n2   1/1   Running   0   2m45s
persistentvolume/pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-conf   gluster-heketi   2m34s
persistentvolume/pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   Retain   Bound   default/glusterfs-nginx-html   gluster-heketi   2m34s
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi   RWX   gluster-heketi   2m45s
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi   RWX   gluster-heketi   2m45s

Check the mounts inside a pod:

[root@k8s-master-01 kubernetes]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- df -Th
Filesystem                                         Type            Size  Used Avail Use% Mounted on
overlay                                            overlay          44G  3.2G   41G   8% /
tmpfs                                              tmpfs            64M     0   64M   0% /dev
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root                            xfs              44G  3.2G   41G   8% /etc/hosts
shm                                                tmpfs            64M     0   64M   0% /dev/shm
192.168.2.10:vol_adf6fc08c8828fdda27c8aa5ce99b50c  fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9  fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                              tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /proc/scsi
tmpfs                                              tmpfs           2.0G     0  2.0G   0% /sys/firmware

Mount the html volume on the host and create a file:

[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 /mnt/
[root@k8s-master-01 kubernetes]# cd /mnt/
[root@k8s-master-01 mnt]# echo "hello world" > index.html
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- cat /usr/share/nginx/html/index.html
hello world

Scale up the nginx replicas and check that the new pod mounts the volume normally:

[root@k8s-master-01 mnt]# kubectl scale deployment nginx-gfs --replicas=3
deployment.apps/nginx-gfs scaled
[root@k8s-master-01 mnt]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d              1/1     Running   0          129m
glusterfs-l2lsv              1/1     Running   0          129m
glusterfs-lrdz7              1/1     Running   0          129m
heketi-68795ccd8-m8x55       1/1     Running   0          88m
nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          8m55s
nginx-gfs-7d66cccf76-qzqnv   1/1     Running   0          23s
nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          8m55s
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-qzqnv -- cat /usr/share/nginx/html/index.html
hello world

This completes the deployment of heketi + GlusterFS to provide dynamic storage in a Kubernetes cluster.