Table of contents

1. System environment initialization
  1.1 Set hostnames and DNS resolution
  1.2 Time synchronization
  1.3 Configure the apt base and Ceph repositories
  1.4 Disable selinux and the firewall
  1.5 Create the Ceph cluster deployment user cephadmin
  1.6 Distribute SSH keys
2. Ceph deployment
  2.1 Install the Ceph deployment tool
  2.2 Initialize the mon nodes
  2.3 Install the ceph-mon service
    2.3.1 Install ceph-mon on the mon nodes
    2.3.2 Add the ceph-mon service to the cluster
    2.3.3 Verify the mon nodes
  2.4 Distribute the admin keyring
  2.5 Deploy the manager
    2.5.1 Deploy the ceph-mgr nodes
    2.5.2 Verify the ceph-mgr nodes
  2.6 Deploy OSDs
    2.6.1 Initialize the storage nodes
    2.6.2 How OSDs map to disks
    2.6.3 Add OSDs
    2.6.4 Verify the Ceph cluster
  2.7 Test uploading and downloading data: create a pool; upload data; list data; show file info; download, modify, and delete files
3. Ceph RBD in detail
  3.1 RBD architecture
  3.2 Create a storage pool
  3.3 Create an img image
    3.3.1 Create the image
    3.3.2 List image details
    3.3.3 Image features
  3.4 Using RBD from a client
    3.4.1 Install ceph-common on the client
    3.4.2 Synchronize the account authentication file
    3.4.3 Map the image on the client
    3.4.4 Mount and use it on the client
4. CephFS in detail
  4.1 Deploy the MDS service
  4.2 Create the CephFS metadata and data pools
  4.3 Create and verify the CephFS
  4.4 Create a CephFS client account
  4.5 Install the Ceph client
  4.6 Synchronize the authentication files
  4.7 Install ceph-common on the client
  4.8 Mount and use CephFS
    4.8.1 Kernel-space mount
    4.8.2 Mount at boot
  4.9 User-space mount with ceph-fuse
    4.9.1 Install ceph-fuse
    4.9.2 Mount Ceph with ceph-fuse
    4.9.3 Mount at boot
5. Using Ceph from Kubernetes
  5.1 RBD static storage
    5.1.1 Mount RBD via pv/pvc
    5.1.2 Mount RBD directly in a pod
  5.2 RBD dynamic StorageClass
    5.2.1 Create the rbd pool
    5.2.2 Create the admin user secret
    5.2.3 Create the ordinary user's secret
    5.2.4 Create the StorageClass
    5.2.5 Create a PVC backed by the StorageClass
    5.2.6 Verify with a single-instance MySQL pod
  5.3 CephFS static storage
    5.3.1 Mount CephFS via pv/pvc
    5.3.2 Mount CephFS directly in a pod
  5.4 CephFS dynamic StorageClass
  5.5 ceph-csi dynamic storage: Ceph-CSI RBD; Ceph-CSI CephFS
Concepts
  1. How stored data, objects, PGs, PGPs, pools, OSDs, and physical disks relate
  2. Filestore, BlueStore, and the journal
How-tos
  1. [The correct way to remove an OSD](https://www.cnblogs.com/varden/p/16274669.html)
  2. [Reformatting old OSD nodes and adding them to a new Ceph cluster](https://www.cnblogs.com/breg/p/16517598.html)
  3. How to modify Ceph configuration files
Recorded errors
  1. bash: python2: command not found
  2. [ceph_deploy][ERROR ] RuntimeError: AttributeError: module 'platform' has no attribute 'linux_distribution'
  3. apt-cache madison only offers the old 1.5.38 ceph-deploy
  4. RuntimeError: Failed to execute command: /usr/sbin/ceph-volume lvm zap /dev/sdd
  5. The same ceph-volume lvm zap error when wiping/formatting a disk
  6. mons are allowing insecure global_id reclaim
  7. 1 pool(s) do not have an application enabled
  8. clock skew detected on mon.ceph-master02
  9. Old kernels cannot mount Ceph RBD block storage
  10. 1 filesystem is online with fewer MDS than max_mds

Deploying Ceph with ceph-deploy
Before putting Ceph into production, be aware of a very practical problem: Ceph is extremely unfriendly to clients running old kernels, meaning kernels at or below 3.10.0-862 (the stock kernels of CentOS 7.5 and earlier are all below this). Such clients cannot properly use CephFS file storage or RBD block storage. Plan the Ceph version carefully before deploying. Testing showed that with Ceph 16, a client whose kernel is below 3.10.0-862 simply cannot use it, and that covers the large population of un-upgraded CentOS ≤7.5 servers, so RBD served from such a cluster is problematic for them. Moreover, Ceph no longer publishes a version 16 ceph-common package for CentOS, so against a Ceph 16 cluster a typical CentOS 7 client can only run the version 15 ceph-common. It does work, but mixing client and cluster versions carries some risk. For now the recommendation is the latest Ceph 15 release; installing 15 is identical to installing 16 except for the apt repository. Correction: the reasoning above is not accurate. The limitation has nothing to do with which Ceph version you choose; every Ceph version is unfriendly to clients with kernels at or below 3.10.0-862 (CentOS 7.5).
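A quick way to check whether a given client is affected is to compare its kernel version against the 3.10.0-862 threshold. This is a minimal sketch, assuming GNU `sort -V` version-ordering semantics; since kernels at or below 3.10.0-862 are the problem, it compares against 3.10.0-863 so that any -862 build counts as too old:

```shell
# Return success (0) if the given kernel version is new enough for Ceph clients.
# Kernels <= 3.10.0-862 are considered too old, so compare against 3.10.0-863.
kernel_ok() {
  threshold="3.10.0-863"
  ver="$1"
  # sort -V puts the smaller version first; if the kernel sorts below the
  # threshold it is too old.
  lowest=$(printf '%s\n%s\n' "$ver" "$threshold" | sort -V | head -n1)
  [ "$lowest" = "$threshold" ]
}

kernel_ok "$(uname -r)" \
  && echo "client kernel is new enough for CephFS/RBD" \
  || echo "client kernel is too old for CephFS/RBD"
```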
Environment

Ubuntu 18.04
Ceph 16.2.10

Hosts:
ceph-master01: public IP 172.26.156.217, internal IP 10.0.0.217; runs mon, mgr, osd, ceph-deploy
ceph-master02: public IP 172.26.156.218, internal IP 10.0.0.218; runs mon, mgr, osd
ceph-master03: public IP 172.26.156.219, internal IP 10.0.0.219; runs mon, mgr, osd

Server hardware: the model, CPU, memory, and disk size and type must be identical across all nodes; ideally all disks are SSDs.
1. System environment initialization

1.1 Set hostnames and DNS resolution

master01:
hostnamectl set-hostname ceph-master01
vi /etc/hostname
ceph-master01

master02:
hostnamectl set-hostname ceph-master02
vi /etc/hostname
ceph-master02

master03:
hostnamectl set-hostname ceph-master03
vi /etc/hostname
ceph-master03

On every node, vi /etc/hosts:
10.0.0.217 ceph-master01.example.local ceph-master01
10.0.0.218 ceph-master02.example.local ceph-master02
10.0.0.219 ceph-master03.example.local ceph-master03

1.2 Time synchronization
Run on all servers:
# set the timezone
timedatectl set-timezone Asia/Shanghai

# time synchronization
root@ubuntu:~# apt install ntpdate
root@ubuntu:~# ntpdate ntp.aliyun.com
 1 Sep 20:54:39 ntpdate[9120]: adjust time server 203.107.6.88 offset 0.003441 sec
root@ubuntu:~# crontab -e
crontab: installing new crontab
root@ubuntu:~# crontab -l
* * * * * ntpdate ntp.aliyun.com

1.3 Configure the apt base and Ceph repositories
Run the following on all servers to switch the sources automatically:
# base repositories
sed -i 's@http://.*archive.ubuntu.com@http://mirrors.tuna.tsinghua.edu.cn@g' /etc/apt/sources.list
sed -i 's@http://.*security.ubuntu.com@http://mirrors.tuna.tsinghua.edu.cn@g' /etc/apt/sources.list
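A mistyped sed pattern can wreck /etc/apt/sources.list, so it is worth dry-running the substitution on a scratch copy first. A small sketch; the sample entries below are assumptions mimicking a stock bionic sources.list, not the real file:

```shell
# Dry-run the mirror substitution on a scratch file before touching the real
# /etc/apt/sources.list (sample entries mimic a stock Ubuntu 18.04 file).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
deb http://cn.archive.ubuntu.com/ubuntu bionic main restricted
deb http://security.ubuntu.com/ubuntu bionic-security main restricted
EOF
# Using '@' as the sed delimiter avoids having to escape the slashes in URLs.
sed -i 's@http://.*archive.ubuntu.com@http://mirrors.tuna.tsinghua.edu.cn@g' "$tmp"
sed -i 's@http://.*security.ubuntu.com@http://mirrors.tuna.tsinghua.edu.cn@g' "$tmp"
cat "$tmp"
```

If the printed lines all point at the mirror, the same two sed commands are safe to run against the real file.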
# ceph repositories
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" >> /etc/apt/sources.list.d/ceph.list
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic main" >> /etc/apt/sources.list.d/ceph.list

# import the ceph repository key, otherwise the repository cannot be used
wget -q -O- https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc | sudo apt-key add -

# when the ceph repository is served over https, install these first, otherwise the https source cannot be used
apt install -y apt-transport-https ca-certificates curl software-properties-common
apt update

1.4 Disable selinux and the firewall
# ufw disable

1.5 Create the Ceph cluster deployment user cephadmin
It is recommended to deploy and run the Ceph cluster as a dedicated ordinary user; the user only needs to be able to run privileged commands via sudo non-interactively. Newer versions of ceph-deploy accept any user that can run sudo, root included, but an ordinary user is still preferred. After installation the cluster automatically creates a ceph user (the Ceph services, such as ceph-osd, run as that user by default), so deploy and manage the cluster as some other ordinary user, for example cephuser or cephadmin.
Create the cephadmin user on the ceph-deploy node and on all storage, mon, and mgr nodes.
groupadd -r -g 2088 cephadmin
useradd -r -m -s /bin/bash -u 2088 -g 2088 cephadmin
echo "cephadmin:chinadci888." | chpasswd

Allow the cephadmin user to run privileged commands via sudo on every server:

~# echo "cephadmin ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

1.6 Distribute SSH keys
The deploy node needs passwordless SSH to all servers (the mon, mgr, and osd nodes). This article uses only three servers with mon, mgr, and osd co-located, so keys are distributed to just those three.
master01 (deploy node):
su - cephadmin
ssh-keygen
ssh-copy-id cephadmin@ceph-master01
ssh-copy-id cephadmin@ceph-master02
ssh-copy-id cephadmin@ceph-master03

2. Ceph deployment
2.1 Install the Ceph deployment tool
cephadmin@ceph-master01:~$ apt-cache madison ceph-deploy
ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic/main amd64 Packages
ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic/main i386 Packages
ceph-deploy | 1.5.38-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe i386 Packages
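Two ceph-deploy versions show up here: 2.0.1 from the Ceph mirror and the ancient 1.5.38 from Ubuntu universe (which fails on modern systems; see the recorded errors at the end). A hedged sketch of mechanically picking the mirror version out of the `apt-cache madison` output with awk; the three-column `pkg | version | source` layout is taken from the output above, and `apt install pkg=version` is standard apt pinning syntax:

```shell
# Pick the ceph-deploy version that comes from the Ceph mirror rather than
# the old 1.5.x package in Ubuntu universe. The sample output is pasted from
# above; in practice pipe `apt-cache madison ceph-deploy` straight into awk.
madison='ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic/main amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages'
# Split on " | " and print the version whose source URL contains "/ceph/".
want=$(printf '%s\n' "$madison" | awk -F' \\| ' '$3 ~ /\/ceph\//{print $2; exit}')
echo "$want"
# If apt would otherwise resolve to the wrong candidate, pin it explicitly:
# sudo apt install ceph-deploy="$want"
```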
cephadmin@ceph-master01:~$ sudo apt install ceph-deploy

2.2 Initialize the mon nodes
On Ubuntu, Python 2 must be installed separately on each server (required on all mon, mgr, and osd nodes):
cephadmin@ceph-master01:~$ sudo apt install python2.7 -y
cephadmin@ceph-master01:~$ sudo ln -sv /usr/bin/python2.7 /usr/bin/python2

ceph-master01:
ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 172.26.0.0/16 ceph-master01 ceph-master02 ceph-master03
--cluster-network: the network for internal cluster traffic
--public-network: the network used by clients/business traffic; keeping it separate shields client I/O from cluster replication traffic
~$ mkdir /etc/ceph-cluster
~$ sudo chown cephadmin:cephadmin /etc/ceph-cluster
~$ cd /etc/ceph-cluster/
cephadmin@ceph-master01:/etc/ceph-cluster$ ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 172.26.0.0/16 ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 172.26.0.0/16 ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : &lt;ceph_deploy.conf.cephdeploy.Conf instance at 0x7efd0a772e10&gt;
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph-master01', 'ceph-master02', 'ceph-master03']
[ceph_deploy.cli][INFO ] func : &lt;function new at 0x7efd07a2bbd0&gt;
[ceph_deploy.cli][INFO ] public_network : 172.26.0.0/16
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : 10.0.0.0/24
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] find the location of an executable
[ceph-master01][INFO ] Running command: sudo /bin/ip link show
[ceph-master01][INFO ] Running command: sudo /bin/ip addr show
[ceph-master01][DEBUG ] IP addresses found: [u'172.26.156.217', u'10.0.0.217']
[ceph_deploy.new][DEBUG ] Resolving host ceph-master01
[ceph_deploy.new][DEBUG ] Monitor ceph-master01 at 172.26.156.217
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-master02][DEBUG ] connected to host: ceph-master01
[ceph-master02][INFO ] Running command: ssh -CT -o BatchMode=yes ceph-master02
[ceph-master02][DEBUG ] connection detected need for sudo
[ceph-master02][DEBUG ] connected to host: ceph-master02
[ceph-master02][DEBUG ] detect platform information from remote host
[ceph-master02][DEBUG ] detect machine type
[ceph-master02][DEBUG ] find the location of an executable
[ceph-master02][INFO ] Running command: sudo /bin/ip link show
[ceph-master02][INFO ] Running command: sudo /bin/ip addr show
[ceph-master02][DEBUG ] IP addresses found: [u'10.0.0.218', u'172.26.156.218']
[ceph_deploy.new][DEBUG ] Resolving host ceph-master02
[ceph_deploy.new][DEBUG ] Monitor ceph-master02 at 172.26.156.218
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-master03][DEBUG ] connected to host: ceph-master01
[ceph-master03][INFO ] Running command: ssh -CT -o BatchMode=yes ceph-master03
[ceph-master03][DEBUG ] connection detected need for sudo
[ceph-master03][DEBUG ] connected to host: ceph-master03
[ceph-master03][DEBUG ] detect platform information from remote host
[ceph-master03][DEBUG ] detect machine type
[ceph-master03][DEBUG ] find the location of an executable
[ceph-master03][INFO ] Running command: sudo /bin/ip link show
[ceph-master03][INFO ] Running command: sudo /bin/ip addr show
[ceph-master03][DEBUG ] IP addresses found: [u'172.26.156.219', u'10.0.0.219']
[ceph_deploy.new][DEBUG ] Resolving host ceph-master03
[ceph_deploy.new][DEBUG ] Monitor ceph-master03 at 172.26.156.219
[ceph_deploy.new][DEBUG ] Monitor initial members are [ceph-master01, ceph-master02, ceph-master03]
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'172.26.156.217', u'172.26.156.218', u'172.26.156.219']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

cephadmin@ceph-master01:/etc/ceph-cluster$ ll
total 36
drwxr-xr-x 2 cephadmin cephadmin 4096 Sep 2 16:50 ./
drwxr-xr-x 91 root root 4096 Sep 2 16:22 ../
-rw-rw-r-- 1 cephadmin cephadmin 326 Sep 2 16:50 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin 17603 Sep 2 16:50 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin 73 Sep 2 16:50 ceph.mon.keyring

This step must be executed, otherwise the subsequent cluster installation steps will fail.
cephadmin@ceph-master01:/etc/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-master01 ceph-master02 ceph-master03
--no-adjust-repos # do not modify the existing apt sources (by default the official repositories would be configured)
--nogpgcheck # skip GPG verification

cephadmin@ceph-master01:/etc/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : &lt;ceph_deploy.conf.cephdeploy.Conf instance at 0x7f59e4913e60&gt;
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : &lt;function install at 0x7f59e51c5b50&gt;
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['ceph-master01', 'ceph-master02', 'ceph-master03']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : True
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-master01 ...
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-master01][INFO ] installing Ceph on ceph-master01
[ceph-master01][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master01][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[ceph-master01][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease
[ceph-master01][DEBUG ] Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master01][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master01][DEBUG ] Hit:5 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master01][DEBUG ] Hit:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master01][DEBUG ] Reading package lists...
[ceph-master01][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-master01][DEBUG ] Reading package lists...
[ceph-master01][DEBUG ] Building dependency tree...
[ceph-master01][DEBUG ] Reading state information...
[ceph-master01][DEBUG ] ca-certificates is already the newest version (20211016~18.04.1).
[ceph-master01][DEBUG ] apt-transport-https is already the newest version (1.6.14).
[ceph-master01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master01][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master01][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[ceph-master01][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease
[ceph-master01][DEBUG ] Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master01][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master01][DEBUG ] Hit:5 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master01][DEBUG ] Hit:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master01][DEBUG ] Reading package lists...
[ceph-master01][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-master01][DEBUG ] Reading package lists...
[ceph-master01][DEBUG ] Building dependency tree...
[ceph-master01][DEBUG ] Reading state information...
[ceph-master01][DEBUG ] ceph is already the newest version (16.2.10-1bionic).
[ceph-master01][DEBUG ] ceph-mds is already the newest version (16.2.10-1bionic).
[ceph-master01][DEBUG ] ceph-mon is already the newest version (16.2.10-1bionic).
[ceph-master01][DEBUG ] ceph-osd is already the newest version (16.2.10-1bionic).
[ceph-master01][DEBUG ] radosgw is already the newest version (16.2.10-1bionic).
[ceph-master01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master01][INFO ] Running command: sudo ceph --version
[ceph-master01][DEBUG ] ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-master02 ...
[ceph-master02][DEBUG ] connection detected need for sudo
[ceph-master02][DEBUG ] connected to host: ceph-master02
[ceph-master02][DEBUG ] detect platform information from remote host
[ceph-master02][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-master02][INFO ] installing Ceph on ceph-master02
[ceph-master02][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master02][DEBUG ] Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master02][DEBUG ] Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master02][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease [8,572 B]
[ceph-master02][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master02][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease [8,560 B]
[ceph-master02][DEBUG ] Hit:6 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master02][DEBUG ] Fetched 17.1 kB in 1s (13.1 kB/s)
[ceph-master02][DEBUG ] Reading package lists...
[ceph-master02][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-master02][DEBUG ] Reading package lists...
[ceph-master02][DEBUG ] Building dependency tree...
[ceph-master02][DEBUG ] Reading state information...
[ceph-master02][DEBUG ] ca-certificates is already the newest version (20211016~18.04.1).
[ceph-master02][DEBUG ] apt-transport-https is already the newest version (1.6.14).
[ceph-master02][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master02][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master02][DEBUG ] Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master02][DEBUG ] Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master02][DEBUG ] Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master02][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master02][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease [8,572 B]
[ceph-master02][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease [8,560 B]
[ceph-master02][DEBUG ] Fetched 17.1 kB in 1s (12.5 kB/s)
[ceph-master02][DEBUG ] Reading package lists...
[ceph-master02][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-master02][DEBUG ] Reading package lists...
[ceph-master02][DEBUG ] Building dependency tree...
[ceph-master02][DEBUG ] Reading state information...
[ceph-master02][DEBUG ] ceph is already the newest version (16.2.10-1bionic).
[ceph-master02][DEBUG ] ceph-mds is already the newest version (16.2.10-1bionic).
[ceph-master02][DEBUG ] ceph-mon is already the newest version (16.2.10-1bionic).
[ceph-master02][DEBUG ] ceph-osd is already the newest version (16.2.10-1bionic).
[ceph-master02][DEBUG ] radosgw is already the newest version (16.2.10-1bionic).
[ceph-master02][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master02][INFO ] Running command: sudo ceph --version
[ceph-master02][DEBUG ] ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-master03 ...
[ceph-master03][DEBUG ] connection detected need for sudo
[ceph-master03][DEBUG ] connected to host: ceph-master03
[ceph-master03][DEBUG ] detect platform information from remote host
[ceph-master03][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-master03][INFO ] installing Ceph on ceph-master03
[ceph-master03][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master03][DEBUG ] Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master03][DEBUG ] Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master03][DEBUG ] Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master03][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master03][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease [8,572 B]
[ceph-master03][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease [8,560 B]
[ceph-master03][DEBUG ] Fetched 17.1 kB in 2s (8,636 B/s)
[ceph-master03][DEBUG ] Reading package lists...
[ceph-master03][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-master03][DEBUG ] Reading package lists...
[ceph-master03][DEBUG ] Building dependency tree...
[ceph-master03][DEBUG ] Reading state information...
[ceph-master03][DEBUG ] ca-certificates is already the newest version (20211016~18.04.1).
[ceph-master03][DEBUG ] apt-transport-https is already the newest version (1.6.14).
[ceph-master03][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master03][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q update
[ceph-master03][DEBUG ] Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-master03][DEBUG ] Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-master03][DEBUG ] Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-master03][DEBUG ] Hit:4 http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-master03][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease [8,572 B]
[ceph-master03][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic InRelease [8,560 B]
[ceph-master03][DEBUG ] Fetched 17.1 kB in 1s (14.3 kB/s)
[ceph-master03][DEBUG ] Reading package lists...
[ceph-master03][INFO ] Running command: sudo env DEBIAN_FRONTENDnoninteractive DEBIAN_PRIORITYcritical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-master03][DEBUG ] Reading package lists...
[ceph-master03][DEBUG ] Building dependency tree...
[ceph-master03][DEBUG ] Reading state information...
[ceph-master03][DEBUG ] ceph is already the newest version (16.2.10-1bionic).
[ceph-master03][DEBUG ] ceph-mds is already the newest version (16.2.10-1bionic).
[ceph-master03][DEBUG ] ceph-mon is already the newest version (16.2.10-1bionic).
[ceph-master03][DEBUG ] ceph-osd is already the newest version (16.2.10-1bionic).
[ceph-master03][DEBUG ] radosgw is already the newest version (16.2.10-1bionic).
[ceph-master03][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.
[ceph-master03][INFO ] Running command: sudo ceph --version
[ceph-master03][DEBUG ] ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)

This process installs ceph-base, ceph-common, and the other component packages on the specified ceph nodes serially, one server after another.

2.3 Install the ceph-mon service
2.3.1 Install ceph-mon on the mon nodes
cephadmin@ceph-master01:/etc/ceph-cluster$ apt-cache madison ceph-mon
ceph-mon | 16.2.10-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-mon | 14.2.22-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus bionic/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.10 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 Packages
ceph-mon | 12.2.13-0ubuntu0.18.04.10 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security/main amd64 Packages
ceph-mon | 12.2.4-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 Packages
cephadmin@ceph-master01:/etc/ceph-cluster$
root@ceph-master01:~# apt install ceph-mon
root@ceph-master02:~# apt install ceph-mon
root@ceph-master03:~# apt install ceph-mon
# these were probably installed already by the earlier ceph-deploy install step

2.3.2 Add the ceph-mon service to the cluster
cephadmin@ceph-master01:/etc/ceph-cluster$ pwd
/etc/ceph-cluster
cephadmin@ceph-master01:/etc/ceph-cluster$ cat ceph.conf
[global]
fsid = f69afe6f-e559-4df7-998a-c5dc3e300209
public_network = 172.26.0.0/16
cluster_network = 10.0.0.0/24
mon_initial_members = ceph-master01, ceph-master02, ceph-master03
mon_host = 172.26.156.217,172.26.156.218,172.26.156.219   # the mon service is added to these nodes via this configuration
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
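The monitor addresses the cluster will use come straight from these ceph.conf keys, so it is worth sanity-checking them mechanically before running `mon create-initial`. A small sketch; the inlined sample mirrors the generated file above, and the awk field split assumes the usual `key = value` layout:

```shell
# Extract mon_host from a ceph.conf-style file. The sample below mirrors the
# generated /etc/ceph-cluster/ceph.conf; point `conf` at the real file in use.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
fsid = f69afe6f-e559-4df7-998a-c5dc3e300209
public_network = 172.26.0.0/16
cluster_network = 10.0.0.0/24
mon_initial_members = ceph-master01, ceph-master02, ceph-master03
mon_host = 172.26.156.217,172.26.156.218,172.26.156.219
EOF
# Split each line on " = " and print the value of the mon_host key.
mon_host=$(awk -F' *= *' '$1 == "mon_host" {print $2}' "$conf")
echo "monitors: $mon_host"
```

Each address printed should fall inside public_network; a mon listed on the cluster network is a common misconfiguration.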
cephadmin@ceph-master01:/etc/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : &lt;ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe450df12d0&gt;
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : &lt;function mon at 0x7fe450dcebd0&gt;
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-master01 ...
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-master01][DEBUG ] determining if provided host has same hostname in remote
[ceph-master01][DEBUG ] get remote short hostname
[ceph-master01][DEBUG ] deploying mon to ceph-master01
[ceph-master01][DEBUG ] get remote short hostname
[ceph-master01][DEBUG ] remote hostname: ceph-master01
[ceph-master01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master01][DEBUG ] create the mon path if it does not exist
[ceph-master01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-master01/done
[ceph-master01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-master01/done
[ceph-master01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-master01.mon.keyring
[ceph-master01][DEBUG ] create the monitor keyring file
[ceph-master01][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-master01 --keyring /var/lib/ceph/tmp/ceph-ceph-master01.mon.keyring --setuser 64045 --setgroup 64045
[ceph-master01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-master01.mon.keyring
[ceph-master01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-master01][DEBUG ] create the init path if it does not exist
[ceph-master01][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-master01][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-master01
[ceph-master01][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-master01.service → /lib/systemd/system/ceph-mon@.service.
[ceph-master01][INFO ] Running command: sudo systemctl start ceph-mon@ceph-master01
[ceph-master01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph-master01][DEBUG ] ********************************************************************************
[ceph-master01][DEBUG ] status for monitor: mon.ceph-master01
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] election_epoch: 0,
[ceph-master01][DEBUG ] extra_probe_peers: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addrvec: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.218:3300,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v2
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.218:6789,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v1
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addrvec: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.219:3300,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v2
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.219:6789,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v1
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ],
[ceph-master01][DEBUG ] feature_map: {
[ceph-master01][DEBUG ] mon: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] features: 0x3f01cfb9fffdffff,
[ceph-master01][DEBUG ] num: 1,
[ceph-master01][DEBUG ] release: luminous
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] features: {
[ceph-master01][DEBUG ] quorum_con: 0,
[ceph-master01][DEBUG ] quorum_mon: [],
[ceph-master01][DEBUG ] required_con: 0,
[ceph-master01][DEBUG ] required_mon: []
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] monmap: {
[ceph-master01][DEBUG ] created: 2022-09-05T02:52:15.915768Z,
[ceph-master01][DEBUG ] disallowed_leaders: : ,
[ceph-master01][DEBUG ] election_strategy: 1,
[ceph-master01][DEBUG ] epoch: 0,
[ceph-master01][DEBUG ] features: {
[ceph-master01][DEBUG ] optional: [],
[ceph-master01][DEBUG ] persistent: []
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] fsid: f69afe6f-e559-4df7-998a-c5dc3e300209,
[ceph-master01][DEBUG ] min_mon_release: 0,
[ceph-master01][DEBUG ] min_mon_release_name: unknown,
[ceph-master01][DEBUG ] modified: 2022-09-05T02:52:15.915768Z,
[ceph-master01][DEBUG ] mons: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.217:6789/0,
[ceph-master01][DEBUG ] crush_location: {},
[ceph-master01][DEBUG ] name: ceph-master01,
[ceph-master01][DEBUG ] priority: 0,
[ceph-master01][DEBUG ] public_addr: 172.26.156.217:6789/0,
[ceph-master01][DEBUG ] public_addrs: {
[ceph-master01][DEBUG ] addrvec: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.217:3300,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v2
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 172.26.156.217:6789,
[ceph-master01][DEBUG ] nonce: 0,
[ceph-master01][DEBUG ] type: v1
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] rank: 0,
[ceph-master01][DEBUG ] weight: 0
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 0.0.0.0:0/1,
[ceph-master01][DEBUG ] crush_location: {},
[ceph-master01][DEBUG ] name: ceph-master02,
[ceph-master01][DEBUG ] priority: 0,
[ceph-master01][DEBUG ] public_addr: 0.0.0.0:0/1,
[ceph-master01][DEBUG ] public_addrs: {
[ceph-master01][DEBUG ] addrvec: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 0.0.0.0:0,
[ceph-master01][DEBUG ] nonce: 1,
[ceph-master01][DEBUG ] type: v1
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] rank: 1,
[ceph-master01][DEBUG ] weight: 0
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 0.0.0.0:0/2,
[ceph-master01][DEBUG ] crush_location: {},
[ceph-master01][DEBUG ] name: ceph-master03,
[ceph-master01][DEBUG ] priority: 0,
[ceph-master01][DEBUG ] public_addr: 0.0.0.0:0/2,
[ceph-master01][DEBUG ] public_addrs: {
[ceph-master01][DEBUG ] addrvec: [
[ceph-master01][DEBUG ] {
[ceph-master01][DEBUG ] addr: 0.0.0.0:0,
[ceph-master01][DEBUG ] nonce: 2,
[ceph-master01][DEBUG ] type: v1
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ]
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] rank: 2,
[ceph-master01][DEBUG ] weight: 0
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ],
[ceph-master01][DEBUG ] stretch_mode: false,
[ceph-master01][DEBUG ] tiebreaker_mon:
[ceph-master01][DEBUG ] },
[ceph-master01][DEBUG ] name: ceph-master01,
[ceph-master01][DEBUG ] outside_quorum: [
[ceph-master01][DEBUG ] ceph-master01
[ceph-master01][DEBUG ] ],
[ceph-master01][DEBUG ] quorum: [],
[ceph-master01][DEBUG ] rank: 0,
[ceph-master01][DEBUG ] state: probing,
[ceph-master01][DEBUG ] stretch_mode: false,
[ceph-master01][DEBUG ] sync_provider: []
[ceph-master01][DEBUG ] }
[ceph-master01][DEBUG ] ********************************************************************************
[ceph-master01][INFO ] monitor: mon.ceph-master01 is running
[ceph-master01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-master02 ...
[ceph-master02][DEBUG ] connection detected need for sudo
[ceph-master02][DEBUG ] connected to host: ceph-master02
[ceph-master02][DEBUG ] detect platform information from remote host
[ceph-master02][DEBUG ] detect machine type
[ceph-master02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-master02][DEBUG ] determining if provided host has same hostname in remote
[ceph-master02][DEBUG ] get remote short hostname
[ceph-master02][DEBUG ] deploying mon to ceph-master02
[ceph-master02][DEBUG ] get remote short hostname
[ceph-master02][DEBUG ] remote hostname: ceph-master02
[ceph-master02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master02][DEBUG ] create the mon path if it does not exist
[ceph-master02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-master02/done
[ceph-master02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-master02/done
[ceph-master02][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-master02.mon.keyring
[ceph-master02][DEBUG ] create the monitor keyring file
[ceph-master02][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-master02 --keyring /var/lib/ceph/tmp/ceph-ceph-master02.mon.keyring --setuser 64045 --setgroup 64045
[ceph-master02][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-master02.mon.keyring
[ceph-master02][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-master02][DEBUG ] create the init path if it does not exist
[ceph-master02][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-master02][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-master02
[ceph-master02][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-master02.service → /lib/systemd/system/ceph-mon@.service.
[ceph-master02][INFO ] Running command: sudo systemctl start ceph-mon@ceph-master02
[ceph-master02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master02.asok mon_status
[ceph-master02][DEBUG ] ********************************************************************************
[ceph-master02][DEBUG ] status for monitor: mon.ceph-master02
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] election_epoch: 1,
[ceph-master02][DEBUG ] extra_probe_peers: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addrvec: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.217:3300,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v2
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.217:6789,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v1
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addrvec: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.219:3300,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v2
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.219:6789,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v1
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ],
[ceph-master02][DEBUG ] feature_map: {
[ceph-master02][DEBUG ] mon: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] features: 0x3f01cfb9fffdffff,
[ceph-master02][DEBUG ] num: 1,
[ceph-master02][DEBUG ] release: luminous
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] features: {
[ceph-master02][DEBUG ] quorum_con: 0,
[ceph-master02][DEBUG ] quorum_mon: [],
[ceph-master02][DEBUG ] required_con: 0,
[ceph-master02][DEBUG ] required_mon: []
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] monmap: {
[ceph-master02][DEBUG ] created: 2022-09-05T02:52:20.691459Z,
[ceph-master02][DEBUG ] disallowed_leaders: : ,
[ceph-master02][DEBUG ] election_strategy: 1,
[ceph-master02][DEBUG ] epoch: 0,
[ceph-master02][DEBUG ] features: {
[ceph-master02][DEBUG ] optional: [],
[ceph-master02][DEBUG ] persistent: []
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] fsid: f69afe6f-e559-4df7-998a-c5dc3e300209,
[ceph-master02][DEBUG ] min_mon_release: 0,
[ceph-master02][DEBUG ] min_mon_release_name: unknown,
[ceph-master02][DEBUG ] modified: 2022-09-05T02:52:20.691459Z,
[ceph-master02][DEBUG ] mons: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.217:6789/0,
[ceph-master02][DEBUG ] crush_location: {},
[ceph-master02][DEBUG ] name: ceph-master01,
[ceph-master02][DEBUG ] priority: 0,
[ceph-master02][DEBUG ] public_addr: 172.26.156.217:6789/0,
[ceph-master02][DEBUG ] public_addrs: {
[ceph-master02][DEBUG ] addrvec: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.217:3300,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v2
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.217:6789,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v1
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] rank: 0,
[ceph-master02][DEBUG ] weight: 0
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.218:6789/0,
[ceph-master02][DEBUG ] crush_location: {},
[ceph-master02][DEBUG ] name: ceph-master02,
[ceph-master02][DEBUG ] priority: 0,
[ceph-master02][DEBUG ] public_addr: 172.26.156.218:6789/0,
[ceph-master02][DEBUG ] public_addrs: {
[ceph-master02][DEBUG ] addrvec: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.218:3300,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v2
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 172.26.156.218:6789,
[ceph-master02][DEBUG ] nonce: 0,
[ceph-master02][DEBUG ] type: v1
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] rank: 1,
[ceph-master02][DEBUG ] weight: 0
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 0.0.0.0:0/2,
[ceph-master02][DEBUG ] crush_location: {},
[ceph-master02][DEBUG ] name: ceph-master03,
[ceph-master02][DEBUG ] priority: 0,
[ceph-master02][DEBUG ] public_addr: 0.0.0.0:0/2,
[ceph-master02][DEBUG ] public_addrs: {
[ceph-master02][DEBUG ] addrvec: [
[ceph-master02][DEBUG ] {
[ceph-master02][DEBUG ] addr: 0.0.0.0:0,
[ceph-master02][DEBUG ] nonce: 2,
[ceph-master02][DEBUG ] type: v1
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ]
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] rank: 2,
[ceph-master02][DEBUG ] weight: 0
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ],
[ceph-master02][DEBUG ] stretch_mode: false,
[ceph-master02][DEBUG ] tiebreaker_mon:
[ceph-master02][DEBUG ] },
[ceph-master02][DEBUG ] name: ceph-master02,
[ceph-master02][DEBUG ] outside_quorum: [],
[ceph-master02][DEBUG ] quorum: [],
[ceph-master02][DEBUG ] rank: 1,
[ceph-master02][DEBUG ] state: electing,
[ceph-master02][DEBUG ] stretch_mode: false,
[ceph-master02][DEBUG ] sync_provider: []
[ceph-master02][DEBUG ] }
[ceph-master02][DEBUG ] ********************************************************************************
[ceph-master02][INFO ] monitor: mon.ceph-master02 is running
[ceph-master02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master02.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-master03 ...
[ceph-master03][DEBUG ] connection detected need for sudo
[ceph-master03][DEBUG ] connected to host: ceph-master03
[ceph-master03][DEBUG ] detect platform information from remote host
[ceph-master03][DEBUG ] detect machine type
[ceph-master03][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-master03][DEBUG ] determining if provided host has same hostname in remote
[ceph-master03][DEBUG ] get remote short hostname
[ceph-master03][DEBUG ] deploying mon to ceph-master03
[ceph-master03][DEBUG ] get remote short hostname
[ceph-master03][DEBUG ] remote hostname: ceph-master03
[ceph-master03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master03][DEBUG ] create the mon path if it does not exist
[ceph-master03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-master03/done
[ceph-master03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-master03/done
[ceph-master03][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-master03.mon.keyring
[ceph-master03][DEBUG ] create the monitor keyring file
[ceph-master03][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-master03 --keyring /var/lib/ceph/tmp/ceph-ceph-master03.mon.keyring --setuser 64045 --setgroup 64045
[ceph-master03][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-master03.mon.keyring
[ceph-master03][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-master03][DEBUG ] create the init path if it does not exist
[ceph-master03][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-master03][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-master03
[ceph-master03][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-master03.service → /lib/systemd/system/ceph-mon@.service.
[ceph-master03][INFO ] Running command: sudo systemctl start ceph-mon@ceph-master03
[ceph-master03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master03.asok mon_status
[ceph-master03][DEBUG ] ********************************************************************************
[ceph-master03][DEBUG ] status for monitor: mon.ceph-master03
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] election_epoch: 0,
[ceph-master03][DEBUG ] extra_probe_peers: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addrvec: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.217:3300,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v2
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.217:6789,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v1
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addrvec: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.218:3300,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v2
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.218:6789,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v1
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ],
[ceph-master03][DEBUG ] feature_map: {
[ceph-master03][DEBUG ] mon: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] features: 0x3f01cfb9fffdffff,
[ceph-master03][DEBUG ] num: 1,
[ceph-master03][DEBUG ] release: luminous
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] features: {
[ceph-master03][DEBUG ] quorum_con: 0,
[ceph-master03][DEBUG ] quorum_mon: [],
[ceph-master03][DEBUG ] required_con: 0,
[ceph-master03][DEBUG ] required_mon: []
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] monmap: {
[ceph-master03][DEBUG ] created: 2022-09-05T02:52:25.483539Z,
[ceph-master03][DEBUG ] disallowed_leaders: : ,
[ceph-master03][DEBUG ] election_strategy: 1,
[ceph-master03][DEBUG ] epoch: 0,
[ceph-master03][DEBUG ] features: {
[ceph-master03][DEBUG ] optional: [],
[ceph-master03][DEBUG ] persistent: []
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] fsid: f69afe6f-e559-4df7-998a-c5dc3e300209,
[ceph-master03][DEBUG ] min_mon_release: 0,
[ceph-master03][DEBUG ] min_mon_release_name: unknown,
[ceph-master03][DEBUG ] modified: 2022-09-05T02:52:25.483539Z,
[ceph-master03][DEBUG ] mons: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.219:6789/0,
[ceph-master03][DEBUG ] crush_location: {},
[ceph-master03][DEBUG ] name: ceph-master03,
[ceph-master03][DEBUG ] priority: 0,
[ceph-master03][DEBUG ] public_addr: 172.26.156.219:6789/0,
[ceph-master03][DEBUG ] public_addrs: {
[ceph-master03][DEBUG ] addrvec: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.219:3300,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v2
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 172.26.156.219:6789,
[ceph-master03][DEBUG ] nonce: 0,
[ceph-master03][DEBUG ] type: v1
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] rank: 0,
[ceph-master03][DEBUG ] weight: 0
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 0.0.0.0:0/1,
[ceph-master03][DEBUG ] crush_location: {},
[ceph-master03][DEBUG ] name: ceph-master01,
[ceph-master03][DEBUG ] priority: 0,
[ceph-master03][DEBUG ] public_addr: 0.0.0.0:0/1,
[ceph-master03][DEBUG ] public_addrs: {
[ceph-master03][DEBUG ] addrvec: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 0.0.0.0:0,
[ceph-master03][DEBUG ] nonce: 1,
[ceph-master03][DEBUG ] type: v1
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] rank: 1,
[ceph-master03][DEBUG ] weight: 0
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 0.0.0.0:0/2,
[ceph-master03][DEBUG ] crush_location: {},
[ceph-master03][DEBUG ] name: ceph-master02,
[ceph-master03][DEBUG ] priority: 0,
[ceph-master03][DEBUG ] public_addr: 0.0.0.0:0/2,
[ceph-master03][DEBUG ] public_addrs: {
[ceph-master03][DEBUG ] addrvec: [
[ceph-master03][DEBUG ] {
[ceph-master03][DEBUG ] addr: 0.0.0.0:0,
[ceph-master03][DEBUG ] nonce: 2,
[ceph-master03][DEBUG ] type: v1
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ]
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] rank: 2,
[ceph-master03][DEBUG ] weight: 0
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ],
[ceph-master03][DEBUG ] stretch_mode: false,
[ceph-master03][DEBUG ] tiebreaker_mon:
[ceph-master03][DEBUG ] },
[ceph-master03][DEBUG ] name: ceph-master03,
[ceph-master03][DEBUG ] outside_quorum: [
[ceph-master03][DEBUG ] ceph-master03
[ceph-master03][DEBUG ] ],
[ceph-master03][DEBUG ] quorum: [],
[ceph-master03][DEBUG ] rank: 0,
[ceph-master03][DEBUG ] state: probing,
[ceph-master03][DEBUG ] stretch_mode: false,
[ceph-master03][DEBUG ] sync_provider: []
[ceph-master03][DEBUG ] }
[ceph-master03][DEBUG ] ********************************************************************************
[ceph-master03][INFO ] monitor: mon.ceph-master03 is running
[ceph-master03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master03.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-master01
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] find the location of an executable
[ceph-master01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-master01 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[ceph-master01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph_deploy.mon][WARNIN] mon.ceph-master01 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[ceph-master01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-master01 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-master02
[ceph-master02][DEBUG ] connection detected need for sudo
[ceph-master02][DEBUG ] connected to host: ceph-master02
[ceph-master02][DEBUG ] detect platform information from remote host
[ceph-master02][DEBUG ] detect machine type
[ceph-master02][DEBUG ] find the location of an executable
[ceph-master02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master02.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-master02 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-master03
[ceph-master03][DEBUG ] connection detected need for sudo
[ceph-master03][DEBUG ] connected to host: ceph-master03
[ceph-master03][DEBUG ] detect platform information from remote host
[ceph-master03][DEBUG ] detect machine type
[ceph-master03][DEBUG ] find the location of an executable
[ceph-master03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-master03.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-master03 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpP6crY0
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] get remote short hostname
[ceph-master01][DEBUG ] fetch remote file
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-master01.asok mon_status
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-master01/keyring auth get client.admin
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-master01/keyring auth get client.bootstrap-mds
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-master01/keyring auth get client.bootstrap-mgr
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-master01/keyring auth get client.bootstrap-osd
[ceph-master01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-master01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring ceph.mon.keyring already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpP6crY0

2.3.2 Verify the mon nodes
Verify that the ceph-mon service was installed and started automatically on each mon node (one of ceph-mon's roles is to validate permissions). Initialization also generates the ceph.bootstrap-mds/mgr/osd/rgw keyring files in the ceph-deploy working directory. These keyrings carry the highest privileges on the ceph cluster, so keep them safe; they will later be distributed to the corresponding service nodes.
cephadmin@ceph-master01:/etc/ceph-cluster# ps -ef | grep ceph-mon
ceph 28179 1 0 10:52 ? 00:00:05 /usr/bin/ceph-mon -f --cluster ceph --id ceph-master01 --setuser ceph --setgroup ceph
cephadm 28519 28038 0 11:10 pts/0 00:00:00 grep --color=auto ceph-mon
cephadmin@ceph-master01:/etc/ceph-cluster# systemctl status ceph-mon.target
● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
   Loaded: loaded (/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
   Active: active since Mon 2022-09-05 09:46:11 CST; 1h 24min ago
cephadmin@ceph-master01:/etc/ceph-cluster# ll
total 248
drwxr-xr-x 2 cephadmin cephadmin 4096 Sep 5 10:52 ./
drwxr-xr-x 92 root root 4096 Sep 5 09:46 ../
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin 151 Sep 5 10:52 ceph.client.admin.keyring
-rw-rw-r-- 1 cephadmin cephadmin 326 Sep 2 16:50 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin 209993 Sep 5 10:52 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin 73 Sep 2 16:50 ceph.mon.keyring

Running ceph -s at this point reports a health warning; to clear it, execute the following on one of the mon nodes:
ceph config set mon auth_allow_insecure_global_id_reclaim false

2.4 Distribute the admin keyring
From the ceph-deploy node, copy the configuration file and admin keyring to every node that needs to run ceph management commands. Otherwise, each ceph command would have to specify the ceph-mon address and the ceph.client.admin.keyring file explicitly; the ceph-mon nodes also need the cluster configuration and authentication files synchronized to them.
cephadmin@ceph-master01:~# sudo apt install ceph-common -y  # already installed on the nodes during initialization

Push the admin keyring; by default it is distributed to /etc/ceph/. ceph.client.admin.keyring only needs to be present on hosts that run ceph client commands (much like a kubeconfig file in k8s), so send it to the ceph-deploy host used for day-to-day administration:
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy admin ceph-master01

Normally ceph.client.admin.keyring has mode 600 with owner and group root. If the cephadmin user runs ceph commands directly on a cluster node, it will complain that /etc/ceph/ceph.client.admin.keyring cannot be found, because it lacks read permission:
cephadmin@ceph-master01:~# sudo setfacl -m u:cephadmin:rw /etc/ceph/ceph.client.admin.keyring
cephadmin@ceph-master02:~# sudo setfacl -m u:cephadmin:rw /etc/ceph/ceph.client.admin.keyring
cephadmin@ceph-master03:~# sudo setfacl -m u:cephadmin:rw /etc/ceph/ceph.client.admin.keyring

2.5 Deploy the manager
Ceph Luminous (12.x) and later have manager nodes; earlier releases do not.
2.5.1 Deploy the ceph-mgr nodes
Since these hosts are also the monitor nodes, all ceph packages are already installed; if the mgr node were a separate server, this step would install them:
cephadmin@ceph-master01:~# sudo apt install ceph-mgr
Reading package lists... Done
Building dependency tree
Reading state information... Done
ceph-mgr is already the newest version (16.2.10-1bionic).
0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.

cephadmin@ceph-master02:~# sudo apt install ceph-mgr
Reading package lists... Done
Building dependency tree
Reading state information... Done
ceph-mgr is already the newest version (16.2.10-1bionic).
0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.

cephadmin@ceph-master03:~# sudo apt install ceph-mgr
Reading package lists... Done
Building dependency tree
Reading state information... Done
ceph-mgr is already the newest version (16.2.10-1bionic).
0 upgraded, 0 newly installed, 0 to remove and 202 not upgraded.

Create the mgr daemons:
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy mgr create ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-master01 ceph-master02 ceph-master03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [(ceph-master01, ceph-master01), (ceph-master02, ceph-master02), (ceph-master03, ceph-master03)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : ceph_deploy.conf.cephdeploy.Conf instance at 0x7f97e641fe60
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : function mgr at 0x7f97e687f250
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-master01:ceph-master01 ceph-master02:ceph-master02 ceph-master03:ceph-master03
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-master01
[ceph-master01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-master01][DEBUG ] create a keyring file
[ceph-master01][DEBUG ] create path recursively if it doesn't exist
[ceph-master01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-master01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph-master01/keyring
[ceph-master01][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph-master01
[ceph-master01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-master01.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-master01][INFO ] Running command: sudo systemctl start ceph-mgr@ceph-master01
[ceph-master01][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-master02][DEBUG ] connection detected need for sudo
[ceph-master02][DEBUG ] connected to host: ceph-master02
[ceph-master02][DEBUG ] detect platform information from remote host
[ceph-master02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-master02
[ceph-master02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master02][WARNIN] mgr keyring does not exist yet, creating one
[ceph-master02][DEBUG ] create a keyring file
[ceph-master02][DEBUG ] create path recursively if it doesn't exist
[ceph-master02][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-master02 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph-master02/keyring
[ceph-master02][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph-master02
[ceph-master02][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-master02.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-master02][INFO ] Running command: sudo systemctl start ceph-mgr@ceph-master02
[ceph-master02][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-master03][DEBUG ] connection detected need for sudo
[ceph-master03][DEBUG ] connected to host: ceph-master03
[ceph-master03][DEBUG ] detect platform information from remote host
[ceph-master03][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-master03
[ceph-master03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-master03][WARNIN] mgr keyring does not exist yet, creating one
[ceph-master03][DEBUG ] create a keyring file
[ceph-master03][DEBUG ] create path recursively if it doesn't exist
[ceph-master03][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-master03 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-ceph-master03/keyring
[ceph-master03][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph-master03
[ceph-master03][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-master03.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-master03][INFO ] Running command: sudo systemctl start ceph-mgr@ceph-master03
[ceph-master03][INFO ] Running command: sudo systemctl enable ceph.target

2.5.2 Verify the ceph-mgr nodes
cephadmin@ceph-master01:/etc/ceph-cluster# ps -ef | grep ceph-mgr
cephadmin@ceph-master01:/etc/ceph-cluster# systemctl status ceph-mgr@ceph-master01

cephadmin@ceph-master02:/etc/ceph-cluster# ps -ef | grep ceph-mgr
cephadmin@ceph-master02:/etc/ceph-cluster# systemctl status ceph-mgr@ceph-master02

cephadmin@ceph-master03:/etc/ceph-cluster# ps -ef | grep ceph-mgr
cephadmin@ceph-master03:/etc/ceph-cluster# systemctl status ceph-mgr@ceph-master03

2.6 Deploy the OSDs
2.6.1 Initialize the storage nodes
Run on the deploy node to install the required ceph release. Here the storage-node role is co-located with the master nodes, so the packages are already installed; when a new node joins, run:
cephadmin@ceph-master01:~# ceph-deploy install --release pacific ceph-master01 ceph-master02 ceph-master03

List the disks on each ceph node:
cephadmin@ceph-master01:~# ceph-deploy disk list ceph-master01 ceph-master02 ceph-master03
# fdisk -l also shows all unpartitioned, unused disks on a node.

Use ceph-deploy disk zap to wipe the ceph data disks on each node. For the storage disks of ceph-master01, ceph-master02 and ceph-master03, the procedure is as follows (zap can be re-run repeatedly):
ceph-deploy disk zap ceph-master01 /dev/sdb
ceph-deploy disk zap ceph-master01 /dev/sdc
ceph-deploy disk zap ceph-master01 /dev/sdd
ceph-deploy disk zap ceph-master02 /dev/sdb
ceph-deploy disk zap ceph-master02 /dev/sdc
ceph-deploy disk zap ceph-master02 /dev/sdd
ceph-deploy disk zap ceph-master03 /dev/sdb
ceph-deploy disk zap ceph-master03 /dev/sdc
ceph-deploy disk zap ceph-master03 /dev/sdd

2.6.2 OSD-to-disk deployment layouts
# With two SSDs in a server, block-db and block-wal can each be placed on its own SSD:
ceph-deploy osd create {node} --data /dev/sdc --block-db /dev/sda --block-wal /dev/sdb
# With a single SSD, specifying only --block-db places the DB on the SSD; when no WAL location is given, the WAL is also written to the faster SSD automatically, sharing it with the DB:
ceph-deploy osd create {node} --data /path/to/data --block-db /dev/sda
# The third form (WAL alone on a separate device) is of little use:
ceph-deploy osd create {node} --data /path/to/data --block-wal /dev/sda
This deployment uses the simplest option, the first one (a single disk per OSD). A high-performance ceph cluster can use the second option, keeping the metadata (DB) and WAL log on SSD.
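The first scheme, applied to every node and data disk in this cluster, can be generated with a small loop instead of typing nine commands by hand. This is a sketch: the commands are only echoed as a dry run (remove the echo to execute from the ceph-deploy working directory), and the node and device names are the ones used in this document.

```shell
# Dry run: print one "ceph-deploy osd create" command per node/disk pair.
# Remove the echo to actually execute (run as cephadmin from /etc/ceph-cluster).
for node in ceph-master01 ceph-master02 ceph-master03; do
  for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo "ceph-deploy osd create ${node} --data ${dev}"
  done
done
```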
2.6.3 Add the OSDs
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master01 --data /dev/sdb
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master01 --data /dev/sdc
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master01 --data /dev/sdd
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master02 --data /dev/sdb
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master02 --data /dev/sdc
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master02 --data /dev/sdd
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master03 --data /dev/sdb
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master03 --data /dev/sdc
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy osd create ceph-master03 --data /dev/sdd

2.6.4 Verify the ceph cluster
cephadmin@ceph-master01:/etc/ceph-cluster# ceph -s
  cluster:
    id:     f69afe6f-e559-4df7-998a-c5dc3e300209
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-master01,ceph-master02,ceph-master03 (age 31m)
    mgr: ceph-master03(active, since 27h), standbys: ceph-master01, ceph-master02
    osd: 9 osds: 9 up (since 27h), 9 in (since 28h)

  data:
    pools:   2 pools, 33 pgs
    objects: 1 objects, 100 MiB
    usage:   370 MiB used, 450 GiB / 450 GiB avail
    pgs:     33 active+clean

2.7 Test uploading and downloading data
To store or retrieve data, a client first connects to a storage pool in the RADOS cluster; the object is then located by name through the pool's CRUSH rule. To test the cluster's data path, first create a test pool named mypool with 32 PGs.
$ ceph -h    # cluster admin command
$ rados -h   # lower-level object client command

Create a pool
cephadmin@ceph-master01:~# ceph osd pool create mypool 32 32
pool 'mypool' created
cephadmin@ceph-master01:/etc/ceph-cluster# sudo ceph osd pool ls
device_health_metrics
mypool
Or:
cephadmin@ceph-master01:/etc/ceph-cluster# rados lspools
device_health_metrics
mypool
Or:
cephadmin@ceph-master01:/etc/ceph-cluster# ceph osd lspools
1 device_health_metrics
2 mypool

Upload data
No block storage, filesystem, or object-storage client is deployed against this cluster yet, but the rados command can access Ceph object storage directly:
cephadmin@ceph-master01:~# sudo rados put msg1 /var/log/syslog --pool=mypool

List objects
cephadmin@ceph-master01:/etc/ceph-cluster# rados ls --pool=mypool
msg1

Object placement info
cephadmin@ceph-master01:/etc/ceph-cluster# ceph osd map mypool msg1
osdmap e114 pool 'mypool' (2) object 'msg1' -> pg 2.c833d430 (2.10) -> up ([15,13,0], p15) acting ([15,13,0], p15)
The object name hashed to c833d430 and mapped to PG 2.10, i.e. PG 10 of the pool with id 2. The online OSD set is [15,13,0] with OSD 15 as the primary, and the acting set is the same; three OSDs means the data is kept in 3 replicas. Which OSDs host a PG is computed by Ceph's CRUSH algorithm.
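The name-to-PG step above can be sketched in a few lines: hash the object name, then reduce it modulo the pool's PG count. Ceph actually uses the rjenkins hash and a "stable mod"; the crc32 stand-in here is only to illustrate the idea, so the PG it prints will not match the real output.

```python
import zlib

def object_to_pg(obj_name: str, pg_num: int, pool_id: int) -> str:
    """Simplified sketch of object->PG placement: hash the object name,
    then reduce it modulo pg_num. (Real Ceph uses rjenkins + stable mod.)"""
    h = zlib.crc32(obj_name.encode())   # stand-in for ceph_str_hash_rjenkins
    pg_id = h % pg_num                  # stand-in for ceph_stable_mod
    return f"{pool_id}.{pg_id:x}"       # PG ids are printed in hex, e.g. 2.10

print(object_to_pg("msg1", 32, 2))      # deterministic: same name -> same PG
```

CRUSH then maps that PG to an ordered OSD set; because the hash is deterministic, every client computes the same placement with no lookup table.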
Download the object
cephadmin@ceph-master01:/etc/ceph-cluster# sudo rados get msg1 --pool=mypool /opt/my.txt
cephadminceph-master01:/etc/ceph-cluster# ll /opt/my.txt
-rw-r--r-- 1 root root 155733 Sep 7 20:51 /opt/my.txt
cephadminceph-master01:/etc/ceph-cluster# head /opt/my.txt
Sep 7 06:25:06 ceph-master01 rsyslogd: [origin software="rsyslogd" swVersion="8.32.0" x-pid="998" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Sep 7 06:26:01 ceph-master01 CRON[10792]: (root) CMD (ntpdate ntp.aliyun.com)
Sep 7 06:26:01 ceph-master01 CRON[10791]: (CRON) info (No MTA installed, discarding output)
Sep 7 06:27:01 ceph-master01 CRON[10794]: (root) CMD (ntpdate ntp.aliyun.com)
Sep 7 06:27:01 ceph-master01 CRON[10793]: (CRON) info (No MTA installed, discarding output)
Sep 7 06:28:01 ceph-master01 CRON[10797]: (root) CMD (ntpdate ntp.aliyun.com)
Sep 7 06:28:01 ceph-master01 CRON[10796]: (CRON) info (No MTA installed, discarding output)
Sep 7 06:29:01 ceph-master01 CRON[10799]: (root) CMD (ntpdate ntp.aliyun.com)
Sep 7 06:29:01 ceph-master01 CRON[10798]: (CRON) info (No MTA installed, discarding output)
Sep 7 06:30:01 ceph-master01 CRON[10801]: (root) CMD (ntpdate ntp.aliyun.com)

Modify the object
An object cannot be edited in place; download it, modify it, then upload it again to overwrite:
cephadmin@ceph-master01:/etc/ceph-cluster# sudo rados put msg1 /etc/passwd --pool=mypool

Delete the object
cephadmin@ceph-master01:/etc/ceph-cluster# sudo rados rm msg1 --pool=mypool
cephadmin@ceph-master01:/etc/ceph-cluster# rados ls --pool=mypool

3. Ceph RBD usage in detail
3.1 RBD architecture
Ceph can simultaneously provide RADOSGW (object storage gateway), RBD (block storage), and CephFS (file storage). RBD, short for RADOS Block Device, is one of the most commonly used storage types: an RBD block device behaves like a disk and can be mounted, supports snapshots, multiple replicas, clones and consistency, and its data is striped across multiple OSDs in the Ceph cluster.
Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous piece of data is divided into many small parts that are stored on different disks. This lets multiple processes access different parts of the data at the same time without disk contention, and gives maximum I/O parallelism for sequential access, yielding very good performance.

3.2 Create a storage pool
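A minimal sketch of that striping idea: cut a byte stream into fixed-size units and deal them round-robin across disks. The unit size and disk count here are made up purely for illustration.

```python
def stripe(data: bytes, unit: int, ndisks: int) -> list:
    """Deal fixed-size stripe units round-robin across ndisks disks."""
    disks = [bytearray() for _ in range(ndisks)]
    for i in range(0, len(data), unit):
        disks[(i // unit) % ndisks] += data[i:i + unit]
    return [bytes(d) for d in disks]

# units: ab cd ef gh ij -> disk0 gets ab+gh, disk1 gets cd+ij, disk2 gets ef
print(stripe(b"abcdefghij", unit=2, ndisks=3))
```

A sequential read of the original stream then touches all three disks at once, which is where the parallelism comes from.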
# Create the pool
root@ceph-master01:~# ceph osd pool create rbd-data1 32 32
pool 'rbd-data1' created
# Enable the rbd application on the pool
root@ceph-master01:~# ceph osd pool application enable rbd-data1 rbd
enabled application 'rbd' on pool 'rbd-data1'
# Initialize the pool for RBD
root@ceph-master01:~# rbd pool init -p rbd-data1

3.3 Create an image
An RBD pool cannot be used as a block device directly; images must first be created in it on demand, and the image is what is used as the block device. The rbd command creates, lists and deletes block-device images, and also clones images, creates snapshots, rolls images back to snapshots, views snapshots, and so on. For example, the following commands create an image in the rbd-data1 pool.
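The object count that `rbd info` reports in 3.3.2 follows from simple arithmetic: with the default order of 22, each backing RADOS object is 2^22 bytes (4 MiB), so a 3 GiB image maps to 768 objects. A quick sketch:

```python
def rbd_object_count(size_bytes: int, order: int = 22) -> int:
    """Objects backing an RBD image: each object is 2**order bytes
    (order 22 -> 4 MiB, the rbd default)."""
    obj_size = 1 << order
    return (size_bytes + obj_size - 1) // obj_size  # round up

GiB = 1 << 30
print(rbd_object_count(3 * GiB))  # 3 GiB / 4 MiB = 768 objects
```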
3.3.1 Create the image
root@ceph-master01:~# rbd create data-img1 --size 3G --pool rbd-data1 --image-format 2 --image-feature layering
# List images
root@ceph-master01:~# rbd ls --pool rbd-data1 -l
NAME      SIZE  PARENT FMT PROT LOCK
data-img1 3 GiB        2

3.3.2 Show image details
root@ceph-master01:~# rbd --image data-img1 --pool rbd-data1 info
rbd image 'data-img1':
    size 3 GiB in 768 objects
    order 22 (4 MiB objects)    # 3 GiB = 768 objects of 4 MiB each
    snapshot_count: 0
    id: 284d64e8f879d           # image id
    block_name_prefix: rbd_data.284d64e8f879d
    format: 2
    features: layering          # image features
    op_features:
    flags:
    create_timestamp: Fri Sep 16 20:34:47 2022
    access_timestamp: Fri Sep 16 20:34:47 2022
    modify_timestamp: Fri Sep 16 20:34:47 2022

# Show details as JSON
root@ceph-master01:~# rbd ls --pool rbd-data1 -l --format json --pretty-format
[
    {
        "image": "data-img1",
        "id": "284d64e8f879d",
        "size": 3221225472,
        "format": 2
    }
]

3.3.3 Image features
RBD features enabled by default include: layering / exclusive-lock / object-map / fast-diff / deep-flatten.
# Enable features on a given image in a given pool
$ rbd feature enable exclusive-lock --pool rbd-data1 --image data-img1
$ rbd feature enable object-map --pool rbd-data1 --image data-img1
$ rbd feature enable fast-diff --pool rbd-data1 --image data-img1
# Disable a feature on a given image
$ rbd feature disable fast-diff --pool rbd-data1 --image data-img1
# Verify the image features
$ rbd --image data-img1 --pool rbd-data1 info

3.4 Using RBD from a client
A client needs two things to use RBD:
1. the ceph-common client package installed;
2. a Ceph user (keyring).

3.4.1 Install ceph-common on the client
To map and mount a Ceph RBD image, a client must install the ceph-common package. It is not in the stock CentOS yum repos, so a dedicated repo must be configured; on CentOS 7 the newest installable release is Octopus (v15).
# Configure the yum repo
$ yum install epel-release
$ yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
# Install ceph-common
$ yum install -y ceph-common
# Verify ceph-common
[root@zd_spring_156_101 ~]# rpm -qa | grep ceph-common
python3-ceph-common-15.2.17-0.el7.x86_64
ceph-common-15.2.17-0.el7.x86_64

3.4.2 Sync the account auth files
# scp to /etc/ceph/ on the client; the client reads this path by default
[cephadmin@ceph-deploy ceph-cluster]$ scp ceph.conf ceph.client.admin.keyring root@172.26.156.17:/etc/ceph/

3.4.3 Map the image on the client
# Map the RBD image
[root@xianchaonode1 ~]# rbd -p rbd-data1 map data-img1
/dev/rbd0
# Verify the mapping on the client
[root@xianchaonode1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 253:0 0 3G 0 disk
sr0 11:0 1 4.2G 0 rom
sda 8:0 0 200G 0 disk
├─sda2 8:2 0 199.8G 0 part /
└─sda1 8:1 0 200M 0 part /boot

3.4.4 Mount and use on the client
# Create a filesystem on the device
[root@xianchaonode1 ~]# mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=98304 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=786432, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@xianchaonode1 ~]# mount /dev/rbd0 /mnt/
[root@xianchaonode1 ~]# echo 111 > /mnt/test.txt
[root@xianchaonode1 ~]# cat /mnt/test.txt
111
[root@xianchaonode1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 7.9G 795M 7.1G 10% /run
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 200G 62G 138G 31% /
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/rbd0 3.0G 33M 3.0G 2% /mnt
[root@xianchaonode1 ~]#

4. CephFS usage in detail
CephFS (the Ceph filesystem) provides POSIX-compliant shared filesystem storage; clients mount it over the Ceph protocol and use the Ceph cluster as the data store (http://docs.ceph.org.cn/cephfs/). CephFS requires the Metadata Service (MDS), whose daemon is ceph-mds; it manages the metadata of files stored on CephFS and coordinates access to the Ceph storage cluster. On an ordinary Linux filesystem, operations such as ls read metadata (file name, creation date, size, inode, storage location) from on-disk structures. In CephFS the data is scattered as discrete objects, so file metadata is not kept with the data but in a separate metadata pool. Clients do not access that pool directly; the MDS (metadata server) mediates reads and writes: on reads it loads metadata from the metadata pool, caches it in memory (to answer later clients quickly) and returns it to the client; on writes it caches the metadata in memory and syncs it to the metadata pool.

4.1 Deploy the MDS service
To use CephFS, the MDS service must be deployed; it can run on the mon nodes.
root@ceph-master01:~# apt-cache madison ceph-mds
root@ceph-master01:~# apt install ceph-mds
root@ceph-master01:~# ceph-deploy mds create ceph-master01 ceph-master02 ceph-master03
# Check active/standby status
ceph -s
ceph fs status

4.2 Create the CephFS metadata and data pools
Before using CephFS, a filesystem must be created in the cluster with its metadata and data pools. Below we create a filesystem named mycephfs for testing, with cephfs-metadata as its metadata pool and cephfs-data as its data pool.
root@ceph-master01:~# ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
root@ceph-master01:~# ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created

4.3 Create the CephFS and verify
root@ceph-master01:~# ceph fs new mycephfs cephfs-metadata cephfs-data
new fs with metadata pool 5 and data pool 6
root@ceph-master01:~# ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
root@ceph-master01:~# ceph fs status mycephfs
mycephfs - 0 clients
POOL TYPE USED AVAIL
cephfs-metadata  metadata     0   142G
cephfs-data      data         0   142G

4.4 Create a CephFS client account
# Create the account
root@ceph-master01:/etc/ceph-cluster# ceph auth add client.yanyan mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.yanyan
# Verify the account
root@ceph-master01:/etc/ceph-cluster# ceph auth get client.yanyan
[client.yanyan]
    key = AQDnhSljvlhoLxAAWrV9uY1kXq5/C0jAziaB9Q==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.yanyan
root@ceph-master01:/etc/ceph-cluster# ceph auth get client.yanyan -o ceph.client.yanyan.keyring
exported keyring for client.yanyan
root@ceph-master01:/etc/ceph-cluster# ll
total 416
drwxr-xr-x 2 cephadmin cephadmin 4096 Sep 20 17:21 ./
drwxr-xr-x 92 root root 4096 Sep 5 09:46 ../
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin 151 Sep 5 10:52 ceph.client.admin.keyring
-rw-r--r-- 1 root root 150 Sep 20 17:21 ceph.client.yanyan.keyring
-rw-rw-r-- 1 cephadmin cephadmin 398 Sep 7 20:01 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin 368945 Sep 7 20:02 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin 73 Sep 2 16:50 ceph.mon.keyring
-rw-r--r-- 1 root root 9 Sep 12 13:06 pass.txt
-rw-r--r-- 1 root root 1645 Oct 16 2015 release.asc
root@ceph-master01:/etc/ceph-cluster# ceph auth print-key client.yanyan > yanyan.key
root@ceph-master01:/etc/ceph-cluster# ll
total 420
drwxr-xr-x 2 cephadmin cephadmin 4096 Sep 20 17:21 ./
drwxr-xr-x 92 root root 4096 Sep 5 09:46 ../
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin 113 Sep 5 10:52 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin 151 Sep 5 10:52 ceph.client.admin.keyring
-rw-r--r-- 1 root root 150 Sep 20 17:21 ceph.client.yanyan.keyring
-rw-rw-r-- 1 cephadmin cephadmin 398 Sep 7 20:01 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin 368945 Sep 7 20:02 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin 73 Sep 2 16:50 ceph.mon.keyring
-rw-r--r-- 1 root root 9 Sep 12 13:06 pass.txt
-rw-r--r-- 1 root root 1645 Oct 16 2015 release.asc
-rw-r--r-- 1 root root 40 Sep 20 17:21 yanyan.key
root@ceph-master01:/etc/ceph-cluster# cat ceph.client.yanyan.keyring
[client.yanyan]
    key = AQDnhSljvlhoLxAAWrV9uY1kXq5/C0jAziaB9Q==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rwx pool=cephfs-data"
root@ceph-master01:/etc/ceph-cluster#

4.5 Install the Ceph client
# On a CentOS client
yum install epel-release -y
yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install ceph-common -y

4.6 Sync the auth files
root@ceph-master01:~# cd /etc/ceph-cluster/
root@ceph-master01:/etc/ceph-cluster# scp ceph.conf ceph.client.yanyan.keyring yanyan.key root@172.26.156.165:/etc/ceph/
# Verify client authentication
[root@zd_spring_156_101 ceph]# ceph --user yanyan -s

4.7 Install ceph-common on the client
# Configure the yum repo
$ yum install epel-release
$ yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
# Install ceph-common
$ yum install -y ceph-common
# Verify ceph-common
[root@zd_spring_156_101 ~]# rpm -qa | grep ceph-common
python3-ceph-common-15.2.17-0.el7.x86_64
ceph-common-15.2.17-0.el7.x86_64

4.8 Mounting and using CephFS
Clients can mount CephFS in two ways: kernel space and user space. A kernel-space mount needs kernel support for the ceph module (kernel 3.10.0-862 or later; the CentOS 7.5 default kernel qualifies, and CentOS 7.5+ default kernels generally ship the ceph module, while CentOS 7.3 and older default kernels are untested). If the kernel is too old and lacks the ceph module, ceph-fuse can be installed for a user-space mount instead, but the kernel-module mount is recommended.
4.8.1 Kernel-space mount of CephFS
# Mount with the key on the command line (ceph-common not required)
[root@other165 ~]# cat /etc/ceph/yanyan.key
AQDnhSljvlhoLxAAWrV9uY1kXq5/C0jAziaB9Q==
[root@other165 ~]# mount -t ceph 172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789:/ /mnt -o name=yanyan,secret=AQDnhSljvlhoLxAAWrV9uY1kXq5/C0jAziaB9Q==
# Mount with a key file (requires ceph-common)
[root@other165 ~]# mount -t ceph 172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789:/ /mnt -o name=yanyan,secretfile=/etc/ceph/yanyan.key

4.8.2 Mount at boot
# cat /etc/fstab
172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789:/ /mnt ceph defaults,name=yanyan,secretfile=/etc/ceph/yanyan.key,_netdev 0 0

4.9 User-space mount of CephFS
If the kernel is too old and lacks the ceph module, ceph-fuse can be installed to mount CephFS, though the kernel module is recommended.
4.9.1 Install ceph-fuse
# Configure the yum repo
$ yum install epel-release
$ yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
# Install ceph-fuse
$ yum install ceph-fuse -y

4.9.2 Mount with ceph-fuse
# Reads keyring and config from /etc/ceph/ by default
ceph-fuse --name client.yanyan -m 172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789 /mnt

4.9.3 Mount at boot
With a named user, the matching keyring and ceph.conf are loaded automatically.
vim /etc/fstab
none /data fuse.ceph ceph.id=yanyan,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0

5. Using Ceph from Kubernetes
5.1 RBD static provisioning
5.1.1 Mount RBD via PV/PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.26.156.217:6789
      - 172.26.156.218:6789
      - 172.26.156.219:6789
    pool: k8stest            # must exist beforehand
    image: rbda              # must exist beforehand
    user: admin              # must exist beforehand
    secretRef:
      name: ceph-secret      # must exist beforehand
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

5.1.2 Mount RBD directly from a pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:             # rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      nodeName: xianchaonode1
      containers:
        - name: ng-deploy-80
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: rbd-data1
              mountPath: /usr/share/nginx/html/rbd
      volumes:
        - name: rbd-data1
          rbd:
            monitors:
              - 172.26.156.217:6789
              - 172.26.156.218:6789
              - 172.26.156.219:6789
            pool: shijie-rbd-pool1
            image: shijie-img-img1
            fsType: xfs
            readOnly: false
            user: magedu-shijie
            secretRef:
              name: ceph-secret-magedu-shijie

5.1 RBD dynamic StorageClass
Storage volumes can also be created dynamically by the kube-controller-manager, which suits stateful services that need many volumes. The ceph admin user's key is stored as a k8s secret so that k8s can call Ceph with admin rights to create volumes on demand; images no longer have to be created in advance, k8s asks Ceph to create them when they are needed.
5.1.1 Create the rbd pool
root@ceph-master01:/etc/ceph# ceph osd pool create k8s-rbd 32 32
pool 'k8s-rbd' created
root@ceph-master01:/etc/ceph# ceph osd pool application enable k8s-rbd rbd
enabled application 'rbd' on pool 'k8s-rbd'
root@ceph-master01:/etc/ceph# rbd pool init -p k8s-rbd

5.1.2 Create the admin user secret
Gives k8s permission to create RBD images.
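A Kubernetes Secret's `data` fields are base64-encoded, which is exactly what the `ceph auth print-key client.admin | base64` pipeline below produces. A small sketch (the key string is the one that appears elsewhere in this post; treat it as a placeholder):

```python
import base64

# placeholder cephx key; a real one comes from `ceph auth print-key client.admin`
key = "AQB3ZBVjL8N0EBAAReG0q3rpVF/8DdnMuryZNA=="
encoded = base64.b64encode(key.encode()).decode()
print(encoded)  # this string goes into the Secret's data.key field

# the RBD provisioner gets the decoded key back from the Secret
assert base64.b64decode(encoded).decode() == key
```

Using `stringData` instead of `data` in the manifest would let you paste the raw key and have the API server do the encoding.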
# Get the ceph admin key, base64-encoded
root@ceph-master01:/etc/ceph# ceph auth print-key client.admin | base64
QVFCM1pCVmpMOE4wRUJBQVJlRzBxM3JwVkYvOERkbk11cnlaTkE9PQ==
# The admin user secret manifest
[root@xianchaomaster1 pod-rbd]# vi case1-secret-admin.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: default
type: kubernetes.io/rbd
data:
  key: QVFCM1pCVmpMOE4wRUJBQVJlRzBxM3JwVkYvOERkbk11cnlaTkE9PQ==

5.1.3 Create the normal user secret
Used by workloads to read and write the volume.
root@ceph-master01:/etc/ceph# ceph auth get-or-create client.k8s-rbd mon 'allow r' osd 'allow * pool=k8s-rbd'
[client.k8s-rbd]
    key = AQAMgkZjDyhsMhAAEH8F0Gwe3L+aiP/wAkqdyA==
root@ceph-master01:/etc/ceph# ceph auth print-key client.k8s-rbd | base64
QVFBTWdrWmpEeWhzTWhBQUVIOEYwR3dlM0wrYWlQL3dBa3FkeUE9PQ==

vi case2-secret-client.yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8s-rbd
type: kubernetes.io/rbd
data:
  key: QVFBTWdrWmpEeWhzTWhBQUVIOEYwR3dlM0wrYWlQL3dBa3FkeUE9PQ==

5.1.4 Create the StorageClass
The dynamic StorageClass provides dynamically provisioned PVs to pods.
vi case3-ceph-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"   # whether this is the default StorageClass
provisioner: kubernetes.io/rbd
reclaimPolicy: Retain          # the default is Delete, which is risky
parameters:
  monitors: 172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: k8s-rbd
  userId: k8s-rbd
  userSecretName: k8s-rbd

5.1.5 Create a PVC backed by the StorageClass
vi case4-mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class
  resources:
    requests:
      storage: 5Gi

# Verify the PV/PVC
kubectl get pvc
kubectl get pv
# Verify that Ceph created the image automatically
rbd ls --pool k8s-rbd

5.1.6 Verify with a single-instance MySQL pod
vi case5-mysql-deploy-svc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6.46
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 3306
      protocol: TCP
      targetPort: 3306
      nodePort: 33306
  selector:
    app: mysql

# Connect and create a test database, then delete and recreate the mysql pod to verify that the RBD data persists:
kubectl delete -f case5-mysql-deploy-svc.yaml
kubectl apply -f case5-mysql-deploy-svc.yaml
# Schedule the pod to a different node and verify it can still mount the RBD image (it can):
kubectl delete -f case5-mysql-deploy-svc.yaml
kubectl apply -f case5-mysql-deploy-svc.yaml

5.2 CephFS static provisioning
5.2.1 Mount CephFS via PV/PVC
Note: to share one CephFS pool among several deployments as separate directories, the subdirectories must be created in CephFS in advance; otherwise pods can only mount the CephFS root /. Mount the CephFS on any Linux host and create the /data2 directory first:
mount -t ceph 172.26.156.217:6789,172.26.156.218:6789,172.26.156.219:6789:/ /mnt -o name=admin,secret=AQB3ZBVjL8N0EBAAReG0q3rpVF/8DdnMuryZNA==
# Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
  labels:
    app: static-cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 172.26.156.217:6789
      - 172.26.156.218:6789
      - 172.26.156.219:6789
    path: /data2/            # /data2 must be created in CephFS beforehand
    user: admin
    secretRef:
      name: ceph-secret-admin
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
---
# Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-claim
spec:
  selector:
    matchLabels:
      app: static-cephfs-pv
  storageClassName: slow
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
#deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      k8s-app: nginx2
  replicas: 2
  template:
    metadata:
      labels:
        k8s-app: nginx2
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: pvc-recycle
              mountPath: /usr/share/nginx/html/nginx2
      volumes:
        - name: pvc-recycle
          persistentVolumeClaim:
            claimName: cephfs-pvc-claim
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: nginx2
  name: ng-deploy-80-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 23380
  selector:
    k8s-app: nginx2

5.2 Mount CephFS directly from a pod
No PV/PVC is needed; the deployment mounts CephFS directly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:             # rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
        - name: ng-deploy-80
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: magedu-staticdata-cephfs
              mountPath: /usr/share/nginx/html/cephfs
      volumes:
        - name: magedu-staticdata-cephfs
          cephfs:
            monitors:
              - 172.26.156.217:6789
              - 172.26.156.218:6789
              - 172.26.156.219:6789
            path: /
            user: admin
            secretRef:
              name: ceph-secret-admin
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: ng-deploy-80-service-label
  name: ng-deploy-80-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 33380
  selector:
    app: ng-deploy-80

5.4 CephFS dynamic StorageClass
Upstream does not directly support a CephFS StorageClass, but the community offers a similar solution: external-storage/cephfs.
Testing shows the CephFS StorageClass no longer works on k8s 1.20+; deployed this way, it fails with an error. The workaround circulating online is to add --feature-gates=RemoveSelfLink=false to kube-apiserver.yaml, but that flag was removed after k8s 1.20, so ceph-csi is used instead.
The (unsuccessful) CephFS StorageClass deployment approach:
https://www.cnblogs.com/leffss/p/15630641.html
https://www.cnblogs.com/estarhaohao/p/15965785.html
github issues: https://github.com/kubernetes/kubernetes/issues/94660

5.5 ceph-csi dynamic provisioning
Ceph-CSI RBD
https://www.modb.pro/db/137721
Ceph-CSI CephFS
The latest version (3.7) ran into problems, so CSI 3.4 is used instead.
git clone https://github.com/ceph/ceph-csi.git -b release-v3.4
cd ceph-csi/deploy/cephfs/kubernetes

Edit the ConfigMap object; clusterID is the Ceph fsid.
vi csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f69afe6f-e559-4df7-998a-c5dc3e300209",
        "monitors": [
          "172.26.156.217:6789",
          "172.26.156.218:6789",
          "172.26.156.219:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config

ceph-csi deploys to the default namespace by default; move it to kube-system:
sed -i 's/namespace: default/namespace: kube-system/g' $(grep -rl 'namespace: default' ./)

Deploy ceph-csi CephFS. The images come from k8s.gcr.io; if some fail to pull, search Docker Hub for replacements.
kubectl get po -n kube-system | grep csi-cephfs
csi-cephfsplugin-8xt97 3/3 Running 0 6d10h
csi-cephfsplugin-bmxwr 3/3 Running 0 6d10h
csi-cephfsplugin-n74cd 3/3 Running 0 6d10h
csi-cephfsplugin-provisioner-79d84c9598-fb6bg 6/6 Running 0 6d10h
csi-cephfsplugin-provisioner-79d84c9598-g579j 6/6 Running 0 6d10h
csi-cephfsplugin-provisioner-79d84c9598-n8w2j 6/6 Running 0 6d10h

Create the CephFS StorageClass
ceph-csi needs cephx credentials to talk to the Ceph cluster; the admin user is used here.
vi secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: default
stringData:
  adminID: admin
  adminKey: AQB3ZBVjL8N0EBAAReG0q3rpVF/8DdnMuryZNA==

Create the StorageClass object. The CephFS name used here is mycephfs (the name given when the cephfs was created; it is not a pool).
#ceph fs new mycephfs cephfs-metadata cephfs-data
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-csi-cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: f69afe6f-e559-4df7-998a-c5dc3e300209
  fsName: mycephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard

Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-csi-cephfs

The PV is created automatically and bound to the PVC. Create a test Deployment:
vi Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-test
  labels:
    component: cephfs-test
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: cephfs-test
  template:
    metadata:
      labels:
        component: cephfs-test
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config
              mountPath: /data
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: csi-cephfs-pvc
            readOnly: false

csi-cephfs creates a subvolume group named csi by default:
# ceph fs subvolumegroup ls cephfs
[
    {
        "name": "_deleting"
    },
    {
        "name": "csi"
    }
]

All PVs created via csi-cephfs live under the csi subvolume group's directory:
# kubectl get pv | grep default/csi-cephfs-pvc
pvc-0f36fd44-40f1-4ac3-aebe-0264a2fb50ea   1Gi   RWX   Delete   Bound   default/csi-cephfs-pvc   ceph-csi-cephfs   6d11h
# kubectl describe pv pvc-0f36fd44-40f1-4ac3-aebe-0264a2fb50ea | egrep 'subvolumeName|subvolumePath'
        subvolumeName=csi-vol-056e44c5-eddf-11eb-a990-a63fe71a40b6
        subvolumePath=/volumes/csi/csi-vol-056e44c5-eddf-11eb-a990-a63fe71a40b6/e423daf3-017b-4a7e-8713-bd05bab695ee
# cd /mnt/cephfs-test/
# tree -L 4 ./
./
└── volumes
    ├── csi
    │   ├── csi-vol-056e44c5-eddf-11eb-a990-a63fe71a40b6
    │   │   └── e423daf3-017b-4a7e-8713-bd05bab695ee
    │   └── csi-vol-1ac1f4c1-ef8a-11eb-a990-a63fe71a40b6
    │       └── 3773a567-a8cb-4bae-9181-38f4e3065436
    ├── _csi:csi-vol-056e44c5-eddf-11eb-a990-a63fe71a40b6.meta
    ├── _csi:csi-vol-1ac1f4c1-ef8a-11eb-a990-a63fe71a40b6.meta
    └── _deleting

7 directories, 2 files

Understanding
1. The relationship between stored data, objects, PGs/PGPs, pools, OSDs and disks
2. Understanding FileStore, BlueStore and the journal
Ceph supports several backend storage engines; the two most often discussed are FileStore and BlueStore. Up to and including the L (Luminous) release, FileStore was the default, but because of its shortcomings BlueStore was designed and became the default engine after L.
BlueStore writes whole-object data directly to disk via AIO, avoiding FileStore's two disk writes (journal first, then apply to the actual disk). Small random I/O goes as WAL entries straight into the high-performance RocksDB KV store.
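The difference can be counted in bytes written to disk: FileStore journals the full data and then applies it (2x), while BlueStore writes data once and only defers small overwrites through the RocksDB WAL. A toy accounting sketch, with an illustrative deferred-write threshold (the real cutoff is governed by BlueStore's min_alloc/deferred settings):

```python
def filestore_bytes(write_sizes):
    """FileStore: every write is journaled first, then applied -> 2x data."""
    return sum(2 * s for s in write_sizes)

def bluestore_bytes(write_sizes, deferred_cutoff=64 * 1024):
    """BlueStore: large writes go straight to disk via AIO; small overwrites
    (below the cutoff, a simplification) also pass through the RocksDB WAL."""
    total = 0
    for s in write_sizes:
        total += s                      # the data itself, written once
        if s < deferred_cutoff:
            total += s                  # small write additionally hits the WAL
    return total

writes = [4 * 1024, 1024 * 1024]        # one small, one large write
print(filestore_bytes(writes), bluestore_bytes(writes))
```

For the large write BlueStore does half the disk traffic of FileStore, which is the whole point of dropping the journal.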
So the tuning guides you see online that dedicate an SSD to Ceph as a journal device only apply to releases before L; from L onward there is no journal, and the following layout is used instead:
# With two SSDs in the server, block-db and block-wal can each be placed on their own SSD:
ceph-deploy osd create ceph-node1 --data /dev/sdc --block-db /dev/sda --block-wal /dev/sdb
# With only one SSD, specify just the DB location; the WAL is automatically placed on the same, faster SSD and shares it with the DB:
ceph-deploy osd create ceph-node1 --data /dev/sdb --block-db /dev/sda

Tasks
1. The correct way to remove an OSD
Before Luminous
1. Adjust the OSD's crush weight:
ceph osd crush reweight osd.0 0.5
ceph osd crush reweight osd.0 0.2
ceph osd crush reweight osd.0 0
Note: to drain gradually, lower the crush weight to 0 in several steps. This moves data off the OSD to other nodes until nothing remains on it and migration completes. Lowering the OSD's crush weight also lowers the host's weight and thus the cluster-wide CRUSH distribution; once the OSD's crush weight is 0, subsequent removal operations on it no longer affect data placement.
2. Stop the OSD process:
systemctl stop ceph-osd@0.service
This tells the cluster the OSD process is gone and no longer serving; since the OSD has no weight, placement is unaffected and no migration occurs.
3. Mark the OSD out:
ceph osd out osd.0
This tells the cluster the OSD no longer maps data; with zero weight there is no effect on placement and no migration.
4. Remove the OSD from CRUSH:
ceph osd crush remove osd.0
This deletes the OSD from the CRUSH map; its weight is already 0, so the host weight is unchanged and no migration occurs.
5. Remove the OSD:
ceph osd rm osd.0
This deletes the OSD's record from the cluster.
6. Delete the OSD's auth entry (otherwise the id stays occupied):
ceph auth del osd.0
This removes the OSD's information from authentication.
Verified: this sequence triggers only one data migration. Although it is merely a reordering of steps, for a production cluster that is one migration fewer. Production clusters can mark nodes out automatically; you may want to control that yourself at the cost of denser monitoring, since letting the cluster handle all data migration entirely on its own is unrealistic and only brings more faults.
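The stepped reweight described above is easy to generate programmatically; a sketch that emits the command sequence (the step values are the ones used in this section):

```python
def reweight_steps(osd_id, start, steps=(0.5, 0.2, 0.0)):
    """Emit the gradual 'ceph osd crush reweight' commands used above,
    stepping the weight down so data drains in stages rather than at once."""
    cmds = []
    for w in steps:
        if w < start:
            cmds.append(f"ceph osd crush reweight osd.{osd_id} {w}")
    return cmds

for cmd in reweight_steps(0, start=1.0):
    print(cmd)
```

Between steps you would wait for `ceph -s` to report all PGs active+clean before lowering the weight further.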
Luminous and later
1. Adjust the OSD's crush weight:
ceph osd crush reweight osd.0 0.5
ceph osd crush reweight osd.0 0.2
ceph osd crush reweight osd.0 0
Note: as above, lower the crush weight to 0 in several steps so data drains gradually to other nodes; once the weight is 0, further removal operations no longer affect data placement.
2. Stop the OSD process:
systemctl stop ceph-osd@0.service
3. Mark the OSD out:
ceph osd out osd.0
4. Purge the device:
ceph osd purge {id} --yes-i-really-mean-it
If the OSD's configuration exists in the ceph.conf file, the administrator must remove it there manually after deleting the OSD.
2. Reformatting an old OSD and adding it to a new Ceph cluster
An old ceph OSD disk is to be formatted and then joined to a new Ceph cluster.
Query the OSD's existing data (used in later steps):
[root@ceph-207 ~]# ceph-volume lvm list

====== osd.1 ======

  [block]       /dev/ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34

      block device    /dev/ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34
block uuid hCx4XW-OjKC-OC8Y-jEg2-NKYo-Pb6f-y9Nfl3
cephx lockbox secret
cluster fsid b7e4cb56-9cc8-4e44-ab87-24d4253d0951
cluster name ceph
crush device class None
encrypted 0
osd fsid e0efe172-778e-46e1-baa2-cd56408aac34
osd id 1
osdspec affinity
type block
vdo 0
devices /dev/sdb
Joining the new cluster directly fails:
ceph-volume lvm activate 1 e0efe172-778e-46e1-baa2-cd56408aac34
Two kinds of errors were encountered:
osd.1 21 heartbeat_check: no reply from 192.168.8.206:6804 osd.0 ever on either front or back, first ping sent 2020-11-26T16:00:04.842947+0800 (oldest deadline 2020-11-26T16:00:24.842947+0800)
stderr: Calculated size of logical volume is 0 extents. Needs to be larger.
--> Was unable to complete a new OSD, will rollback changes
Format the data and rejoin the new cluster:
1. Stop the osd service (the 1 below is the osd id reported by ceph-volume lvm list):
systemctl stop ceph-osd@1
2. Zap the OSD's LVM data (1 is again the osd id):
ceph-volume lvm zap --osd-id 1
3. Query lvs and delete the LV/VG:
[root@ceph-207 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
osd-block-e0efe172-778e-46e1-baa2-cd56408aac34 ceph-58ef1d0f-272b-4273-82b1-689946254645 -wi-a----- 16.00g
home cl -wi-ao---- 145.12g
root cl -wi-ao---- 50.00g
swap cl -wi-ao---- 3.88g
[root@ceph-207 ~]# vgremove ceph-58ef1d0f-272b-4273-82b1-689946254645
Do you really want to remove volume group "ceph-58ef1d0f-272b-4273-82b1-689946254645" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34? [y/n]: y
Logical volume "osd-block-e0efe172-778e-46e1-baa2-cd56408aac34" successfully removed
4. Re-add the host's disk to the new Ceph cluster:
ceph-volume lvm create --data /dev/sdb
5. Check the osd tree and disk layout:
[root@ceph-207 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.07997 root default
-3 0.03508 host ceph-206
0 hdd 0.01559 osd.0 up 1.00000 1.00000
1 hdd 0.01949 osd.1 up 1.00000 1.00000
-5 0.04489 host ceph-207
2 hdd 0.01559 osd.2 up 1.00000 1.00000
[root@ceph-207 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 199G 0 part
├─cl-root 253:0 0 50G 0 lvm /
├─cl-swap 253:1 0 3.9G 0 lvm [SWAP]
└─cl-home 253:2 0 145.1G 0 lvm /home
sdb 8:16 0 16G 0 disk
└─ceph--c221ed63--d87a--4bbd--a503--d8f2ed9e806b-osd--block--530376b8--c7bc--4d64--bc0c--4f8692559562 253:3 0 16G 0 lvm
sr0

3. How to modify the Ceph configuration file
If the generated ceph.conf needs changes, add the parameters and push the file with ceph-deploy. Do not edit /etc/ceph/ceph.conf on an individual node directly; edit ceph.conf on the deploy node and push it, which is both more convenient and safer.
vi /etc/ceph-cluster/ceph.conf
[global]
fsid = f69afe6f-e559-4df7-998a-c5dc3e300209
public_network = 172.26.0.0/16
cluster_network = 10.0.0.0/24
mon_initial_members = ceph-master01, ceph-master02, ceph-master03
mon_host = 172.26.156.217,172.26.156.218,172.26.156.219
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[mon]
mon_clock_drift_allowed = 0.10
mon clock drift warn backoff = 10

(Tested: parameter names work with or without underscores.)
Push the changed config to every machine in the cluster:
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy --overwrite-conf config push ceph-master01 ceph-master02 ceph-master03
Restart the mon on every machine:
root@ceph-master01:~# systemctl restart ceph-mon@ceph-master01.service
root@ceph-master02:~# systemctl restart ceph-mon@ceph-master02.service
root@ceph-master03:~# systemctl restart ceph-mon@ceph-master03.service
Check the cluster status:
cephadmin@ceph-master01:/etc/ceph-cluster# ceph -s
  cluster:
    id:     f69afe6f-e559-4df7-998a-c5dc3e300209
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-master01,ceph-master02,ceph-master03 (age 3m)
    mgr: ceph-master03(active, since 26h), standbys: ceph-master01, ceph-master02
    osd: 9 osds: 9 up (since 26h), 9 in (since 27h)
  data:
    pools:   2 pools, 33 pgs
    objects: 1 objects, 100 MiB
    usage:   370 MiB used, 450 GiB / 450 GiB avail
    pgs:     33 active+clean

Troubleshooting records
1. bash: python2: command not found
Cause: python2.7 is not installed on the ceph-master02 node.
Fix:
cephadmin@ceph-master01:~$ sudo apt install python2.7 -y
cephadmin@ceph-master01:~$ sudo ln -sv /usr/bin/python2.7 /usr/bin/python2

2. [ceph_deploy][ERROR ] RuntimeError: AttributeError: module 'platform' has no attribute 'linux_distribution'
# ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 172.26.0.0/16 master01
Cause:
The deploy OS is Ubuntu 20.04; platform.linux_distribution was removed from Python after 3.7.
Fix:
Modify /usr/lib/python3/dist-packages/ceph_deploy/hosts/remotes.py as follows:
def platform_information(_linux_distribution=None):
    """ detect platform information from remote host """
    distro = release = codename = None
    try:
        linux_distribution = _linux_distribution or platform.linux_distribution
        distro, release, codename = linux_distribution()
    except AttributeError:
        pass

3. apt-cache madison ceph-deploy only offers the old 1.5.38 version
cephadmin@ceph-master01:~$ sudo apt-cache madison ceph-deploy
ceph-deploy | 1.5.38-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | http://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe i386 Packages
cephadmin@ceph-master01:~$

4. RuntimeError: Failed to execute command: /usr/sbin/ceph-volume lvm zap /dev/sdd
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy disk zap ceph-master01 /dev/sdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-master01 /dev/sdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8d53187280>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-master01
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f8d5315d350>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [/dev/sdd]
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph-master01
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-master01][DEBUG ] zeroing last few blocks of device
[ceph-master01][DEBUG ] find the location of an executable
[ceph-master01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph-master01][WARNIN] --> Zapping: /dev/sdd
[ceph-master01][WARNIN] --> Zapping lvm member /dev/sdd. lv_path is /dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357
[ceph-master01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357 bs=1M count=10 conv=fsync
[ceph-master01][WARNIN] stderr: 10+0 records in
[ceph-master01][WARNIN] 10+0 records out
[ceph-master01][WARNIN] stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0201594 s, 520 MB/s
[ceph-master01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] --> RuntimeError: could not complete wipefs on device: /dev/sdd
[ceph-master01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-volume lvm zap /dev/sdd
Cause: the OSD has not been fully removed from the cluster, so the disk cannot be zapped.
Fix:
1. Unmount the disk (it may still be mounted and in use).
2. Fully remove the OSD from the cluster.
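A minimal sketch of step 2 ("fully remove the OSD"), assuming the disk backs osd.3 (the id and mount path are placeholders, not values from this cluster). The `run` wrapper makes this a dry run that only prints the commands; remove it to actually execute them on the admin/OSD node:

```shell
run() { echo "+ $*"; }                          # dry-run: print instead of executing

run ceph osd out osd.3                          # stop data from being placed on it
run systemctl stop ceph-osd@3                   # stop the daemon on the OSD host
run ceph osd purge 3 --yes-i-really-mean-it     # remove from CRUSH map, auth keys and osd map
run umount /var/lib/ceph/osd/ceph-3             # release the mount so zap can succeed
```

`ceph osd purge` (Luminous and later) replaces the older `ceph osd crush remove` / `ceph auth del` / `ceph osd rm` sequence.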
5. Error when zapping a disk: [ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-volume lvm zap /dev/sdd
cephadmin@ceph-master01:/etc/ceph-cluster# ceph-deploy disk zap ceph-master01 /dev/sdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-master01 /dev/sdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8880b7c280>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ceph-master01
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f8880b52350>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/sdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph-master01
[ceph-master01][DEBUG ] connection detected need for sudo
[ceph-master01][DEBUG ] connected to host: ceph-master01
[ceph-master01][DEBUG ] detect platform information from remote host
[ceph-master01][DEBUG ] detect machine type
[ceph-master01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph-master01][DEBUG ] zeroing last few blocks of device
[ceph-master01][DEBUG ] find the location of an executable
[ceph-master01][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph-master01][WARNIN] --> Zapping: /dev/sdd
[ceph-master01][WARNIN] --> Zapping lvm member /dev/sdd. lv_path is /dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357
[ceph-master01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357 bs=1M count=10 conv=fsync
[ceph-master01][WARNIN] stderr: 10+0 records in
[ceph-master01][WARNIN] 10+0 records out
[ceph-master01][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0244706 s, 429 MB/s
[ceph-master01][WARNIN] stderr:
[ceph-master01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
[ceph-master01][WARNIN] --> failed to wipefs device, will try again to workaround probable race condition
[ceph-master01][WARNIN] --> RuntimeError: could not complete wipefs on device: /dev/sdd
[ceph-master01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-volume lvm zap /dev/sdd
Fix:
Run the zap command manually; "Device or resource busy" means the disk is still in use:
cephadmin@ceph-master01:/etc/ceph-cluster# sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
--> Zapping: /dev/sdd
--> Zapping lvm member /dev/sdd. lv_path is /dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357
Running command: /bin/dd if=/dev/zero of=/dev/ceph-657ad072-e5d8-4812-a561-19cac0b02e0c/osd-block-b05d277c-b899-4f81-9d00-e7a8cf46b357 bs=1M count=10 conv=fsync
 stderr: 10+0 records in
10+0 records out
 stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0241062 s, 435 MB/s
--> --destroy was not specified, but zapping a whole device will remove the partition table
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
 stderr: wipefs: error: /dev/sdd: probing initialization failed: Device or resource busy
--> failed to wipefs device, will try again to workaround probable race condition
--> RuntimeError: could not complete wipefs on device: /dev/sdd
Option 1: completely wipe the start of the disk, then reboot:
dd if=/dev/zero of=/dev/sdd bs=512K count=1
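The effect of this dd can be tried safely on a scratch file first (the temp file stands in for the real device; nothing here touches /dev/sdd):

```shell
img=$(mktemp)                                    # scratch file standing in for the disk
printf 'stale-lvm-label' > "$img"                # pretend some old metadata is present
dd if=/dev/zero of="$img" bs=512K count=1 conv=notrunc 2>/dev/null
od -An -tx1 -N 4 "$img"                          # first bytes are now all zero: 00 00 00 00
rm -f "$img"
```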
reboot

Option 2: remove the stale device-mapper mapping with dmsetup:
root@ceph-master01:~# lsblk
sdi 8:128 0 447.1G 0 disk
└─ceph--3511f2c6--2be6--40fd--901d--3b75e433afa5-osd--block--ca994912--f215--4612--97fa--abe33b07985b253:7 0 447.1G 0 lvm
# remove the mapping with dmsetup
root@ceph-master01:~# dmsetup remove ceph--3511f2c6--2be6--40fd--901d--3b75e433afa5-osd--block--ca994912--f215--4612--97fa--abe33b07985b

6. mons are allowing insecure global_id reclaim
If AUTH_INSECURE_GLOBAL_ID_RECLAIM has not raised a health alert and auth_expose_insecure_global_id_reclaim has not been disabled (it is enabled by default), then no clients that still need upgrading are currently connected, and it is safe to disallow insecure global_id reclaim:
ceph config set mon auth_allow_insecure_global_id_reclaim false
# If there are still clients that need upgrading, the alert can be muted temporarily:
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w # 1 week
# Not recommended, but the warning can also be disabled indefinitely:
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false

7. 1 pool(s) do not have an application enabled
ceph -s shows the warning "pool(s) do not have an application enabled".
Cause: ceph health detail reports "application not enabled on pool 'mypool'", meaning no application (rbd, cephfs, rgw, ...) has been set on the mypool pool; enabling one clears the warning:
cephadmin@ceph-master01:~# ceph osd pool application enable mypool rbd
enabled application rbd on pool mypool
cephadmin@ceph-master01:~# ceph -s
  cluster:
    id:     f69afe6f-e559-4df7-998a-c5dc3e300209
    health: HEALTH_WARN
            clock skew detected on mon.ceph-master02

  services:
    mon: 3 daemons, quorum ceph-master01,ceph-master02,ceph-master03 (age 26h)
    mgr: ceph-master03(active, since 26h), standbys: ceph-master01, ceph-master02
    osd: 9 osds: 9 up (since 25h), 9 in (since 26h)

  data:
    pools:   2 pools, 33 pgs
    objects: 1 objects, 100 MiB
    usage:   370 MiB used, 450 GiB / 450 GiB avail
    pgs:     33 active+clean

8. clock skew detected on mon.ceph-master02
Even after running ntpdate ntp.aliyun.com several times to sync the clocks, the skew warning persists.
Cause:
cephadmin@ceph-master01:~# ceph health detail
HEALTH_WARN clock skew detected on mon.ceph-master02
[WRN] MON_CLOCK_SKEW: clock skew detected on mon.ceph-master02
    mon.ceph-master02 clock skew 0.0662306s > max 0.05s (latency 0.00108482s)
The warning is raised whenever the clock offset between monitors exceeds 0.05 s (the default).
Fix:
Raise the default thresholds:
mon clock drift allowed      # clock drift allowed between monitors
mon clock drift warn backoff # exponential backoff for clock-skew warnings
cephadmin@ceph-master01:/etc/ceph-cluster# vi /etc/ceph/ceph.conf
[mon]
mon_clock_drift_allowed = 0.10
mon_clock_drift_warn_backoff = 10

# restart all mon nodes
root@ceph-master01:~# systemctl restart ceph-mon@ceph-master01.service
root@ceph-master02:~# systemctl restart ceph-mon@ceph-master02.service
root@ceph-master03:~# systemctl restart ceph-mon@ceph-master03.service

9. Old kernels cannot map ceph rbd block storage
Environment:
ceph 16.2.10
client ceph-common 15.2.17
client kernel 3.10.0-862
client OS CentOS 7.5
Every article tells you: "on an old kernel, an rbd image can be mapped as long as it is created with only the layering feature enabled":
rbd create myimg2 --size 3G --pool myrbd1 --image-format 2 --image-feature layering
But "old kernel" itself covers a range, and on a very old kernel the map command still fails:
rbd -p myrbd1 map myimg2
CentOS 7.5 with its stock, unupgraded kernel 3.10.0-862: does not work.
CentOS 7.7 with its stock, unupgraded kernel 3.10.0-1062.4.3: works.
Both are 3.10 kernels, but the build number must be higher than 3.10.0-862.
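The two kernel releases can be compared with a GNU version sort, which orders the build numbers numerically even though both read as "3.10":

```shell
bad=3.10.0-862       # CentOS 7.5 stock kernel: rbd map fails
ok=3.10.0-1062.4.3   # CentOS 7.7 stock kernel: rbd map works
printf '%s\n%s\n' "$ok" "$bad" | sort -V   # prints 3.10.0-862 first (the older one)
```

The same comparison against `uname -r` gives a quick way to predict whether `rbd map` will work on a given client.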
10. 1 filesystem is online with fewer MDS than max_mds
Cause:
The MDS service has not been created and started.
Fix:
ceph-deploy mds create ceph-master01 ceph-master02 ceph-master03