Using Ceph RBD and CephFS

1 User permission management and authorization workflow

The user management features let a Ceph cluster administrator create, update, and delete users directly in the Ceph cluster. A user's key and capabilities are stored in a keyring file, and any node holding that file has the corresponding access to the cluster, so the file plays a role similar to /etc/passwd on a Linux system.

1.1 List users

[ceph@ceph-deploy ceph-cluster]$ ceph auth list 
installed auth entries: 

mds.ceph-mgr1 
	key: AQCOKqJfXRvWFhAAVCdkr5uQr+5tNjrIRcZhSQ== 
    caps: [mds] allow 
    caps: [mon] allow profile mds 
    caps: [osd] allow rwx 
osd.0 
	key: AQAhE6Jf74HbEBAA/6PS57YKAyj9Uy8rNRb1BA== 
    caps: [mgr] allow profile osd 
    caps: [mon] allow profile osd
client.admin
	key: AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
	caps: [mds] allow * 
	caps: [mgr] allow * 
	caps: [mon] allow * 
	caps: [osd] allow *

Note: the TYPE.ID notation

Users are referred to with TYPE.ID notation: for example, osd.0 denotes a user (node) of type osd with ID 0, and client.admin is a user of type client whose ID is admin. Also note that each entry contains a key entry and one or more caps entries.

The -o <filename> option can be combined with ceph auth list to save the output to a file.

[ceph@ceph-deploy ceph-cluster]$ ceph auth list -o 123.key

1.2 User management

Adding a user creates the user name (TYPE.ID), a secret key, and any capabilities included in the command used to create the user. A user authenticates to the Ceph storage cluster with its key, and its capabilities grant it the ability to read, write, or execute on Ceph monitors (mon), Ceph OSDs (osd), or Ceph metadata servers (mds). The following commands can be used to add a user:

1.2.1 ceph auth add

This command is the canonical way to add a user. It creates the user, generates a key, and adds all specified capabilities.

[ceph@ceph-deploy ceph-cluster]$ ceph auth -h 
auth add <entity> {<caps> [<caps>...]} 

#add an authentication key: 
[ceph@ceph-deploy ceph-cluster]$ ceph auth add client.tom mon 'allow r' osd 'allow rwx pool=mypool'
added key for client.tom 

#verify the key 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.tom 
exported keyring for client.tom 

[client.tom] 
	key = AQCErsdftuumLBAADUiAfQUI42ZlX1e/4PjpdA== 
	caps mon = "allow r" 
	caps osd = "allow rwx pool=mypool"

1.2.3 ceph auth get-or-create

ceph auth get-or-create is one of the most common ways to create a user. It returns a keyring containing the user name (in square brackets) and the key. If the user already exists, the command simply returns the user name and key in keyring-file format. The -o <filename> option can also be used to save the output to a file.

#create the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
	
#verify the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
	caps mon = "allow r" 
	caps osd = "allow rwx pool=mypool" 

#create the same user again (the existing key is returned) 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ==

1.2.4 ceph auth get-or-create-key

This command creates a user and returns only the user's key, which is useful for clients that need nothing but the key (for example libvirt). If the user already exists, the command simply returns the key. The -o <filename> option can be used to save the output to a file.

When creating a client user, you can create one with no capabilities at all. Such a user can authenticate but cannot do anything else, and cannot even retrieve the cluster map from the monitors. A capability-less user is still useful if you intend to add capabilities later with the ceph auth caps command.

A typical user has at least read capability on the Ceph monitors and read/write capability on the Ceph OSDs. In addition, a user's OSD permissions are usually restricted to specific storage pools.
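
A minimal hedged sketch of the capability-less workflow described above (the user name client.test is hypothetical): create the user without capabilities first, then grant them later with ceph auth caps.

#create a user that has no capabilities yet 
$ ceph auth get-or-create client.test 

#grant capabilities afterwards (this is the full, final set, since caps are overwritten) 
$ ceph auth caps client.test mon 'allow r' osd 'allow rw pool=mypool' 

#verify 
$ ceph auth get client.test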

[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create-key client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== # if the user already has a key it is shown; otherwise the user is created

1.2.5 ceph auth print-key

Retrieve only the key of a single specified user.

[ceph@ceph-deploy ceph-cluster]$ ceph auth print-key client.jack

AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ==

1.2.6 Modify user capabilities

The ceph auth caps command specifies a user and changes that user's capabilities. Setting new capabilities completely overwrites the current ones, so the command must include both the capabilities the user already has and the new ones. To view the current capabilities, run ceph auth get USERTYPE.USERID. When adding capabilities, the existing capabilities must therefore be specified again.
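
The general form of the command is roughly as follows (per the upstream documentation; the capability strings are placeholders):

ceph auth caps USERTYPE.USERID {daemon} 'allow [r|w|x|*] [pool={pool-name}]' [{daemon} 'allow ...']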

For example:

#view the user's current capabilities 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
    key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
    caps mon = "allow r" 
    caps osd = "allow rwx pool=mypool"
    
#modify the user's capabilities 
[ceph@ceph-deploy ceph-cluster]$ ceph auth caps client.jack mon 'allow r' osd 'allow rw pool=mypool' 
updated caps for client.jack    

#verify the capabilities again 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
    key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
    caps mon = "allow r" 
    caps osd = "allow rw pool=mypool"

1.2.7 Delete a user

To delete a user, use ceph auth del TYPE.ID, where TYPE is one of client, osd, mon, or mds, and ID is the user name or the daemon's ID.

[ceph@ceph-deploy ceph-cluster]$ ceph auth del client.tom

updated

1.3 Keyring management

A Ceph keyring is a file (a collection file) that stores secrets, keys, and certificates and lets a client authenticate to and access the Ceph cluster. A keyring file can hold one or more sets of credentials; each key has an entity name plus capabilities, of the form:

{client、mon、mds、osd}.name

When a client accesses the Ceph cluster, Ceph looks for keyrings in the following four preset keyring file locations:

/etc/ceph/<$cluster name>.<user $type>.<user $id>.keyring #keyring for a single user 
/etc/ceph/<$cluster name>.keyring #keyring holding multiple users 
/etc/ceph/keyring #keyring for multiple users when no cluster name is defined 
/etc/ceph/keyring.bin #compiled binary keyring file
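
As a hedged illustration (the admin keyring path is the deployment default used elsewhere in this document), a client can also be pointed at a keyring explicitly instead of relying on these search paths:

$ ceph --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring -s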

1.3.1 Back up and restore users via keyring files

For a user added with commands such as ceph auth add, the ceph-authtool command is additionally needed to create the user's keyring file.

Command format for creating a keyring file:

ceph-authtool --create-keyring FILE

1.3.1.1 Export user credentials to a keyring file

Export the user's information to a keyring file to back it up.

#create the user: 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.user1 mon 'allow r' osd 'allow * pool=mypool' 
[client.user1] 
	key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
	
#verify the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 
exported keyring for client.user1 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool" 

#create the keyring file: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool --create-keyring ceph.client.user1.keyring 

#verify the keyring file: 

[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring 

#it is an empty file 
[ceph@ceph-deploy ceph-cluster]$ file ceph.client.user1.keyring
ceph.client.user1.keyring: empty

#export the keyring to the specified file 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 -o ceph.client.user1.keyring 
exported keyring for client.user1

#verify the specified user's keyring file: 
[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"    

When creating a keyring that contains a single user, it is recommended to name it with the Ceph cluster name, the user type, the user name, and the keyring suffix, and to save it in the /etc/ceph directory; for example, ceph.client.user1.keyring for the client.user1 user.
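
A hedged sketch of that convention on a client node (paths assumed, run as root):

$ sudo cp ceph.client.user1.keyring /etc/ceph/ 
$ sudo chmod 600 /etc/ceph/ceph.client.user1.keyring   #keep the key unreadable to other users 
$ ceph --user user1 -s                                 #the keyring is now found automatically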

1.3.1.2 Restore user credentials from a keyring file

You can use ceph auth import -i with a keyring file to import it back into Ceph; in effect this provides user backup and restore:

[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring #verify the user's credential file 
[client.user1] 
    key = AQAKkgthpbdlIxAABO28D3eK5hTxRfx7Omhquw== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool" 

[ceph@ceph-deploy ceph-cluster]$ ceph auth del client.user1 #simulate accidental deletion of the user 
Updated

[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 #confirm the user has been deleted 
Error ENOENT: failed to find client.user1 in keyring

[ceph@ceph-deploy ceph-cluster]$ ceph auth import -i ceph.client.user1.keyring #import the user 
imported keyring

[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 #verify the user has been restored 
exported keyring for client.user1 
[client.user1] 
    key = AQAKkgthpbdlIxAABO28D3eK5hTxRfx7Omhquw== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"

1.3.2 Multiple users in one keyring file

A single keyring file can contain the credentials of multiple different users.

#create the keyring file: 
$ ceph-authtool --create-keyring ceph.client.user.keyring #create an empty keyring file 
creating ceph.client.user.keyring

#import the contents of the admin user's keyring into the user keyring file: 
$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.admin.keyring 
importing contents of ./ceph.client.admin.keyring into ./ceph.client.user.keyring

#verify the keyring file: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool -l ./ceph.client.user.keyring 
[client.admin] 
    key = AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
    caps mds = "allow *" 
    caps mgr = "allow *" 
    caps mon = "allow *" 
    caps osd = "allow *"

#import another user's keyring as well:
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.user1.keyring 
importing contents of ./ceph.client.user1.keyring into ./ceph.client.user.keyring

#verify again that the keyring file now contains credentials for multiple users: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool -l ./ceph.client.user.keyring 
[client.admin] 
    key = AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
    caps mds = "allow *" 
    caps mgr = "allow *" 
    caps mon = "allow *" 
    caps osd = "allow *" 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"


2 Mounting RBD and CephFS as a non-admin user

2.1 Create a storage pool

#create the storage pool: 
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd-data1 32 32
pool 'rbd-data1' created 

#verify the storage pool: 
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
myrbd1
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
cephfs-metadata
cephfs-data
rbd-data1

#enable the rbd application on the storage pool (command usage from the help output): 
#  osd pool application enable <poolname> <app> {--yes-i-really-mean-it}   enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname> 

magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd-data1 rbd
enabled application 'rbd' on pool 'rbd-data1'

#initialize rbd: 
magedu@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd-data1

2.2 Create images

An RBD pool cannot be used as a block device directly; images must first be created in it as needed, and the image file is what gets used as the block device. The rbd command creates, lists, and removes the images stored in a pool, and also handles management operations such as cloning images, creating snapshots, rolling an image back to a snapshot, and viewing snapshots. For example, the commands below create images in the specified RBD pool rbd-data1:

2.2.1 Create the images

#create two images: 
$ rbd create data-img1 --size 3G --pool rbd-data1 --image-format 2 --image-feature layering 
$ rbd create data-img2 --size 5G --pool rbd-data1 --image-format 2 --image-feature layering

#verify the images: 
$ rbd ls --pool rbd-data1 
data-img1 
data-img2

#list the images with more detail: 
$ rbd ls --pool rbd-data1 -l 
NAME SIZE PARENT FMT PROT LOCK 
data-img1 3 GiB 2 
data-img2 5 GiB 2

2.2.2 View image details

magedu@ceph-deploy:~/ceph-cluster$ rbd --image data-img2 --pool rbd-data1 info
rbd image 'data-img2':
	size 3 GiB in 768 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 121e429921010
	block_name_prefix: rbd_data.121e429921010
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Sun Aug 29 20:31:03 2021
	access_timestamp: Sun Aug 29 20:31:03 2021
	modify_timestamp: Sun Aug 29 20:31:03 2021
	
$ rbd --image data-img1 --pool rbd-data1 info

2.2.3 Display in JSON format

magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-data1 -l --format json --pretty-format
[
    {
        "image": "data-img1",
        "id": "121e1146bfbda",
        "size": 3221225472,
        "format": 2
    },
    {
        "image": "data-img2",
        "id": "121e429921010",
        "size": 3221225472,
        "format": 2
    }
]

2.2.4 Other image features

#Feature overview 
layering: layered snapshot support, used for snapshots and copy-on-write. An image can be snapshotted and the snapshot protected, then new images can be cloned from it; parent and child images use COW and share object data. 

striping: striping v2 support, similar to RAID 0, except that in Ceph the data is spread across different objects; it can improve performance for workloads with many sequential reads and writes. 

exclusive-lock: exclusive lock support, restricting an image to a single client at a time. 

object-map: object map support (requires exclusive-lock). It speeds up data import/export and used-space accounting; when enabled, a bitmap of all the image's objects is kept to record whether each object actually exists, which can accelerate I/O in some scenarios. 

fast-diff: fast computation of differences between an image and its snapshots (requires object-map). 

deep-flatten: snapshot flattening support, used to resolve snapshot dependencies during snapshot management. 

journaling: records modifications to a journal so that data can be recovered from the journal (requires exclusive-lock); enabling this feature increases disk I/O.

Features enabled by default since Jewel: layering/exclusive-lock/object-map/fast-diff/deep-flatten
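
As a hedged sketch (the image name data-img3 is hypothetical), several of these features can also be enabled in one comma-separated list at creation time; whether they are usable still depends on client kernel support, as noted in the next section:

$ rbd create data-img3 --size 3G --pool rbd-data1 --image-format 2 --image-feature layering,exclusive-lock,object-map,fast-diff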

2.2.5 Enabling image features

If the client kernel does not support a feature, the image cannot be mapped and mounted.

#enable features on the specified image in the specified pool: 
$ rbd feature enable exclusive-lock --pool rbd-data1 --image data-img1 
$ rbd feature enable object-map --pool rbd-data1 --image data-img1
$ rbd feature enable fast-diff --pool rbd-data1 --image data-img1

#verify the image features: 
$ rbd --image data-img1 --pool rbd-data1 info

2.2.6 Disabling image features

#disable a feature on the specified image in the specified pool: 
$ rbd feature disable fast-diff --pool rbd-data1 --image data-img1

#verify the image features: 
$ rbd --image data-img1 --pool rbd-data1 info

2.3 Mount and use RBD from a client with a non-admin account

Test mounting and using RBD from a client with a non-admin account.

2.3.1 Create a non-admin user and grant permissions

#create a non-admin account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth add client.shijie mon 'allow r' osd 'allow rwx pool=rbd-data1'
added key for client.shijie

#verify the user information 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.shijie
[client.shijie]
	key = AQCwfithlAyDEBAAG6dylI+XDcJ+21jcKMNtZQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=rbd-data1"
exported keyring for client.shijie
    
#create the user's keyring file 
magedu@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.shijie.keyring
creating ceph.client.shijie.keyring

#export the user's keyring 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.shijie -o ceph.client.shijie.keyring
exported keyring for client.shijie

#verify the specified user's keyring file 
magedu@ceph-deploy:~/ceph-cluster$ cat ceph.client.shijie.keyring
[client.shijie]
	key = AQCwfithlAyDEBAAG6dylI+XDcJ+21jcKMNtZQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=rbd-data1"

2.3.2 Install ceph-common

Ubuntu:
~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add - 
~# vim /etc/apt/sources.list 
~# apt install ceph-common

CentOS: 
[root@ceph-client2 ~]# yum install epel-release 
[root@ceph-client2 ~]# yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm 
[root@ceph-client2 ~]# yum install ceph-common

2.3.3 Copy the non-admin user's credential files to the client

magedu@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.shijie.keyring root@192.168.43.102:/etc/ceph/
The authenticity of host '192.168.43.102 (192.168.43.102)' can't be established.
ECDSA key fingerprint is SHA256:2lyoHBpFm5neq9RephfU/qVeXv9j/KGbyeJERycOFAU.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.43.102' (ECDSA) to the list of known hosts.
root@192.168.43.102's password: 
ceph.conf                                                                                     100%  266   549.1KB/s   00:00    
ceph.client.shijie.keyring                                                                    100%  125   448.4KB/s   00:00 

2.3.4 Verify permissions on the client

root@ceph-mon2:~# cd /etc/ceph/ 
root@ceph-mon2:/etc/ceph# ls
ceph.client.admin.keyring  ceph.client.shijie.keyring  ceph.conf  rbdmap  tmpsNT_hI
root@ceph-mon2:/etc/ceph# ceph --user shijie -s #without --user, the admin account would be used by default

2.3.5 Map the RBD image

Map the RBD image with the non-admin user's permissions.

#map the rbd image 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 map data-img1
/dev/rbd0

#verify the rbd device 
root@ceph-mon2:/etc/ceph# fdisk -l /dev/rbd0
root@ceph-mon2:/etc/ceph# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk 
└─sda1   8:1    0  200G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0    3G  0 disk 

2.3.6 Format and use the RBD image

root@ceph-mon2:/etc/ceph# mkfs.ext4 /dev/rbd0 
root@ceph-mon2:/etc/ceph# mkdir /data 
root@ceph-mon2:/etc/ceph#  mount /dev/rbd0 /data/
root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       2.9G  9.0M  2.8G   1% /data   ###mounted successfully

2.3.7 Verify the Ceph kernel module

After an RBD image is mapped, the kernel automatically loads the libceph.ko module.

root@ceph-mon2:/etc/ceph# lsmod |grep ceph
libceph               315392  1 rbd
libcrc32c              16384  1 libceph
root@ceph-mon2:/etc/ceph# modinfo libceph
filename:       /lib/modules/4.15.0-154-generic/kernel/net/ceph/libceph.ko
license:        GPL
description:    Ceph core library
author:         Patience Warnick <patience@newdream.net>
author:         Yehuda Sadeh <yehuda@hq.newdream.net>
author:         Sage Weil <sage@newdream.net>
srcversion:     89A5EF37D4AA2C7E073D35B
depends:        libcrc32c
retpoline:      Y
intree:         Y
name:           libceph
vermagic:       4.15.0-154-generic SMP mod_unload modversions 
signat:         PKCS#7
signer:         
sig_key:        
sig_hashalgo:   md4

2.3.8 Online resize of an RBD image

#on the admin node, resize the rbd image
magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-data1
data-img1
data-img2
magedu@ceph-deploy:~/ceph-cluster$ rbd resize --pool rbd-data1 --size 10240 data-img1
Resizing image: 100% complete...done.

#confirm on the client
root@ceph-mon2:/etc/ceph# blockdev --getsize64 /dev/rbd0
10737418240
root@ceph-mon2:/etc/ceph# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk 
└─sda1   8:1    0  200G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0   10G  0 disk /data  ##the block device has been resized successfully
root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       2.9G  9.0M  2.8G   1% /data ##the filesystem still shows 3G

#grow the filesystem to pick up the new size
root@ceph-mon2:/etc/ceph# resize2fs /dev/rbd0
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/rbd0 is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/rbd0 is now 2621440 (4k) blocks long.

root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       9.8G   14M  9.4G   1% /data

#This method only works for block devices formatted with ext4. For XFS, run xfs_growfs instead, as shown below: 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 map data-img2
/dev/rbd1
root@ceph-mon2:/etc/ceph# mkfs.xfs /dev/rbd1
root@ceph-mon2:/etc/ceph# mkdir /data1
root@ceph-mon2:/etc/ceph# mount /dev/rbd1 /data1
root@ceph-mon2:/etc/ceph# df -h
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       3.0G   36M  3.0G   2% /data1
magedu@ceph-deploy:~/ceph-cluster$ rbd resize --pool rbd-data1 --size 5120 data-img2
root@ceph-mon2:/etc/ceph# lsblk 
rbd0   252:0    0   10G  0 disk /data
rbd1   252:16   0    5G  0 disk /data1
root@ceph-mon2:/etc/ceph# xfs_growfs /dev/rbd1
root@ceph-mon2:/etc/ceph# df -h
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       5.0G   39M  5.0G   1% /data1 ##resized successfully

2.3.9 Mount automatically at boot

root@ceph-mon2:/etc/ceph# cat /etc/rc.d/rc.local 
rbd --user shijie -p rbd-data1 map data-img1 
mount /dev/rbd0 /data/

root@ceph-mon2:/etc/ceph# chmod a+x /etc/rc.d/rc.local 
root@ceph-mon2:/etc/ceph# reboot

#view the mappings 
root@ceph-mon2:/etc/ceph#  rbd showmapped 
id  pool       namespace  image      snap  device   
0   rbd-data1             data-img1  -     /dev/rbd0
1   rbd-data1             data-img2  -     /dev/rbd1

#verify the mounts 
root@ceph-mon2:/etc/ceph# df -TH
/dev/rbd0      ext4       11G   15M   11G   1% /data
/dev/rbd1      xfs       5.4G   40M  5.4G   1% /data1
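
As a hedged alternative sketch (pool, image, and user names reuse the ones above), the rbdmap service shipped with ceph-common can map images at boot instead of rc.local, with the mount going into /etc/fstab against the /dev/rbd/<pool>/<image> device path:

#/etc/ceph/rbdmap 
rbd-data1/data-img1    id=shijie,keyring=/etc/ceph/ceph.client.shijie.keyring 

#/etc/fstab (noauto: let the rbdmap service mount it after the image is mapped) 
/dev/rbd/rbd-data1/data-img1    /data    ext4    defaults,noauto,_netdev    0 0 

#enable the service so it runs at boot 
systemctl enable --now rbdmap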

2.3.10 Unmount and unmap the RBD images

root@ceph-mon2:/etc/ceph# umount /data 
root@ceph-mon2:/etc/ceph# umount /data1 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 unmap data-img1
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 unmap data-img2

Once an image is deleted its data is deleted as well and cannot be recovered, so be very careful with delete operations (a safer alternative is sketched below).
#delete the data-img1 image from the rbd-data1 pool: 
magedu@ceph-deploy:~/ceph-cluster$ rbd rm --pool rbd-data1 --image data-img1 
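
A hedged safety-net sketch: instead of deleting outright, newer rbd releases can move an image to the pool's trash first, where it can still be restored until the trash is purged (the <image-id> placeholder is whatever rbd trash ls prints):

$ rbd trash mv rbd-data1/data-img1        #move the image to the trash instead of deleting it 
$ rbd trash ls rbd-data1                  #list trashed images and their ids 
$ rbd trash restore rbd-data1/<image-id>  #restore the image while it is still in the trash 
$ rbd trash purge rbd-data1               #permanently delete everything in the trash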

2.4 Mount CephFS as a non-admin user

CephFS requires the Metadata Server (MDS) service, whose daemon is ceph-mds. The ceph-mds process manages the metadata of the files stored on CephFS and coordinates access to the Ceph storage cluster.

2.4.1 Deploy ceph-mds

root@ceph-mgr1:~# apt install ceph-mds

2.4.2 Create the CephFS metadata and data pools

Before CephFS can be used, a filesystem must be created in the cluster, with separate pools assigned for its metadata and its data. Below we create a filesystem named mycephfs for testing, using cephfs-metadata as the metadata pool and cephfs-data as the data pool:

magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
magedu@ceph-deploy:~/ceph-cluster$ ceph -s
magedu@ceph-deploy:~/ceph-cluster$ ceph fs new mycephfs cephfs-metadata cephfs-data  #create a filesystem named mycephfs
new fs with metadata pool 7 and data pool 8

magedu@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
magedu@ceph-deploy:~/ceph-cluster$ ceph fs status mycephfs ##check the status of the specified CephFS
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon2  Reqs:    0 /s    14     13     12      0   
 1    active  ceph-mon1  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   252k  32.6G  
  cephfs-data      data       0   21.7G  
STANDBY MDS  
 ceph-mgr1   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

2.4.3 Verify the CephFS service status

magedu@ceph-deploy:~/ceph-cluster$ ceph mds stat 
mycephfs:2 {0=ceph-mon2=up:active,1=ceph-mon1=up:active} 1 up:standby
#the MDS daemons are now in the active state

2.4.4 Create a client account

#create the account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth add client.yanyan mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.yanyan

#verify the account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.yanyan 
[client.yanyan]
	key = AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.yanyan
	
#export the user to a keyring file 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.yanyan -o ceph.client.yanyan.keyring
exported keyring for client.yanyan

#create a key file: 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.yanyan > yanyan.key

#verify the user's keyring file 
magedu@ceph-deploy:~/ceph-cluster$ cat ceph.client.yanyan.keyring
[client.yanyan]
	key = AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"

2.4.5 Copy the client credential files

magedu@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.yanyan.keyring yanyan.key root@192.168.43.102:/etc/ceph/
root@192.168.43.102's password: 
ceph.conf                                                                                     100%  266   808.6KB/s   00:00    
ceph.client.yanyan.keyring                                                                    100%  150   409.1KB/s   00:00    
yanyan.key                                                                                    100%   40   112.7KB/s   00:00 

2.4.6 Verify permissions on the client

# the ceph client tools must be installed on the client first (apt install ceph-common) 
root@ceph-mon2:/etc/ceph# ceph --user yanyan -s 
  cluster:
    id:     cce50457-e522-4841-9986-a09beefb2d65
    health: HEALTH_WARN
            1/3 mons down, quorum ceph-mon1,ceph-mon2
            Degraded data redundancy: 290/870 objects degraded (33.333%), 97 pgs degraded, 297 pgs undersized
            47 pgs not deep-scrubbed in time
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2 (age 64m), out of quorum: ceph-mon3
    mgr: ceph-mgr1(active, since 64m)
    mds: 2/2 daemons up, 1 standby
    osd: 7 osds: 5 up (since 64m), 5 in (since 7d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 297 pgs
    objects: 290 objects, 98 MiB
    usage:   515 MiB used, 49 GiB / 50 GiB avail
    pgs:     290/870 objects degraded (33.333%)
             200 active+undersized
             97  active+undersized+degraded

2.4.7 Mount CephFS from kernel space

There are two ways to mount on the client: kernel space and user space. A kernel-space mount requires kernel support for the ceph module, while a user-space mount requires ceph-fuse.
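
A hedged sketch of the user-space (ceph-fuse) variant, reusing the yanyan credentials created above; the mount point /data3 is just an example:

apt install ceph-fuse 
mkdir /data3 
ceph-fuse --name client.yanyan -m 192.168.43.101:6789,192.168.43.102:6789 /data3 
df -h /data3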

2.4.7.1 Mount on the client with a secret file

root@ceph-mon2:/etc/ceph# mount -t ceph 192.168.43.101:6789,192.168.43.102:6789:/ /data2 -o name=yanyan,secretfile=/etc/ceph/yanyan.key
root@ceph-mon2:/etc/ceph# df -TH
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       24G     0   24G   0% /data2
#verify that data can be written 
root@ceph-mon2:/etc/ceph# cp /etc/issue /data2/ 
root@ceph-mon2:/etc/ceph# dd if=/dev/zero of=/data2/testfile bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.573734 s, 366 MB/s

2.4.7.2 Mount on the client with the key itself

root@ceph-mon2:/data2# tail /etc/ceph/yanyan.key 
AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==

root@ceph-mon2:/# umount /data2
root@ceph-mon2:/# mount -t ceph 192.168.43.101:6789,192.168.43.102:6789:/ /data2 -o name=yanyan,secret=AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
root@ceph-mon2:/# cd /data2
root@ceph-mon2:/data2# ls
issue  testfile

root@ceph-mon2:/data2# df -TH
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       21G  6.5G   14G  32% /data2

2.4.7.3 Mount at boot

root@ceph-mon2:/# cat /etc/fstab
192.168.43.101:6789,192.168.43.102:6789:/ /data2 ceph defaults,name=yanyan,secretfile=/etc/ceph/yanyan.key,_netdev 0 0
#the IPs are the mon addresses; be sure to add _netdev so the mount waits for the network

root@ceph-mon2:/# umount /data2
root@ceph-mon2:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       5.0G   39M  5.0G   1% /data1
root@ceph-mon2:/# mount -a
root@ceph-mon2:/# df -TH
Filesystem                                Type      Size  Used Avail Use% Mounted on
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       21G  6.5G   14G  32% /data2

3 MDS high availability

3.1 Current MDS server status

[ceph@ceph-deploy ceph-cluster]$ ceph mds stat 
mycephfs-1/1/1 up {0=ceph-mgr1=up:active}

3.2 Add MDS servers

Add ceph-mgr1, ceph-mon1, and ceph-mon2 to the Ceph cluster in the MDS role, ending up with a two-active/one-standby MDS layout for high availability and performance.

#install the ceph-mds service on the mds servers 
[root@ceph-mon1 ~]# yum install ceph-mds -y 
[root@ceph-mon2 ~]# yum install ceph-mds -y 

#add the mds servers 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr1 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon1 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon2

#verify the current mds server status: 
magedu@ceph-deploy:~/ceph-cluster$ ceph mds stat 
mycephfs:2 {0=ceph-mon2=up:active} 2 up:standby

3.3 Verify the current cluster state

Currently one MDS server is active and two are on standby.

magedu@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon2  Reqs:    0 /s    16     15     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  1148k  19.2G  
  cephfs-data      data    12.0G  19.2G  
STANDBY MDS 
 ceph-mon1
 ceph-mgr1   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.4 Current filesystem state

magedu@ceph-deploy:~/ceph-cluster$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name	mycephfs
epoch	12
flags	12
created	2021-08-22T11:43:04.596564+0800
modified	2021-08-22T13:40:18.974219+0800
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	252
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=54786}
failed	
damaged	
stopped	
data_pools	[8]
metadata_pool	7
inline_data	disabled
balancer	
standby_count_wanted	1
[mds.ceph-mgr1{0:54786} state up:active seq 1868 addr [v2:192.168.43.104:6800/1237850653,v1:192.168.43.104:6801/1237850653]]

3.5 Set the number of active MDS daemons

There are currently three MDS servers, with one active and two standby; the layout can be optimized to two active and one standby.

magedu@ceph-deploy:~/ceph-cluster$ ceph fs set mycephfs max_mds 2
magedu@ceph-deploy:~/ceph-cluster$ ceph fs status #the maximum number of simultaneously active mds daemons is now 2
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr1  Reqs:    0 /s    14     13     12      0   
 1    active  ceph-mon1  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   163k  32.8G  
  cephfs-data      data       0   21.9G  
STANDBY MDS  
 ceph-mon2   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.6 MDS high-availability tuning

A standby MDS can be assigned to a specific active MDS through configuration:

[ceph@ceph-deploy ceph-cluster]$ vim ceph.conf 
[global] 
fsid = 23b0f9f2-8db3-477f-99a7-35a90eaf3dab 
public_network = 172.31.0.0/21 
cluster_network = 192.168.0.0/21 
mon_initial_members = ceph-mon1 
mon_host = 172.31.6.104 
auth_cluster_required = cephx 
auth_service_required = cephx 
auth_client_required = cephx 

mon clock drift allowed = 2 
mon clock drift warn backoff = 30 

[mds.ceph-mgr2] 
#mds_standby_for_fscid = mycephfs 
mds_standby_for_name = ceph-mgr1 
mds_standby_replay = true 

[mds.ceph-mon3] 
mds_standby_for_name = ceph-mon2 
mds_standby_replay = true

3.7 Push the configuration file and restart the MDS services

#push the configuration file so it takes effect when each mds service restarts 
$ ceph-deploy --overwrite-conf config push ceph-mon1 
$ ceph-deploy --overwrite-conf config push ceph-mon2 
$ ceph-deploy --overwrite-conf config push ceph-mgr1 
 
[root@ceph-mon1 ~]# systemctl restart ceph-mds@ceph-mon1.service 
[root@ceph-mon2 ~]# systemctl restart ceph-mds@ceph-mon2.service 
[root@ceph-mgr1 ~]# systemctl restart ceph-mds@ceph-mgr1.service 

4 Using Ceph RGW

Ceph RGW uses buckets as storage containers to store object data and to isolate users. Data is stored in buckets and user permissions are granted per bucket, so different users can be given different permissions on different buckets to implement access management.

Bucket characteristics:

  • A bucket (storage space) is the container you use to store objects; every object must belong to a bucket. Bucket attributes such as region, access control, and lifecycle can be set and modified, and they apply to all objects in that bucket, so you can flexibly create different buckets to implement different management policies.
  • The inside of a bucket is flat: there are no filesystem concepts such as directories, and every object belongs directly to its bucket.
  • Each user can own multiple buckets.
  • A bucket name must be globally unique within the object storage service and cannot be changed once created.
  • There is no limit on the number of objects inside a bucket.

4.1 Deploy the radosgw service

apt install radosgw      #install on the rgw node (ceph-mgr1)
#run on the ceph-deploy node: 
ceph-deploy rgw create ceph-mgr1
magedu@ceph-deploy:~/ceph-cluster$ sudo curl http://192.168.43.104:7480/  #ceph-mgr1's IP address, port 7480

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

4.2 Create a user and grant access

magedu@ceph-deploy:~/ceph-cluster$ radosgw-admin  user create --uid="user1" --display-name="user1"

{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "3PDRWUWJ8ML5G4CQ0XXK",
            "secret_key": "ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

The output above shows the access_key and secret_key, as well as the bucket and user quota settings.

  • 1. radosgw-admin user modify: modify user information;
  • 2. radosgw-admin user rm: delete a user;
  • 3. radosgw-admin user enable / radosgw-admin user suspend: enable and suspend a user.

The user has now been created, so we can configure s3cmd to access the cluster; access requires the RGW endpoint's domain name. In production it is best to set up DNS resolution; for this test we simply add an entry to the hosts file:

Note: the cluster has multiple radosgw instances and pointing at any one of them works; in production you should point at the radosgw VIP address instead.

Install the s3cmd tool:

root@ceph-mon2:/# apt install s3cmd -y
# view the access user
root@ceph-mon2:/# radosgw-admin user info --uid user1
{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "3PDRWUWJ8ML5G4CQ0XXK",
            "secret_key": "ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

# configure s3cmd
root@ceph-mon2:/# echo "192.168.43.104 rgw.zOukun.com" >> /etc/hosts
s3cmd --configure #provide the following values
Access Key: 3PDRWUWJ8ML5G4CQ0XXK
Secret Key: ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE
S3 Endpoint [s3.amazonaws.com]: 192.168.43.104:7480
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.43.104:7480/%(bucket)s
Use HTTPS protocol [Yes]: False
Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
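
The interactive run writes the answers into ~/.s3cfg; a hedged excerpt of the relevant keys looks roughly like this:

# ~/.s3cfg (excerpt) 
access_key = 3PDRWUWJ8ML5G4CQ0XXK 
secret_key = ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE 
host_base = 192.168.43.104:7480 
host_bucket = 192.168.43.104:7480/%(bucket)s 
use_https = False 
signature_v2 = False   # flipped to True below to fix the 403 error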

Create a bucket:

root@ceph-mon2:/# s3cmd mb s3://z0ukun-rgw-bucket
ERROR: S3 error: 403 (SignatureDoesNotMatch)
This is a signature-version issue; enabling the v2 signature in the s3cmd configuration fixes it:
root@ceph-mon2:/# sed -i '/signature_v2/s/False/True/g' root/.s3cfg
root@ceph-mon2:/# s3cmd mb s3://z0ukun-rgw-bucket
Bucket 's3://z0ukun-rgw-bucket/' created
root@ceph-mon2:/# s3cmd ls
2021-08-29 15:17  s3://z0ukun-rgw-bucket

Upload data:

# upload a file
s3cmd put /etc/fstab s3://z0ukun-rgw-bucket/fstab

# view the file details
s3cmd ls s3://z0ukun-rgw-bucket
s3cmd info s3://z0ukun-rgw-bucket

# download the file
s3cmd get s3://z0ukun-rgw-bucket/fstab test-fstab

root@ceph-mon2:~# s3cmd get s3://z0ukun-rgw-bucket/fstab test-fstab
download: 's3://z0ukun-rgw-bucket/fstab' -> 'test-fstab'  [1 of 1]
 669 of 669   100% in    0s   159.27 kB/s  done
root@ceph-mon2:~# ls
test-fstab
root@ceph-mon2:~# cat test-fstab 
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=d1949cc7-daf3-4b9c-8472-24d3041740b2 /               ext4    errors=remount-ro 0       1
/swapfile                                 none            swap    sw              0       0
192.168.43.101:6789,192.168.43.102:6789:/ /data2 ceph defaults,name=yanyan,secretfile=/etc/ceph/yanyan.key,_netdev 0 0

Besides these common basic functions, s3cmd also provides sync, cp, mv, setpolicy, multipart, and other features; run s3cmd --help for more command help.
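
A few hedged examples of those extra subcommands against the bucket created above (the local /backup directory is hypothetical):

# mirror a local directory into the bucket 
s3cmd sync /backup/ s3://z0ukun-rgw-bucket/backup/ 

# delete a single object 
s3cmd del s3://z0ukun-rgw-bucket/fstab 

# remove the bucket itself (only succeeds once it is empty) 
s3cmd rb s3://z0ukun-rgw-bucket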

5 Ceph dashboard and monitoring

Newer versions require installing the dashboard package separately, and it must be installed on the mgr node.

root@ceph-mgr1:~# ceph mgr module enable dashboard #enable the module
root@ceph-mgr1:~# apt-cache madison ceph-mgr-dashboard 
ceph-mgr-dashboard | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
root@ceph-mgr1:~# apt install ceph-mgr-dashboard
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ceph-mgr-dashboard is already the newest version (16.2.5-1bionic).
0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded.

root@ceph-mgr1:~# ceph mgr module ls
{
    "always_on_modules": [
        "balancer",
        "crash",
        "devicehealth",
        "orchestrator",
        "pg_autoscaler",
        "progress",
        "rbd_support",
        "status",
        "telemetry",
        "volumes"
    ],
    "enabled_modules": [
        "iostat",
        "nfs",
        "restful"
    ],
    "disabled_modules": [
        {
            "name": "alerts",
            "can_run": true,
            "error_string": "",
            "module_options": {
                "interval": {
                    "name": "interval",
                    "type": "secs",
                    "level": "advanced",
                    "flags": 1,
                    "default_value": "60",
                    "min": "",
                    "max": "",
                    .................
Note: after the module is enabled the dashboard is not reachable yet; you still need to either disable SSL or enable SSL, and to set the listen address.

5.1 Enable the dashboard module

The Ceph dashboard is enabled and configured on the mgr node, where SSL can be turned on or off, as follows:

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ssl false #disable SSL

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 192.168.43.104 #set the dashboard listen address

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009 #set the dashboard listen port

#verify the ceph cluster status: 
[ceph@ceph-deploy ceph-cluster]$ ceph -s 
cluster: 
    id: 23b0f9f2-8db3-477f-99a7-35a90eaf3dab 
    health: HEALTH_OK 

services: 
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 
    mgr: ceph-mgr1(active), standbys: ceph-mgr2 
    mds: mycephfs-2/2/2 up {0=ceph-mgr1=up:active,1=ceph-mgr2=up:active}, 1 up:standby 
    osd: 12 osds: 12 up, 12 in 
    rgw: 2 daemons active 
    
data: 
    pools: 9 pools, 256 pgs 
    objects: 411 objects, 449 MiB 
    usage: 15 GiB used, 1.2 TiB / 1.2 TiB avail 
    pgs: 256 active+clean 

io:
	client: 8.0 KiB/s rd, 0 B/s wr, 7 op/s rd, 5 op/s wr 
	
	
The first time the dashboard plugin is enabled, wait a while (a few minutes) before checking it on the node where it was enabled. If you see the error Module 'dashboard' has failed: error('No socket could be created',), check whether the mgr service is running properly; restarting the mgr service once usually resolves it.
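
A hedged sketch of that recovery step on the mgr node (the unit name follows the host name, as with the mds units above):

systemctl restart ceph-mgr@ceph-mgr1.service 
#or disable and re-enable the module 
ceph mgr module disable dashboard 
ceph mgr module enable dashboard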

5.2 Verify the port and process on the mgr node

[root@ceph-mgr1 ~]# lsof -i:9009 
COMMAND  PID  USER FD  TYPE DEVICE SIZE/OFF NODE NAME 
ceph-mgr 2338 ceph 28u IPv4  23986      0t0 TCP *:pichat (LISTEN)

5.3 Verify dashboard access

http://192.168.43.104:9009/#/login

5.4 Set the dashboard account and password

magedu@ceph-deploy:~/ceph-cluster$ touch pass.txt
magedu@ceph-deploy:~/ceph-cluster$ echo "12345678" > pass.txt 
magedu@ceph-deploy:~/ceph-cluster$ ceph dashboard set-login-credentials jack -i pass.txt