Parsing CephFS directory metadata

Every directory in a CephFS filesystem has a corresponding object in the metadata pool. The directory's extended attributes, together with the metadata of the subdirectories and files stored under it, are kept in that object's omap entries; by exporting an omap value we can decode the extended attributes of the corresponding child entry.
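As the transcripts below show, the object for a directory fragment is named with the directory's inode number in hex, a dot, and the fragment id as eight hex digits (the root directory, inode 1, maps to 1.00000000). A minimal sketch of that naming rule:

```python
# Build the RADOS object name for a CephFS directory fragment:
# hex inode number, a dot, then the fragment id as 8 hex digits.
def dirfrag_object_name(ino: int, frag: int = 0) -> str:
    return f"{ino:x}.{frag:08x}"

print(dirfrag_object_name(1))              # root directory -> 1.00000000
print(dirfrag_object_name(0x10000000000))  # -> 10000000000.00000000
```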

First, export the metadata for the directory testdir1. This directory sits directly under the CephFS root, so its dentry has to be looked up in the root directory's object, 1.00000000.

[root@node101 10000000000.00000000]# rados -p metadata getomapval 1.00000000 testdir1_head testdir1_head
Writing to testdir1_head
[root@node101 10000000000.00000000]# hexdump -C testdir1_head
00000000  02 00 00 00 00 00 00 00  49 10 06 bf 01 00 00 00  |........I.......|
00000010  00 00 00 00 01 00 00 00  00 00 00 ad 9a a9 61 70  |..............ap|
00000020  42 61 00 ed 41 00 00 00  00 00 00 00 00 00 00 01  |Ba..A...........|
00000030  00 00 00 00 02 00 00 00  00 00 00 00 02 02 18 00  |................|
00000040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 ff ff  |................|
00000050  ff ff ff ff ff ff 00 00  00 00 00 00 00 00 00 00  |................|
00000060  00 00 01 00 00 00 ff ff  ff ff ff ff ff ff 00 00  |................|
00000070  00 00 00 00 00 00 00 00  00 00 ad 9a a9 61 70 42  |.............apB|
00000080  61 00 f6 7e a8 61 c8 90  1e 29 00 00 00 00 00 00  |a..~.a...)......|
00000090  00 00 03 02 28 00 00 00  00 00 00 00 00 00 00 00  |....(...........|
000000a0  ad 9a a9 61 70 42 61 00  04 00 00 00 00 00 00 00  |...apBa.........|
000000b0  00 00 00 00 00 00 00 00  04 00 00 00 00 00 00 00  |................|
000000c0  03 02 38 00 00 00 00 00  00 00 00 00 00 00 18 00  |..8.............|
000000d0  00 00 00 00 00 00 04 00  00 00 00 00 00 00 01 00  |................|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000f0  00 00 00 00 00 00 ad 9a  a9 61 70 42 61 00 03 02  |.........apBa...|
00000100  38 00 00 00 00 00 00 00  00 00 00 00 18 00 00 00  |8...............|
00000110  00 00 00 00 04 00 00 00  00 00 00 00 01 00 00 00  |................|
00000120  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000130  00 00 00 00 ad 9a a9 61  70 42 61 00 10 00 00 00  |.......apBa.....|
00000140  00 00 00 00 00 00 00 00  00 00 00 00 01 00 00 00  |................|
00000150  00 00 00 00 02 00 00 00  00 00 00 00 00 00 00 00  |................|
00000160  00 00 00 00 00 00 00 00  ff ff ff ff ff ff ff ff  |................|
00000170  00 00 00 00 01 01 10 00  00 00 00 00 00 00 00 00  |................|
00000180  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000190  00 00 00 00 00 00 00 00  00 00 00 00 00 00 f6 7e  |...............~|
000001a0  a8 61 c8 90 1e 29 04 00  00 00 00 00 00 00 ff ff  |.a...)..........|
000001b0  ff ff 01 01 16 00 00 00  00 00 00 00 00 00 00 00  |................|
000001c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 fe ff  |................|
000001e0  ff ff ff ff ff ff 00 00  00 00                    |..........|
000001ea
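The value has some structure worth noting before we start carving it up. It begins with an 8-byte little-endian snapid (the dentry's "first" field, 2 here), followed by a one-byte dentry type, where 'I' (0x49) marks a primary inode embedded inline and 'L' a remote hard link. A small sketch based on that inferred layout:

```python
import struct

# Parse the leading bytes of a CephFS dentry omap value, assuming the
# layout observed above: u64le snapid "first", then a one-byte dentry
# type ('I' = inline/primary inode, 'L' = remote hard link).
def parse_dentry_head(value: bytes):
    first = struct.unpack_from("<Q", value, 0)[0]
    dtype = chr(value[8])
    return first, dtype

head = bytes.fromhex("0200000000000000" "49")  # first 9 bytes of testdir1_head
print(parse_dentry_head(head))  # (2, 'I')
```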

Cut out the 62-byte fragment holding the directory's recursive statistics, which starts at offset 192 (0xc0).

[root@node101 10000000000.00000000]# dd if=testdir1_head of=testdir1_head_1 bs=1 skip=192 count=62
62+0 records in
62+0 records out
62 bytes (62 B) copied, 0.000924797 s, 67.0 kB/s
[root@node101 10000000000.00000000]# hexdump -C testdir1_head_1
00000000  03 02 38 00 00 00 00 00  00 00 00 00 00 00 18 00  |..8.............|
00000010  00 00 00 00 00 00 04 00  00 00 00 00 00 00 01 00  |................|
00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000030  00 00 00 00 00 00 ad 9a  a9 61 70 42 61 00        |.........apBa.|
0000003e
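The 62 bytes are not arbitrary. Ceph's versioned encoding (the ENCODE_START macro) prefixes each structure with a one-byte struct version, a one-byte compat version, and a little-endian u32 payload length, so the leading 03 02 38 00 00 00 announces a struct_v 3, compat 2 structure with a 0x38 = 56-byte payload: 62 bytes in total. A minimal sketch that cuts a fragment out of a dump using that header:

```python
import struct

# Read one versioned-encoded structure out of a raw dump, assuming the
# 6-byte ENCODE_START header: struct_v (u8), compat (u8), payload len (u32le).
def read_fragment(data: bytes, offset: int) -> bytes:
    struct_v, compat, length = struct.unpack_from("<BBI", data, offset)
    return data[offset : offset + 6 + length]

# With the testdir1_head dump loaded, read_fragment(dump, 0xc0) would return
# the same 62 bytes as the dd command above (skip=192, count=62).
# Tiny self-check with a synthetic 4-byte payload:
demo = bytes.fromhex("030204000000" "11223344" "ff")
print(len(read_fragment(demo, 0)))  # 10 (6-byte header + 4-byte payload)
```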

Use the ceph-dencoder tool to decode the fragment as a nest_info_t.

In the output, rbytes is the total size in bytes under the directory (recursive), rfiles the recursive file count, rsubdirs the recursive subdirectory count (which includes the directory itself), rsnaprealms the number of snapshot realms, and rctime the most recent change time anywhere in the subtree.

[root@node101 10000000000.00000000]# ceph-dencoder type 'nest_info_t' import testdir1_head_1 decode dump_json
{
    "version": 0,
    "rbytes": 24,
    "rfiles": 4,
    "rsubdirs": 1,
    "rsnaprealms": 0,
    "rctime": "2021-12-03 12:18:53.006374"
}
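The layout behind that JSON can also be unpacked by hand. As inferred from the dencoder output, the payload after the 6-byte header holds six little-endian u64 fields (version, rbytes, rfiles, rsubdirs, a zero placeholder for the long-removed ranchors field, and rsnaprealms), followed by a utime_t as two u32s (seconds, nanoseconds). A sketch decoding the 62 bytes extracted above:

```python
import struct

# The 62-byte fragment extracted with dd above, copied from the hexdump.
frag = bytes.fromhex(
    "030238000000"         # header: struct_v=3, compat=2, payload len=0x38
    "0000000000000000"     # version     = 0
    "1800000000000000"     # rbytes      = 24
    "0400000000000000"     # rfiles      = 4
    "0100000000000000"     # rsubdirs    = 1
    "0000000000000000"     # placeholder (removed ranchors field)
    "0000000000000000"     # rsnaprealms = 0
    "ad9aa961" "70426100"  # rctime: sec=0x61a99aad, nsec=0x00614270
)

(v, compat, length, version, rbytes, rfiles,
 rsubdirs, _anchors, rsnaprealms, sec, nsec) = struct.unpack("<BBI6qII", frag)

print(rbytes, rfiles, rsubdirs, rsnaprealms)  # 24 4 1 0
print(f"{sec}.{nsec:09d}")                    # 1638505133.006374000
```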

These values correspond to the directory's extended attributes as seen through the client mount; note that the integer part of ceph.dir.rctime is a Unix timestamp in seconds:

[root@node101 testdir1]# getfattr -dm . /vcluster/cephfs/testdir1/
getfattr: Removing leading '/' from absolute path names

# file: vcluster/cephfs/testdir1/

ceph.dir.entries="4"
ceph.dir.files="4"
ceph.dir.rbytes="1048600"
ceph.dir.rctime="1638505133.09133377000"
ceph.dir.rentries="5"
ceph.dir.rfiles="4"
ceph.dir.rsubdirs="1"
ceph.dir.subdirs="0"
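As a quick check, the integer part of ceph.dir.rctime converts back to the timestamp that ceph-dencoder printed (the node's local time is apparently UTC+8):

```python
from datetime import datetime, timezone, timedelta

sec = 1638505133  # integer part of ceph.dir.rctime above
utc = datetime.fromtimestamp(sec, tz=timezone.utc)
print(utc)     # 2021-12-03 04:18:53+00:00
local = utc.astimezone(timezone(timedelta(hours=8)))  # node's apparent timezone
print(local)   # 2021-12-03 12:18:53+08:00, as in the dencoder output
```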

List the directory to check the file count and sizes against the attributes. The rfiles, rsubdirs, and rctime values match the decoded nest_info_t; the rbytes decoded from the on-disk omap (24) lags behind the live value (1048600), most likely because the 1 MB file's size had not yet been folded into the recursive statistics flushed to the metadata pool when the object was exported.

[root@node101 10000000000.00000000]# cd -
/vcluster/cephfs/testdir1
[root@node101 testdir1]# ll
total 1026
-rw-r--r-- 1 root root 1048576 Dec  3 12:18 1mfile
-rw-r--r-- 1 root root       8 Dec  2 16:18 file1
-rw-r--r-- 1 root root       8 Dec  2 16:28 file2
-rw-r--r-- 1 root root       8 Dec  2 18:03 file3
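The listing lets us cross-check the recursive attributes by hand (plain arithmetic, not a Ceph API):

```python
# File sizes from the ls -l listing above.
sizes = [1048576, 8, 8, 8]   # 1mfile, file1, file2, file3

print(sum(sizes))            # 1048600 -> matches ceph.dir.rbytes
# rentries = rfiles + rsubdirs; rsubdirs counts the directory itself,
# which is why rsubdirs is 1 even though ceph.dir.subdirs is 0.
print(len(sizes) + 1)        # 5 -> matches ceph.dir.rentries
```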
Original article: https://www.cnblogs.com/xzy186/p/15672391.html