An experiment: can data on LVM disks be recovered after the host OS is corrupted and reinstalled?

1 Goals

  • If the host's operating system fails (or the host is lost for some other reason), can the LVM metadata and data on the disks it was using be recovered?
  • If a fresh host is brought up and the existing LVM disk group is attached to it, does the original data remain safe and intact?

2 Test environment

2.1 Original host

  • Name: lvm0
  • System disk: 8 GB
  • OS: Debian 10

2.2 Replacement host

  • Name: lvm1
  • System disk: 8 GB
  • OS: Ubuntu 20.04

2.3 Storage disks

  • Capacity: 10 GB each
  • Count: 3 data disks

3 Procedure

3.1 Set up LVM on the original host

Partition each disk and set the partition type to 8e, i.e. Linux LVM:
root@lvm0:~# fdisk /dev/sdb
root@lvm0:~# fdisk /dev/sdc
root@lvm0:~# fdisk /dev/sdd

Create the physical volumes
root@lvm0:~# pvcreate /dev/sd[bcd]1

Create the volume group
root@lvm0:~# vgcreate vg0 /dev/sd[bcd]1

Create the logical volume
root@lvm0:~# lvcreate --size 15G --name lv0 vg0
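A quick sanity check on the sizing: with LVM2's default 4 MiB physical-extent size, a 15 GiB LV needs 3840 extents, while three 10 GiB PVs contribute roughly 7680 (slightly less after metadata), so the request fits comfortably:

```shell
# Default LVM2 physical extent (PE) size is 4 MiB.
PE_MIB=4
LV_EXTENTS=$(( 15 * 1024 / PE_MIB ))      # extents needed by the 15 GiB LV
VG_EXTENTS=$(( 3 * 10 * 1024 / PE_MIB ))  # extents offered by 3 x 10 GiB PVs
echo "LV needs $LV_EXTENTS extents; VG provides about $VG_EXTENTS"
# -> LV needs 3840 extents; VG provides about 7680
```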

Create a filesystem on the logical volume, mount it, and put some test data on it:

root@lvm0:~# mkfs.ext4 /dev/vg0/lv0
root@lvm0:~# mount /dev/vg0/lv0 /mnt
root@lvm0:~# git clone best.git /mnt/best

root@lvm0:~# ls /mnt
best  lost+found
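The lost+found entry confirms an ext4 filesystem was created on the LV before mounting. A sketch of that formatting step, run against an image file here so it needs no root privileges (on the real host the target would be /dev/vg0/lv0):

```shell
# Make a 64 MiB file stand in for the logical volume and format it ext4.
truncate -s 64M /tmp/lv-demo.img
mkfs.ext4 -q /tmp/lv-demo.img
# 'file' identifies the result; on a real LV you would now mount it.
file /tmp/lv-demo.img
```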

root@lvm0:~# df -h
Filesystem           Size  Used Avail Use% Mounted on
udev                 478M     0  478M   0% /dev
tmpfs                 99M  3.0M   96M   4% /run
/dev/sda1            6.9G  1.5G  5.1G  23% /
tmpfs                494M     0  494M   0% /dev/shm
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                494M     0  494M   0% /sys/fs/cgroup
tmpfs                 99M     0   99M   0% /run/user/0
/dev/mapper/vg0-lv0   15G  168M   14G   2% /mnt

Now we shut down the original host and pretend it has died!

root@lvm0:~# systemctl poweroff

3.2 Mount the LVM on the replacement host

We create a new virtual machine from an Ubuntu 20.04 template and, in the VM settings, attach the 3 disks from before.

Then power it on.

Time to see if the magic works!

root@lvm1:~# fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1B9BD938-DE74-4DA8-89D4-BD4F40B6132B

Device       Start      End  Sectors Size Type
/dev/sda1     2048     4095     2048   1M BIOS boot
/dev/sda2     4096  2101247  2097152   1G Linux filesystem
/dev/sda3  2101248 20969471 18868224   9G Linux filesystem


Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb2891594

Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1          63 209715199 209715137  100G 8e Linux LVM


Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc85f08b0

Device     Boot Start      End  Sectors Size Id Type
/dev/sdd1        2048 20971519 20969472  10G 8e Linux LVM


Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8c4dd50f

Device     Boot Start      End  Sectors Size Id Type
/dev/sde1        2048 20971519 20969472  10G 8e Linux LVM


Disk /dev/sdf: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1fd1315f

Device     Boot Start      End  Sectors Size Id Type
/dev/sdf1        2048 20971519 20969472  10G 8e Linux LVM


Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 64 GiB, 68719476736 bytes, 134217728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg0-lv0: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Look closely at the listing above: sdd, sde and sdf are all there, and so are vg0 and lv0 (as /dev/mapper/vg0-lv0)!
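Ubuntu activated the volume group automatically at boot. If it had not shown up, the standard LVM2 commands below would rescan the disks and activate it manually (a sketch; this must run as root on a host that actually has the vg0 disks attached):

```shell
# Rescan block devices for LVM physical volumes and volume groups,
# then activate every logical volume in vg0 so /dev/vg0/lv0 appears.
pvscan
vgscan
vgchange -ay vg0
lvs vg0
```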

Let's try using it:

root@lvm1:~# mount /dev/vg0/lv0 /mnt
root@lvm1:~# ls -alh /mnt
total 32K
drwxr-xr-x  5 root root 4.0K Nov  8 02:30 .
drwxr-xr-x 23 root root 4.0K Sep 14 09:20 ..
drwxr-xr-x  6 root root 4.0K Nov  8 02:20 best
drwx------  2 root root  16K Nov  8 02:18 lost+found

Write some new data into it:
root@lvm1:~# git clone secdoc.git /mnt/secdoc
root@lvm1:~# ls -alh /mnt
total 32K
drwxr-xr-x  5 root root 4.0K Nov  8 02:30 .
drwxr-xr-x 23 root root 4.0K Sep 14 09:20 ..
drwxr-xr-x  6 root root 4.0K Nov  8 02:20 best
drwx------  2 root root  16K Nov  8 02:18 lost+found
drwxr-xr-x  5 root root 4.0K Nov  8 02:30 secdoc

Shut down the replacement host, boot the original host again, and the newly added data is right there in the listing!
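"The files look the same" can be made rigorous with checksums. A sketch of the idea, demonstrated on a throwaway directory (in this experiment you would run the recording half on lvm0 against /mnt before the shutdown, and the verification half on lvm1 after remounting):

```shell
# Stand-in for /mnt/best: a directory with some known content.
mkdir -p /tmp/lvmdata
echo 'payload' > /tmp/lvmdata/file.txt
# Before the "crash": record a checksum for every file.
( cd /tmp/lvmdata && find . -type f -exec sha256sum {} + > /tmp/before.sha256 )
# After remounting on the other host: verify nothing changed byte-for-byte.
( cd /tmp/lvmdata && sha256sum -c --quiet /tmp/before.sha256 ) && echo 'data intact'
```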

4 Conclusions

  • LVM metadata and data are written to the disks themselves; they do not depend on any particular host
  • A host merely references the LVM volumes
  • So you can put your data on LVM with confidence
  • Think of it this way: in production, the disks often come from fibre-channel SAN storage anyway
Original article: https://www.cnblogs.com/mouseleo/p/13943741.html