LVM: the Logical Volume Manager

LVM Overview


  With Linux's Logical Volume Manager (LVM), you can resize filesystems while the system is running, relocate data from one disk to another, improve I/O performance, and add redundancy. Its snapshot feature lets you take live backups of a logical volume.

  For ordinary users the most-used feature is dynamic filesystem resizing. You no longer need to agonize over partition sizes up front: just leave some free space on the disk, then grow or shrink volumes as actual usage dictates.


  

LVM Concepts


  1. Physical Volume (PV)

    A physical volume can be a whole disk or a single partition; it supplies the storage media for LVM. What distinguishes a PV from an ordinary partition is its system ID of 8e.

  2. Volume Group (VG)

    A VG is made up of one or more PVs.

  3. Physical Extent (PE)

    By default LVM uses 4 MB PEs; the PE is LVM's smallest unit of storage, a concept much like the filesystem block. A VG can hold at most 65534 PEs (a limit inherited from the LVM1 metadata format), so by default LVM tops out around 256 GB; the VG's maximum capacity can be raised by choosing a larger PE size.

  4. Logical Volume (LV)

    The VG is ultimately carved into LVs, and an LV can be formatted and used like a partition. An LV's size must be a whole multiple of the PE size.
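That capacity arithmetic can be sanity-checked in the shell (a quick sketch; 65534 is the PE ceiling just cited, and the 16 MiB figure anticipates the PE size chosen later in this demo):

```shell
max_pe=65534                                  # per-VG PE ceiling cited above
echo "$(( max_pe * 4 / 1024 )) GiB"           # default 4 MiB PEs -> ~256 GiB ceiling
echo "$(( max_pe * 16 / 1024 )) GiB"          # 16 MiB PEs -> ~1 TiB ceiling
# LV sizes are rounded up to whole PEs: asking for 100 MiB with 16 MiB PEs
# actually allocates ceil(100/16) = 7 extents = 112 MiB.
echo "$(( ( (100 + 16 - 1) / 16 ) * 16 )) MiB"
```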

    

The LVM Workflow

  After planning the PVs, VG, and LVs, a mkfs formatting tool turns an LV into a usable filesystem, and that filesystem can later be grown or shrunk.

  

A Simple LVM Demonstration


  1. Use fdisk to create four partitions of 1 GB each with system ID 8e: /dev/vdb5, /dev/vdb6, /dev/vdb7 and /dev/vdb8

Disk /dev/vdb: 32.2 GB, 32212254720 bytes
16 heads, 63 sectors/track, 62415 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x5d9b384e

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1       41611    20971912+  83  Linux
/dev/vdb2           41612       62415    10485216    5  Extended
/dev/vdb5           41612       43693     1049296+  8e  Linux LVM
/dev/vdb6           43694       45775     1049296+  8e  Linux LVM
/dev/vdb7           45776       47857     1049296+  8e  Linux LVM
/dev/vdb8           47858       49939     1049296+  8e  Linux LVM

  2. Create PVs with pvcreate

  A few PV-related commands:

  • pvcreate: turn a physical partition into a PV
  • pvscan: list the PVs on the system
  • pvdisplay: show detailed PV status
  • pvmove: move the data (PEs) on one PV to another PV
  • pvremove: strip the PV label from a device
[root@zwj ~]# pvcreate /dev/vdb{5..8}
  Physical volume "/dev/vdb5" successfully created
  Physical volume "/dev/vdb6" successfully created
  Physical volume "/dev/vdb7" successfully created
  Physical volume "/dev/vdb8" successfully created
[root@zwj ~]# pvscan
  PV /dev/vdb5                      lvm2 [1.00 GiB]
  PV /dev/vdb6                      lvm2 [1.00 GiB]
  PV /dev/vdb7                      lvm2 [1.00 GiB]
  PV /dev/vdb8                      lvm2 [1.00 GiB]
  Total: 4 [4.00 GiB] / in use: 0 [0   ] / in no VG: 4 [4.00 GiB]
[root@zwj ~]#
[root@zwj ~]# pvdisplay
  "/dev/vdb5" is a new physical volume of "1.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/vdb5
  VG Name
  PV Size               1.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               YAtMpF-klHI-ZCUk-cZlw-Ff7C-A5KK-1zlIly

  "/dev/vdb6" is a new physical volume of "1.00 GiB"
(similar output for /dev/vdb6, 7 and 8 omitted)

  3. Create the VG

  VG-related commands:

  • vgcreate: create a VG
  • vgscan: list the VGs on the system
  • vgdisplay: show VG status
  • vgextend: add a PV to a VG
  • vgreduce: remove a PV from a VG
  • vgchange: activate or deactivate a VG
  • vgremove: delete a VG

  

[root@zwj ~]# vgscan                          # check for existing VGs before creating one
  Reading all physical volumes.  This may take a while...
[root@zwj ~]# vgcreate -s 16M mytestvg /dev/vdb{5,6,7}      # add PVs /dev/vdb{5,6,7} to a new VG; -s sets the PE size (default 4M)
  Volume group "mytestvg" successfully created
[root@zwj ~]# vgscan                           # check again that the VG now exists
  Reading all physical volumes.  This may take a while...
  Found volume group "mytestvg" using metadata type lvm2      # the VG we created, named mytestvg
[root@zwj ~]# vgdisplay                          # show VG status
  --- Volume group ---
  VG Name               mytestvg
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               2.95 GiB
  PE Size               16.00 MiB
  Total PE              189
  Alloc PE / Size       0 / 0
  Free  PE / Size       189 / 2.95 GiB
  VG UUID               Pf6hlf-90cR-qU49-lqGX-0eW8-TIkg-Anw6M4

The VG is a bit small, so add the last PV, /dev/vdb8:

[root@zwj ~]# vgextend mytestvg /dev/vdb8
  Volume group "mytestvg" successfully extended
[root@zwj ~]# vgdisplay  mytestvg
  --- Volume group ---
  VG Name               mytestvg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               3.94 GiB
  PE Size               16.00 MiB
  Total PE              252
  Alloc PE / Size       0 / 0
  Free  PE / Size       252 / 3.94 GiB
  VG UUID               Pf6hlf-90cR-qU49-lqGX-0eW8-TIkg-Anw6M4

  4. Create the LV

  LV-related commands:

  • lvcreate: create an LV
  • lvscan: list the LVs on the system
  • lvdisplay: show LV status
  • lvextend: grow an LV
  • lvreduce: shrink an LV
  • lvremove: delete an LV
  • lvresize: resize an LV in either direction
[root@zwj ~]# lvcreate -l 252 -n mytestlv mytestvg          # -l takes the number of PEs to allocate; here all 252 go to mytestlv. -n sets the LV name
  Logical volume "mytestlv" created.
[root@zwj ~]# lvscan
  ACTIVE            '/dev/mytestvg/mytestlv' [3.94 GiB] inherit  # LV info
[root@zwj ~]# lvdisplay                          # LV status
  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestlv            # the LV's path, which is also its full name; commands need this full path, not just mytestlv
  LV Name                mytestlv
  VG Name                mytestvg
  LV UUID                KiWyMs-Ia9T-O06E-dWRb-wcQ6-P1Ph-kYN0HK
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 18:10:11 +0800
  LV Status              available
  # open                 0
  LV Size                3.94 GiB
  Current LE             252
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

[root@zwj ~]#
[root@zwj ~]# ls -l /dev/mytestvg/mytestlv
lrwxrwxrwx 1 root root 7 May  6 18:10 /dev/mytestvg/mytestlv -> ../dm-1

  5. Format it as a filesystem

[root@zwj ~]# mkfs.ext3  /dev/mytestvg/mytestlv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
258048 inodes, 1032192 blocks
51609 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1056964608
32 block groups
32768 blocks per group, 32768 fragments per group
8064 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@zwj ~]# mkdir /mnt/lvm
[root@zwj ~]# mount /dev/mytestvg/mytestlv /mnt/lvm/
[root@zwj ~]# mount
/dev/vda1 on / type ext3 (rw,noatime,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/vdb1 on /mydata type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/mytestvg-mytestlv on /mnt/lvm type ext3 (rw)
[root@zwj ~]#

  6. Growing the LV

  • use fdisk to create a new partition /dev/vdb9 with system ID 8e
  • use pvcreate to create a new PV
  • use vgextend to add the new PV to the VG
  • use lvresize to add the VG's newly gained PEs to the LV
  • use resize2fs to grow the filesystem
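Before running the commands, the expected numbers can be worked out ahead of time (a sketch assuming this demo's 16 MiB PEs and the 4 KiB ext3 block size reported by mkfs):

```shell
old_pe=252; add_pe=63                         # current extents plus the new PV's extents
pe_mib=16; blk_kib=4
echo "new extents: $(( old_pe + add_pe ))"                              # 315
echo "new LV size: $(( (old_pe + add_pe) * pe_mib )) MiB"               # 5040 MiB = 4.92 GiB
echo "fs blocks:   $(( (old_pe + add_pe) * pe_mib * 1024 / blk_kib ))"  # what resize2fs should report
```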
[root@zwj ~]# pvscan
  PV /dev/vdb5   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  Total: 4 [3.94 GiB] / in use: 4 [3.94 GiB] / in no VG: 0 [0   ]
[root@zwj ~]# pvcreate /dev/vdb9
  Physical volume "/dev/vdb9" successfully created
[root@zwj ~]# pvscan
  PV /dev/vdb5   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb9                      lvm2 [1.00 GiB]
  Total: 5 [4.94 GiB] / in use: 4 [3.94 GiB] / in no VG: 1 [1.00 GiB]
[root@zwj ~]# vgextend mytestvg /dev/vdb9
  Volume group "mytestvg" successfully extended
[root@zwj ~]# vgdisplay mytestvg
  --- Volume group ---
  VG Name               mytestvg
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               4.92 GiB
  PE Size               16.00 MiB
  Total PE              315
  Alloc PE / Size       252 / 3.94 GiB
  Free  PE / Size       63 / 1008.00 MiB
  VG UUID               Pf6hlf-90cR-qU49-lqGX-0eW8-TIkg-Anw6M4
[root@zwj ~]# lvresize -l +63 /dev/mytestvg/mytestlv
  Size of logical volume mytestvg/mytestlv changed from 3.94 GiB (252 extents) to 4.92 GiB (315 extents).
  Logical volume mytestlv successfully resized.
[root@zwj ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestlv
  LV Name                mytestlv
  VG Name                mytestvg
  LV UUID                KiWyMs-Ia9T-O06E-dWRb-wcQ6-P1Ph-kYN0HK
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 18:10:11 +0800
  LV Status              available
  # open                 0
  LV Size                4.92 GiB
  Current LE             315
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
[root@zwj ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              20G   13G  5.9G  69% /
/dev/vdb1              20G  936M   18G   5% /mydata
/dev/mapper/mytestvg-mytestlv
                      3.9G   80M  3.7G   3% /mnt/lvm
[root@zwj ~]# resize2fs /dev/mytestvg/mytestlv
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mytestvg/mytestlv is mounted on /mnt/lvm; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mytestvg/mytestlv to 1290240 (4k) blocks.
The filesystem on /dev/mytestvg/mytestlv is now 1290240 blocks long.
[root@zwj ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              20G   13G  5.9G  69% /
/dev/vdb1              20G  936M   18G   5% /mydata
/dev/mapper/mytestvg-mytestlv
                      4.9G   80M  4.6G   2% /mnt/lvm          # expansion succeeded
[root@zwj ~]# ls -l /mnt/lvm/
total 20
drwx------ 2 root   root   16384 May  6 18:15 lost+found
drwxr-xr-x 4 weelin weelin  4096 May  6 18:18 test

  7. Shrinking the LV (to free up /dev/vdb5)

  • use resize2fs to shrink the filesystem to the capacity left after /dev/vdb5 is removed
  • use lvresize to remove the corresponding number of PEs from the LV
  • use vgreduce to take /dev/vdb5 out of mytestvg (if its PEs are in use, first migrate them to free PVs with pvmove)
  • use pvremove to strip the PV label from /dev/vdb5
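The shrink arithmetic deserves a sanity check (an editor's sketch using this demo's 16 MiB PEs and 4 KiB ext3 blocks):

```shell
target_mib=4096; pe_mib=16; blk_kib=4
echo "resize2fs blocks:  $(( target_mib * 1024 / blk_kib ))"   # 1048576
echo "extents for 4096M: $(( target_mib / pe_mib ))"           # 256
echo "extents after -64: $(( 315 - 64 )) = $(( (315 - 64) * pe_mib )) MiB"
# Note: 251 extents is 4016 MiB, slightly LESS than the 4096 MiB filesystem.
# The safer pattern is to shrink the filesystem well below the target LV size,
# lvresize, then rerun resize2fs with no size argument so the filesystem
# grows back to fill the LV exactly.
```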

  

[root@zwj ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              20G   13G  5.9G  69% /
/dev/vdb1              20G  936M   18G   5% /mydata
/dev/mapper/mytestvg-mytestlv
                      4.9G   80M  4.6G   2% /mnt/lvm
[root@zwj ~]# resize2fs /dev/mytestvg/mytestlv 4096M        # shrink the filesystem
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mytestvg/mytestlv is mounted on /mnt/lvm; on-line resizing required
On-line shrinking from 1290240 to 1048576 not supported.
[root@zwj ~]# umount /dev/mytestvg/mytestlv             # the shrink has to be done offline
[root@zwj ~]# resize2fs /dev/mytestvg/mytestlv 4096M
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/mytestvg/mytestlv' first.
[root@zwj ~]# e2fsck -f /dev/mytestvg/mytestlv            # check the filesystem first
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mytestvg/mytestlv: 187/322560 files (0.0% non-contiguous), 40486/1290240 blocks
[root@zwj ~]# resize2fs /dev/mytestvg/mytestlv 4096M      # shrink the filesystem
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mytestvg/mytestlv to 1048576 (4k) blocks.
The filesystem on /dev/mytestvg/mytestlv is now 1048576 blocks long.
[root@zwj ~]# lvresize -l -64 /dev/mytestvg/mytestlv      # shrink the LV by removing PEs
  WARNING: Reducing active logical volume to 3.92 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestvg/mytestlv? [y/n]: y
  Size of logical volume mytestvg/mytestlv changed from 4.92 GiB (315 extents) to 3.92 GiB (251 extents).
  Logical volume mytestlv successfully resized.
[root@zwj ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestlv
  LV Name                mytestlv
  VG Name                mytestvg
  LV UUID                KiWyMs-Ia9T-O06E-dWRb-wcQ6-P1Ph-kYN0HK
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 18:10:11 +0800
  LV Status              available
  # open                 0
  LV Size                3.92 GiB                  # about 1 GiB smaller now
  Current LE             251
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
[root@zwj ~]# pvmove /dev/vdb5 /dev/vdb9          # the key step: although the LV shrank by dropping PEs, /dev/vdb5's PEs may still hold data, so migrate them to free PEs on another PV
   /dev/vdb5: Moved: 1.6%
  /dev/vdb5: Moved: 100.0%
[root@zwj ~]# pvscan                        # /dev/vdb5's PEs are no longer in use
  PV /dev/vdb5   VG mytestvg        lvm2 [1008.00 MiB / 1008.00 MiB free]
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 16.00 MiB free]
  PV /dev/vdb9   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  Total: 5 [4.92 GiB] / in use: 5 [4.92 GiB] / in no VG: 0 [0   ]
[root@zwj ~]# vgreduce mytestvg /dev/vdb5             # remove the PV from the VG
  Removed "/dev/vdb5" from volume group "mytestvg"
[root@zwj ~]# pvscan
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 16.00 MiB free]
  PV /dev/vdb9   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb5                      lvm2 [1.00 GiB]
  Total: 5 [4.94 GiB] / in use: 4 [3.94 GiB] / in no VG: 1 [1.00 GiB]
[root@zwj ~]#
[root@zwj ~]# pvremove /dev/vdb5                 # strip the PV label from /dev/vdb5
  Labels on physical volume "/dev/vdb5" successfully wiped
[root@zwj ~]# pvscan
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 16.00 MiB free]
  PV /dev/vdb9   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  Total: 4 [3.94 GiB] / in use: 4 [3.94 GiB] / in no VG: 0 [0   ]
[root@zwj ~]# mount /dev/mytestvg/mytestlv /mnt/lvm/       # remount
[root@zwj ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              20G   13G  5.9G  69% /
/dev/vdb1              20G  936M   18G   5% /mydata
/dev/mapper/mytestvg-mytestlv
                      4.0G   80M  3.7G   3% /mnt/lvm      # capacity reduced successfully
[root@zwj ~]# ls -l /mnt/lvm/
total 20
drwx------ 2 root   root   16384 May  6 18:15 lost+found
drwxr-xr-x 4 weelin weelin  4096 May  6 18:18 test

LVM Snapshots



   LVM has a very practical and important feature: snapshots. A snapshot captures the state of a volume at a point in time and can be used to restore it to that state later. A snapshot is not a full copy of the volume at that moment: only content that changes afterwards is written into the snapshot area, and unchanged content is shared between snapshot and origin, so very little extra storage is consumed (much like containers built from the same Docker image sharing one image).

To use LVM snapshots you must first set aside a snapshot area to hold the changed content. It does not need to be large, but size it according to how much change you expect.

[root@zwj lvm]# pvcreate /dev/vdb5                      # turn /dev/vdb5 back into a PV
  Physical volume "/dev/vdb5" successfully created
[root@zwj lvm]# pvscan
  PV /dev/vdb6   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb7   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb8   VG mytestvg        lvm2 [1008.00 MiB / 16.00 MiB free]
  PV /dev/vdb9   VG mytestvg        lvm2 [1008.00 MiB / 0    free]
  PV /dev/vdb5                      lvm2 [1.00 GiB]
  Total: 5 [4.94 GiB] / in use: 4 [3.94 GiB] / in no VG: 1 [1.00 GiB]
[root@zwj lvm]# vgextend mytestvg /dev/vdb5
  Volume group "mytestvg" successfully extended
[root@zwj lvm]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "mytestvg" using metadata type lvm2
[root@zwj lvm]# vgdisplay
  --- Volume group ---
  VG Name               mytestvg
  System ID
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  21
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               4.92 GiB
  PE Size               16.00 MiB
  Total PE              315
  Alloc PE / Size       251 / 3.92 GiB
  Free  PE / Size       64 / 1.00 GiB
  VG UUID               Pf6hlf-90cR-qU49-lqGX-0eW8-TIkg-Anw6M4

[root@zwj lvm]# lvcreate -l 64 -s -n mytestsnap /dev/mytestvg/mytestlv      # create a snapshot of /dev/mytestvg/mytestlv; -s makes it a snapshot
  Logical volume "mytestsnap" created.
[root@zwj lvm]# lvdisplay                                  
  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestlv
  LV Name                mytestlv
  VG Name                mytestvg
  LV UUID                KiWyMs-Ia9T-O06E-dWRb-wcQ6-P1Ph-kYN0HK
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 18:10:11 +0800
  LV snapshot status     source of
                         mytestsnap [active]
  LV Status              available
  # open                 1
  LV Size                3.92 GiB
  Current LE             251
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestsnap
  LV Name                mytestsnap
  VG Name                mytestvg
  LV UUID                ObobTa-rm3b-gfr9-EaQZ-XZsP-hJvM-PDbBNN
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 21:20:36 +0800
  LV snapshot status     active destination for mytestlv
  LV Status              available
  # open                 0
  LV Size                3.92 GiB
  Current LE             251
  COW-table size         1.00 GiB
  COW-table LE           64
  Allocated to snapshot  0.00%            # how much of the snapshot area is in use; the more the origin changes, the more it consumes
  Snapshot chunk size    4.00 KiB
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

[root@zwj lvm]# mkdir -p /mnt/snapshot
[root@zwj lvm]# mount /dev/mytestvg/mytestsnap /mnt/snapshot/
[root@zwj lvm]# df
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/vda1             20641404 13441536   6151352  69% /
/dev/vdb1             20641788   958272  18634924   5% /mydata
/dev/mapper/mytestvg-mytestlv
                       4129472    80920   3841516   3% /mnt/lvm        # just mounted: identical to the origin
/dev/mapper/mytestvg-mytestsnap
                       4129472    80920   3841516   3% /mnt/snapshot
[root@zwj lvm]# cp -a /etc /mnt/lvm/                        # change the contents of /mnt/lvm
[root@zwj lvm]# df
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/vda1             20641404 13441536   6151352  69% /
/dev/vdb1             20641788   958272  18634924   5% /mydata
/dev/mapper/mytestvg-mytestlv
                       4129472   123644   3798792   4% /mnt/lvm
/dev/mapper/mytestvg-mytestsnap
                       4129472    80920   3841516   3% /mnt/snapshot
[root@zwj lvm]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestlv
  LV Name                mytestlv
  VG Name                mytestvg
  LV UUID                KiWyMs-Ia9T-O06E-dWRb-wcQ6-P1Ph-kYN0HK
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 18:10:11 +0800
  LV snapshot status     source of
                         mytestsnap [active]
  LV Status              available
  # open                 1
  LV Size                3.92 GiB
  Current LE             251
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/mytestvg/mytestsnap
  LV Name                mytestsnap
  VG Name                mytestvg
  LV UUID                ObobTa-rm3b-gfr9-EaQZ-XZsP-hJvM-PDbBNN
  LV Write Access        read/write
  LV Creation host, time zwj, 2017-05-06 21:20:36 +0800
  LV snapshot status     active destination for mytestlv
  LV Status              available
  # open                 1
  LV Size                3.92 GiB
  Current LE             251
  COW-table size         1.00 GiB
  COW-table LE           64
  Allocated to snapshot  4.15%            # the snapshot area has recorded the changes; the origin can be restored at any time
  Snapshot chunk size    4.00 KiB
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
[root@zwj lvm]# ll /mnt/snapshot/
total 20
drwx------ 2 root   root   16384 May  6 18:15 lost+found
drwxr-xr-x 4 weelin weelin  4096 May  6 19:12 test
[root@zwj lvm]# ll /mnt/lvm/
total 24
drwxr-xr-x. 94 root   root    4096 May  6 21:21 etc
drwx------   2 root   root   16384 May  6 18:15 lost+found
drwxr-xr-x   4 weelin weelin  4096 May  6 19:12 test
[root@zwj lvm]#
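The "Allocated to snapshot" figure lines up with the df numbers above (a rough cross-check; the small gap versus 4.15% is chunk-granularity copying plus snapshot metadata):

```shell
cow_kib=$(( 1024 * 1024 ))         # the 1.00 GiB COW table, in KiB
delta_kib=$(( 123644 - 80920 ))    # growth in the origin's df usage after cp -a /etc
echo "${delta_kib} KiB changed"
awk -v d="$delta_kib" -v c="$cow_kib" 'BEGIN { printf "%.2f%% of the COW table\n", 100 * d / c }'
```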

When you want to restore /mnt/lvm to the state it was in when the snapshot was taken: go into /mnt/snapshot and archive its contents, unmount and remove /dev/mytestvg/mytestsnap, reformat /dev/mytestvg/mytestlv, remount it on /mnt/lvm, and finally unpack the backup there:

  • tar -czvf /mnt/backups/lvm.tar.gz -C /mnt/snapshot .      # -C keeps the stored paths relative, so they unpack cleanly into /mnt/lvm
  • umount /mnt/snapshot
  • lvremove /dev/mytestvg/mytestsnap
  • umount /mnt/lvm
  • mkfs.ext3 /dev/mytestvg/mytestlv
  • mount /dev/mytestvg/mytestlv /mnt/lvm
  • tar -zxvf /mnt/backups/lvm.tar.gz -C /mnt/lvm

Restore complete.

  

Original post: https://www.cnblogs.com/diaosir/p/6816608.html