Software RAID (Software Disk Array)

* Build a RAID 5 from four partitions

* Each partition is 1 GB; all partitions should ideally be the same size

* Set one extra partition aside as a spare disk

* The spare disk should be the same size as the other RAID partitions

* Mount this RAID 5 device at /mnt/raid

1. Partitioning

[root@server3 mnt]# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

 

Command (m for help): n

Partition type:

   p   primary (0 primary, 0 extended, 4 free)

   e   extended

Select (default p): p

Partition number (1-4, default 1):

First sector (2048-41943039, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +1G

Partition 1 of type Linux and of size 1 GiB is set
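
(The n step is repeated in the same way for the remaining partitions; the final layout, including the extended partition /dev/vdb4 that holds the logical partitions, is shown by the p command below.)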

 

 

 

Command (m for help): p

 

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x7afa732b

 

   Device Boot      Start         End      Blocks   Id  System

/dev/vdb1            2048     2099199     1048576   83  Linux

/dev/vdb2         2099200     4196351     1048576   83  Linux

/dev/vdb3         4196352     6293503     1048576   83  Linux

/dev/vdb4         6293504    41943039    17824768    5  Extended

/dev/vdb5         6295552     8392703     1048576   83  Linux

/dev/vdb6         8394752    10491903     1048576   83  Linux

 

[root@server3 ~]# partprobe **this may report an error; rebooting clears it
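
If partprobe does report an error (typically "Device or resource busy"), it is worth checking whether the kernel already sees the new partitions before resorting to a reboot; a minimal check with standard tools:

lsblk /dev/vdb          # the kernel's current view of vdb and its partitions
cat /proc/partitions    # alternatively, the raw partition list the kernel holds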

 

2. Creating the RAID with mdadm

[root@server3 ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/vdb{2,3,5,6,7}      <==specific to my setup: I had re-created an extra partition, hence vdb7

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.
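
A quick breakdown of the mdadm options used above (all standard mdadm flags):

    --create             build a new array
    --auto=yes           create the device node (/dev/md0) automatically if needed
    --level=5            RAID level 5
    --raid-devices=4     number of active devices in the array
    --spare-devices=1    number of hot-spare devices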

[root@server3 ~]# mdadm --detail /dev/md0

/dev/md0:   <== RAID device file name

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019    <== creation time

     Raid Level : raid5             <== RAID level

     Array Size : 3142656 (3.00 GiB 3.22 GB)  <== usable capacity of the array

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB) <== usable capacity of each device

   Raid Devices : 4             <== number of devices used in the RAID

  Total Devices : 5             <== total number of devices

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 15:41:52 2019

          State : clean, degraded, recovering

 Active Devices : 4                 <== number of active devices

Working Devices : 5                 <== number of working devices

 Failed Devices : 0                 <== number of failed devices

  Spare Devices : 1                 <== number of spare disks

 

         Layout : left-symmetric

     Chunk Size : 512K

 

 Rebuild Status : 12% complete

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 2

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       2     253       21        2      active sync   /dev/vdb5

       5     253       22        3      spare rebuilding   /dev/vdb6

 

       4     253       23        -      spare   /dev/vdb7

 

3. The array status can also be viewed through the following file

[root@server3 ~]# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 vdb6[5] vdb7[4](S) vdb5[2] vdb3[1] vdb2[0]

      3142656 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

     

unused devices: <none>
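
While the array is still syncing, /proc/mdstat shows a live progress bar; a small convenience (not part of the original run) is to follow it with the standard watch utility:

watch -n 1 cat /proc/mdstat    # re-runs cat every second; press Ctrl-C to stop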

 

Formatting and mounting the RAID

 

[root@server3 ~]# mkfs.ext4  /dev/md0

mke2fs 1.42.9 (28-Dec-2013)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=128 blocks, Stripe width=384 blocks

196608 inodes, 785664 blocks

39283 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=805306368

24 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

    32768, 98304, 163840, 229376, 294912

 

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

 

[root@server3 ~]# cd /mnt/

[root@server3 mnt]# ls raid/     <== create this directory yourself first (mkdir raid)

[root@server3 mnt]# mount /dev/md0  raid/

[root@server3 mnt]# df

Filesystem     1K-blocks    Used Available Use% Mounted on

/dev/vda3       20243456 3441540  16801916  18% /

devtmpfs          493580       0    493580   0% /dev

tmpfs             508248      84    508164   1% /dev/shm

tmpfs             508248   13564    494684   3% /run

tmpfs             508248       0    508248   0% /sys/fs/cgroup

/dev/vda1         201380  133424     67956  67% /boot

tmpfs             101652      20    101632   1% /run/user/42

tmpfs             101652       0    101652   0% /run/user/0

/dev/md0         3027728    9216   2844996   1% /mnt/raid
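
Note the usable size of /dev/md0 is about 3 GB: in RAID 5, one device's worth of capacity goes to parity, so four 1 GB devices yield (4-1) × 1 GB.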

 

 

4. Rescue mode for disk failures

    First, mark a disk as failed

[root@server3 mnt]# cp -a /var/log   raid/    *copy some data into /mnt/raid first

[root@server3 mnt]# df /mnt/raid/ ; du -sm /mnt/raid/

Filesystem     1K-blocks  Used Available Use% Mounted on

/dev/md0         3027728 15404   2838808   1% /mnt/raid

7   /mnt/raid/

 

Suppose /dev/vdb5 has failed:

[root@server3 mnt]# mdadm --manage /dev/md0 --fail /dev/vdb5

mdadm: set /dev/vdb5 faulty in /dev/md0

[root@server3 mnt]# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019

     Raid Level : raid5

     Array Size : 3142656 (3.00 GiB 3.22 GB)

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)

   Raid Devices : 4

  Total Devices : 5

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 16:18:46 2019

          State : clean, degraded, recovering

 Active Devices : 3

Working Devices : 4

 Failed Devices : 1      **one device has failed

  Spare Devices : 1

 

         Layout : left-symmetric

     Chunk Size : 512K

 

 Rebuild Status : 11% complete

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 21

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       4     253       23        2      spare rebuilding   /dev/vdb7

       5     253       22        3      active sync   /dev/vdb6

 

       2     253       21        -      faulty   /dev/vdb5
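
Notice that the hot spare /dev/vdb7 has automatically started rebuilding in place of the faulty /dev/vdb5.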


5. Removing the failed disk and adding a new one

    Create a new partition:

[root@server3 mnt]# fdisk  /dev/vdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

 

Command (m for help): n

All primary partitions are in use

Adding logical partition 8

First sector (12593152-41943039, default 12593152):

Using default value 12593152

Last sector, +sectors or +size{K,M,G} (12593152-41943039, default 41943039): +1G

Partition 8 of type Linux and of size 1 GiB is set

 

Command (m for help): wq

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

 

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table. The new table will be used at

the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

[root@server3 mnt]# partprobe

    Add the new disk and remove the faulty one:

[root@server3 mnt]# mdadm --manage /dev/md0 --add /dev/vdb8 --remove /dev/vdb5

mdadm: added /dev/vdb8

mdadm: hot removed /dev/vdb5 from /dev/md0

 

[root@server3 mnt]# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019

     Raid Level : raid5

     Array Size : 3142656 (3.00 GiB 3.22 GB)

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)

   Raid Devices : 4

  Total Devices : 5

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 16:26:07 2019

          State : clean

 Active Devices : 4

Working Devices : 5

 Failed Devices : 0

  Spare Devices : 1

 

         Layout : left-symmetric

     Chunk Size : 512K

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 38

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       4     253       23        2      active sync   /dev/vdb7

       5     253       22        3      active sync   /dev/vdb6

 

       6     253       24        -      spare   /dev/vdb8
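
Note that the former spare /dev/vdb7 is now an active device, while the newly added /dev/vdb8 has become the new spare.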

 

6. Assembling the RAID and mounting it automatically at boot

[root@server3 mnt]# vim /etc/fstab

/dev/md0        /mnt/raid       ext4            defaults    1 2

[root@server3 mnt]# mount -a     *verify the new entry mounts without errors
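
/etc/fstab only covers mounting. To ensure the array itself is assembled under a stable name at boot (rather than coming up as, say, /dev/md127), it is also common on CentOS 7 to record it in /etc/mdadm.conf; a minimal sketch:

mdadm --detail --scan >> /etc/mdadm.conf    # appends an ARRAY line with the array's UUID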

 

 

7. Shutting down the software RAID

[root@server3 mnt]# umount /dev/md0

[root@server3 mnt]# vim /etc/fstab    *delete the line added in step 6

 

[root@server3 mnt]# mdadm  --stop /dev/md0

mdadm: stopped /dev/md0


[root@server3 mnt]# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

unused devices: <none>       **no md devices remain
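
If the member partitions will be reused, a common extra step (not shown in the original run) is to wipe the md superblocks so the kernel cannot re-assemble the old array from them; a sketch assuming the device names used above:

mdadm --zero-superblock /dev/vdb{2,3,6,7,8}    # erase md metadata on the former members (vdb5 was already removed)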

 

 

 

Original article: https://www.cnblogs.com/zhengyipengyou/p/10301071.html