06 Disks

1. Add an 80 GB SCSI disk to the host

2. Create three primary partitions of 20 GB each

[root@localhost ~]# parted /dev/sdb

(parted) mklabel                                                          

New disk label type? gpt

(parted) mkpart

Partition name?  []? sdb1

File system type?  [ext2]? ext4

Start? 1G

End? 20G

(parted) mkpart

Partition name?  []? sdb2

File system type?  [ext2]? ext4

Start? 21G

End? 40G

(parted) mkpart

Partition name?  []? sdb3

File system type?  [ext2]? ext4

Start? 41G

End? 60G

(parted) p                                                                

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 85.9GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags:

Number  Start   End     Size    File system  Name  Flags

 1      1000MB  20.0GB  19.0GB               sdb1

 2      21.0GB  40.0GB  19.0GB               sdb2

 3      41.0GB  60.0GB  19.0GB               sdb3

(parted) q                                                                

Information: You may need to update /etc/fstab.
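The interactive session above can also be scripted with parted's `-s` (script) flag. A dry-run sketch, assuming the same /dev/sdb target — with DRYRUN set it only prints the commands; clearing it would actually repartition the disk:

```shell
# Non-interactive equivalent of the interactive parted session above.
# DRYRUN=1 only prints each command; clearing it runs them (destructive).
DRYRUN=1
DISK=/dev/sdb

run() {
  if [ -n "$DRYRUN" ]; then
    echo "parted -s $DISK $*"
  else
    parted -s "$DISK" "$@"
  fi
}

run mklabel gpt
# mkpart arguments: partition name, filesystem hint, start, end.
# Binary units (GiB) avoid the GB-vs-GiB drift seen in the print output.
run mkpart sdb1 ext4 1GiB 20GiB
run mkpart sdb2 ext4 21GiB 40GiB
run mkpart sdb3 ext4 41GiB 60GiB
```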

3. Turn the three partitions into physical volumes (pvcreate), then scan the system's physical volumes

[root@localhost ~]# pvcreate /dev/sdb[123]

  Physical volume "/dev/sdb1" successfully created.

  Physical volume "/dev/sdb2" successfully created.

  Physical volume "/dev/sdb3" successfully created.

[root@localhost ~]# pvscan

  PV /dev/sda2   VG centos          lvm2 [<39.00 GiB / 4.00 MiB free]

  PV /dev/sdb2                      lvm2 [<17.70 GiB]

  PV /dev/sdb1                      lvm2 [17.69 GiB]

  PV /dev/sdb3                      lvm2 [17.69 GiB]

  Total: 4 [92.08 GiB] / in use: 1 [<39.00 GiB] / in no VG: 3 [53.08 GiB]
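Each "20 GB" partition shows up as only ~17.7 GiB in pvscan because parted's start/end points were given in decimal GB while LVM reports binary GiB; each partition spans 19 decimal GB (e.g. 1 GB to 20 GB), which converts as:

```shell
# 19 decimal GB expressed in binary GiB — this is why a partition created
# with parted endpoints "1G" and "20G" appears as ~17.7 GiB in pvscan.
gib=$(awk 'BEGIN{printf "%.2f", 19*10^9/1024^3}')
echo "$gib GiB"
```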

4. Create a volume group named myvg from two of the physical volumes, then check its size

[root@localhost ~]# vgcreate myvg /dev/sdb[12]

  Volume group "myvg" successfully created

[root@localhost ~]# vgdisplay myvg

  --- Volume group ---

  VG Name               myvg

  System ID             

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               35.38 GiB

  PE Size               4.00 MiB

  Total PE              9058

  Alloc PE / Size       0 / 0   

  Free  PE / Size       9058 / 35.38 GiB

  VG UUID               lqeazi-gvko-Du1i-y0NA-91ci-7824-maQyXe
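The vgdisplay figures are internally consistent: 9058 physical extents at the default 4 MiB PE size give exactly the reported VG size. A quick check of the arithmetic:

```shell
# Total PE (9058) x PE size (4 MiB), converted to GiB —
# should match vgdisplay's "VG Size 35.38 GiB".
vg_gib=$(awk 'BEGIN{printf "%.2f", 9058*4/1024}')
echo "VG size: $vg_gib GiB"
```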

5. Create a 30 GB logical volume named mylv

[root@localhost ~]# lvcreate -L 30G -n mylv myvg

  Logical volume "mylv" created.

6. Format the logical volume as xfs, mount it on /data, and create a file to test

[root@localhost ~]# mkfs.xfs /dev/myvg/mylv

meta-data=/dev/myvg/mylv         isize=512    agcount=4, agsize=1966080 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=0, sparse=0

data     =                       bsize=4096   blocks=7864320, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

log      =internal log           bsize=4096   blocks=3840, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost ~]# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        37G  3.9G   33G   11% /

devtmpfs                devtmpfs  1.2G     0  1.2G    0% /dev

tmpfs                   tmpfs     1.2G     0  1.2G    0% /dev/shm

tmpfs                   tmpfs     1.2G   11M  1.2G    1% /run

tmpfs                   tmpfs     1.2G     0  1.2G    0% /sys/fs/cgroup

/dev/sda1               xfs      1014M  166M  849M   17% /boot

tmpfs                   tmpfs     245M   24K  245M    1% /run/user/0

/dev/sr0                iso9660   4.3G  4.3G     0  100% /run/media/root/CentOS 7 x86_64

/dev/mapper/myvg-mylv   xfs        30G   33M   30G    1% /data

7. Grow the logical volume to 35 GB

[root@localhost ~]# lvextend -L +5G /dev/myvg/mylv

  Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).

  Logical volume myvg/mylv successfully resized.

[root@localhost ~]# xfs_growfs /dev/myvg/mylv

meta-data=/dev/mapper/myvg-mylv  isize=512    agcount=4, agsize=1966080 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=0 spinodes=0

data     =                       bsize=4096   blocks=7864320, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

log      =internal               bsize=4096   blocks=3840, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

data blocks changed from 7864320 to 9175040

[root@localhost ~]# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        37G  3.9G   33G   11% /

devtmpfs                devtmpfs  1.2G     0  1.2G    0% /dev

tmpfs                   tmpfs     1.2G     0  1.2G    0% /dev/shm

tmpfs                   tmpfs     1.2G   11M  1.2G    1% /run

tmpfs                   tmpfs     1.2G     0  1.2G    0% /sys/fs/cgroup

/dev/sda1               xfs      1014M  166M  849M   17% /boot

tmpfs                   tmpfs     245M   24K  245M    1% /run/user/0

/dev/sr0                iso9660   4.3G  4.3G     0  100% /run/media/root/CentOS 7 x86_64

/dev/mapper/myvg-mylv   xfs        35G   33M   35G    1% /data
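The data block counts reported by xfs_growfs match the LV sizes divided by the 4096-byte XFS block size (note that `lvextend -r` would have run the filesystem grow step automatically). Checking the before/after counts:

```shell
# XFS data blocks = LV size in bytes / 4096-byte block size.
# 30 GiB before the resize, 35 GiB after.
before=$(awk 'BEGIN{print 30*1024^3/4096}')
after=$(awk 'BEGIN{print 35*1024^3/4096}')
echo "data blocks: $before -> $after"
```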

8. Edit /etc/fstab to mount the logical volume with disk quota options

[root@localhost ~]# vim /etc/fstab

/dev/myvg/mylv          /data       xfs     defaults,usrquota,grpquota     0 0

9. Set up disk quotas: for user crushlinux under /data, a file-size soft limit of 80 MB and a hard limit of 100 MB,

and a file-count soft limit of 80 files and a hard limit of 100 files. (The transcript below actually applies the quotas to /dev/sdb3 mounted at /data1.)

/dev/sdb3 (formatted as ext4) gets its own /etc/fstab entry with quota options and is mounted on /data1:

/dev/sdb3               /data1      ext4    defaults,usrquota,grpquota    0 0

[root@localhost ~]# mount -o remount,usrquota,grpquota /data1

[root@localhost ~]# mount |grep /data1

/dev/sdb3 on /data1 type ext4 (rw,relatime,seclabel,quota,usrquota,grpquota,data=ordered)

[root@localhost ~]# quotacheck -avug

quotacheck: Skipping /dev/mapper/myvg-mylv [/data]

quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.

quotacheck: Scanning /dev/sdb3 [/data1] done

quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.

quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.

quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.

quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.

quotacheck: Checked 3 directories and 0 files

quotacheck: Old file not found.

quotacheck: Old file not found.

[root@localhost ~]# ll /data1/a*

-rw-------. 1 root root 6144 Aug  2 09:48 /data1/aquota.group

-rw-------. 1 root root 6144 Aug  2 09:48 /data1/aquota.user

[root@localhost ~]# quotaon -auvg

/dev/sdb3 [/data1]: group quotas turned on

/dev/sdb3 [/data1]: user quotas turned on

[root@localhost ~]# edquota -u crushlinux

Disk quotas for user crushlinux (uid 1001):

  Filesystem                   blocks       soft       hard     inodes     soft     hard

  /dev/mapper/myvg-mylv             0          0          0          0        0        0

  /dev/sdb3                         0          8000       10000        0        80       100
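edquota's block limits are in 1 KiB units, so the stated 80 MB / 100 MB targets would be 81920 / 102400 blocks; the session above set 8000 / 10000 (roughly 8 MB and 10 MB). A sketch of the conversion, plus a non-interactive alternative using setquota (assuming the /data1 mount from above):

```shell
# 80 MiB and 100 MiB expressed in edquota/setquota 1 KiB blocks.
soft=$((80*1024))
hard=$((100*1024))
echo "soft=$soft hard=$hard"
# Non-interactive equivalent of the edquota session (needs root):
# setquota -u crushlinux $soft $hard 80 100 /data1
```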


10. Use touch and dd to test under the /data1 directory

[crushlinux@localhost home]$ dd if=/dev/zero of=/data1/ceshi bs=1M count=90

sdb3: warning, user block quota exceeded.

sdb3: write failed, user block limit reached.

dd: error writing '/data1/ceshi': Disk quota exceeded

10+0 records in

9+0 records out

10240000 bytes (10 MB) copied, 0.177268 s, 57.8 MB/s

[crushlinux@localhost home]$ touch /data1/{1..85}.txt

sdb3: warning, user file quota exceeded.
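The hard limit of 10000 1 KiB blocks is about 9.77 MiB, which is why dd completed 9 full 1 MiB records (plus a partial tenth) before the write failed:

```shell
# Hard block limit in MiB: 10000 blocks x 1024 bytes each / 1 MiB.
limit_mib=$(awk 'BEGIN{printf "%.2f", 10000*1024/1048576}')
echo "hard limit: $limit_mib MiB"
```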

11. Check quota usage: per-user view

[root@localhost home]# quota -uvs crushlinux

Disk quotas for user crushlinux (uid 1001):

     Filesystem   space   quota   limit   grace   files   quota   limit   grace

/dev/mapper/myvg-mylv

                     0K      0K      0K               0       0       0        

      /dev/sdb3  10000K*  8000K  10000K   6days      86*     80     100   6days

12. Check quota usage: per-filesystem view

[root@localhost home]# repquota -auvs

*** Report for user quotas on device /dev/mapper/myvg-mylv

Block grace time: 7days; Inode grace time: 7days

                        Space limits                File limits

User            used    soft    hard  grace    used  soft  hard  grace

----------------------------------------------------------------------

root      --  92160K      0K      0K              4     0     0       

*** Status for user quotas on device /dev/mapper/myvg-mylv

Accounting: ON; Enforcement: ON

Inode: #67 (2 blocks, 2 extents)

*** Report for user quotas on device /dev/sdb3

Block grace time: 7days; Inode grace time: 7days

                        Space limits                File limits

User            used    soft    hard  grace    used  soft  hard  grace

----------------------------------------------------------------------

root      --     20K      0K      0K              2     0     0       

crushlinux ++  10000K   8000K  10000K  6days      86    80   100  6days

Statistics:

Total blocks: 7

Data blocks: 1

Entries: 2

Used average: 2.000000

Original source: https://www.cnblogs.com/CAPF/p/11287094.html