Chapter 06

1. Add an 80G SCSI disk to the host
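The transcript does not show a check, but once the new disk is attached it should appear as /dev/sdb (the device name assumed throughout this exercise). If it was hot-added, a SCSI rescan may be needed first; host0 below is an assumption, adjust to the actual host number:
[root@localhost ~]# echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan only if the disk is not yet visible
[root@localhost ~]# lsblk -d -o NAME,SIZE,TYPE /dev/sdb              # should report an 80G disk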

2. Create three primary partitions of 20G each
[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x0fdbcf74.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-167772159, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-167772159, default 167772159): +20G
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2):
First sector (41945088-167772159, default 41945088):
Using default value 41945088
Last sector, +sectors or +size{K,M,G} (41945088-167772159, default 167772159): +20G
Partition 2 of type Linux and of size 20 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3):
First sector (83888128-167772159, default 83888128):
Using default value 83888128
Last sector, +sectors or +size{K,M,G} (83888128-167772159, default 167772159): +20G
Partition 3 of type Linux and of size 20 GiB is set

Command (m for help): w
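After writing the table, it is worth confirming that the kernel sees the three new partitions (partprobe comes from the parted package; on a disk that is not in use, fdisk's own re-read is normally enough):
[root@localhost ~]# partprobe /dev/sdb      # ask the kernel to re-read the partition table
[root@localhost ~]# lsblk /dev/sdb          # expect sdb1, sdb2 and sdb3, 20G each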

3. Convert the three primary partitions into physical volumes (pvcreate) and scan the system's physical volumes
[root@localhost ~]# pvcreate /dev/sdb[123]
Physical volume "/dev/sdb1" successfully created.
Physical volume "/dev/sdb2" successfully created.
Physical volume "/dev/sdb3" successfully created.
[root@localhost ~]# pvscan /dev/sdb[123]
Command does not accept argument: /dev/sdb1.
pvscan takes no device arguments and always scans every block device, so it is run on its own:
[root@localhost ~]# pvscan
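Individual PVs can also be inspected with pvs or pvdisplay, both of which do accept device arguments (output omitted here):
[root@localhost ~]# pvs /dev/sdb[123]       # one summary line per physical volume
[root@localhost ~]# pvdisplay /dev/sdb1     # detailed attributes of a single PV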

4. Create a volume group named myvg from two of the physical volumes and check the volume group size
[root@localhost ~]# vgcreate myvg /dev/sdb[12]
Volume group "myvg" successfully created
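The step also asks for the volume group size, which is not captured in the transcript; either of the following reports it (with two 20G PVs the total should come out just under 40 GiB):
[root@localhost ~]# vgs myvg                # compact summary: VG size, free space, PV/LV counts
[root@localhost ~]# vgdisplay myvg          # detailed view, including PE size and total/free PE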

5. Create a logical volume mylv with a size of 30G
[root@localhost ~]# lvcreate -L 30G -n mylv myvg
Logical volume "mylv" created.
The new LV has no filesystem yet (xfs_growfs only applies to a mounted xfs filesystem and belongs in step 7), so format it before mounting:
[root@localhost ~]# mkfs.xfs /dev/myvg/mylv
[root@localhost ~]# mount /dev/myvg/mylv /a
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 37G 4.9G 33G 13% /
devtmpfs devtmpfs 977M 0 977M 0% /dev
tmpfs tmpfs 993M 0 993M 0% /dev/shm
tmpfs tmpfs 993M 9.0M 984M 1% /run
tmpfs tmpfs 993M 0 993M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 161M 854M 16% /boot
tmpfs tmpfs 199M 4.0K 199M 1% /run/user/42
tmpfs tmpfs 199M 20K 199M 1% /run/user/0
/dev/sr0 iso9660 8.1G 8.1G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 30G 33M 30G 1% /a
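lvs and vgs confirm the new volume and how much space remains in the group (a 30G LV out of roughly 40G leaves about 10G free):
[root@localhost ~]# lvs myvg                # mylv should be listed at 30.00g
[root@localhost ~]# vgs myvg                # VFree should be roughly 10g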

6. Format the logical volume with an xfs filesystem, mount it on /data, and create a file to test
[root@localhost ~]# mkdir /data
The filesystem cannot be recreated while the LV is mounted, and piping mkfs into mount does not chain the two commands, so they are run separately (mkfs.xfs needs -f to overwrite the filesystem created in step 5):
[root@localhost ~]# umount /a
[root@localhost ~]# mkfs -t xfs -f /dev/myvg/mylv
[root@localhost ~]# mount /dev/myvg/mylv /data
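For the "create a file to test" part of the step, any small write will do (the file name here is arbitrary):
[root@localhost ~]# touch /data/test.txt
[root@localhost ~]# df -Th /data            # the LV should now show as xfs mounted on /data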

7. Grow the logical volume to 35G
[root@localhost ~]# lvextend -L +5G /dev/myvg/mylv
Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).
Logical volume myvg/mylv successfully resized.
[root@localhost ~]# xfs_growfs /dev/myvg/mylv
meta-data=/dev/mapper/myvg-mylv  isize=512    agcount=4, agsize=1966080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=7864320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=3840, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 7864320 to 9175040
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 37G 4.9G 33G 13% /
devtmpfs devtmpfs 977M 0 977M 0% /dev
tmpfs tmpfs 993M 0 993M 0% /dev/shm
tmpfs tmpfs 993M 9.0M 984M 1% /run
tmpfs tmpfs 993M 0 993M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 161M 854M 16% /boot
tmpfs tmpfs 199M 4.0K 199M 1% /run/user/42
tmpfs tmpfs 199M 20K 199M 1% /run/user/0
/dev/sr0 iso9660 8.1G 8.1G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 35G 33M 35G 1% /data
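As an aside, lvextend can grow the filesystem in the same operation through its -r (--resizefs) option, which calls fsadm and therefore works for both xfs and ext4; the two commands above collapse into one:
[root@localhost ~]# lvextend -r -L +5G /dev/myvg/mylv   # resize the LV and its filesystem together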

8. Edit /etc/fstab to mount the logical volume, with disk quota options enabled
[root@localhost ~]# vim /etc/fstab
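The exact line added to /etc/fstab is not captured in the transcript; for the xfs logical volume it would look roughly like the following (xfs also accepts uquota/gquota as spellings of the quota options):
/dev/myvg/mylv /data xfs defaults,usrquota,grpquota 0 0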
[root@localhost ~]# mount -a
[root@localhost ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 38770180 5161360 33608820 14% /
devtmpfs 1000024 0 1000024 0% /dev
tmpfs 1015944 0 1015944 0% /dev/shm
tmpfs 1015944 9128 1006816 1% /run
tmpfs 1015944 0 1015944 0% /sys/fs/cgroup
/dev/sda1 1038336 164008 874328 16% /boot
tmpfs 203192 32 203160 1% /run/user/0
/dev/sr0 8490330 8490330 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv 36684800 32984 36651816 1% /data

9. Set up disk quotas: for user crushlinux under /data, a soft limit of 80 MB and a hard limit of 100 MB on file size,
and a soft limit of 80 and a hard limit of 100 on the number of files.
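Note that the walkthrough below demonstrates the classic quotacheck/edquota workflow on a fresh ext4 filesystem (/dev/sdb3 mounted at /data1) rather than on the xfs volume at /data. On xfs itself, quotas are enabled through the usrquota/grpquota mount options and managed with xfs_quota; a minimal sketch of the same limits on /data (assuming it was mounted with the quota options from step 8) would be:
[root@localhost ~]# xfs_quota -x -c 'limit bsoft=80m bhard=100m isoft=80 ihard=100 crushlinux' /data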

[root@localhost ~]# useradd crushlinux
[root@localhost ~]# tail -1 /etc/passwd
crushlinux:x:1001:1001::/home/crushlinux:/bin/bash
[root@localhost ~]# mkfs.ext4 /dev/sdb3
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mkdir /data1
[root@localhost ~]# mount /dev/sdb3 /data1
[root@localhost ~]# mount -o remount,usrquota,grpquota /data1
[root@localhost ~]# mount |grep /data1
/dev/sdb3 on /data1 type ext4 (rw,relatime,seclabel,quota,usrquota,grpquota,data=ordered)
[root@localhost ~]# grep /dev/sdb3 /etc/mtab
/dev/sdb3 /data1 ext4 rw,seclabel,relatime,quota,usrquota,grpquota,data=ordered 0 0
[root@localhost ~]# vim /etc/fstab
Append at the end of the file: /dev/sdb3 /data1 ext4 defaults,usrquota,grpquota 0 0
[root@localhost ~]# quotacheck -avug
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb3 [/data1] done
quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Checked 2 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
[root@localhost ~]# ll /data1/a*
-rw-------. 1 root root 6144 Aug  2 08:19 /data1/aquota.group
-rw-------. 1 root root 6144 Aug  2 08:19 /data1/aquota.user
[root@localhost ~]# quotaon -auvg
/dev/sdb3 [/data1]: group quotas turned on
/dev/sdb3 [/data1]: user quotas turned on
[root@localhost ~]# edquota -u crushlinux
Set the limits as follows (blocks are 1K units, so 81920 = 80 MB and 102400 = 100 MB):
Disk quotas for user crushlinux (uid 1001):
  Filesystem        blocks     soft     hard   inodes   soft   hard
  /dev/sdb3              0    81920   102400        0     80    100
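edquota opens the limits in an editor; the same limits can be set non-interactively with setquota, which is easier to script (block values in 1K units, followed by the inode limits):
[root@localhost ~]# setquota -u crushlinux 81920 102400 80 100 /data1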
10. Test with the touch and dd commands under /data1
[root@localhost ~]# chmod 777 /data1
[root@localhost ~]# su - crushlinux
Last login: Fri Aug  2 08:24:08 CST 2019 on pts/1
[crushlinux@localhost ~]$ dd if=/dev/zero of=/data1/ceshi bs=1M count=90
sdb3: warning, user block quota exceeded.
90+0 records in
90+0 records out
94371840 bytes (94 MB) copied, 0.0962798 s, 980 MB/s
[crushlinux@localhost ~]$ touch /data1/{1..90}.txt
sdb3: warning, user file quota exceeded.
[crushlinux@localhost ~]$ su - root
Password:
Last login: Fri Aug  2 08:24:25 CST 2019 on pts/1
[root@localhost ~]# edquota -u crushlinux
[root@localhost ~]# repquota -auvs
*** Report for user quotas on device /dev/sdb3
Block grace time: 7days; Inode grace time: 7days
                         Space limits               File limits
User             used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root       --     20K      0K      0K               2     0     0
crushlinux ++  92160K  81920K    100M  7days       91    80   100  7days

Statistics:
Total blocks: 7
Data blocks: 1
Entries: 2
Used average: 2.000000
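The 7-day grace periods shown in the report are the defaults; they can be changed per filesystem with edquota -t, which opens the block and inode grace times in an editor:
[root@localhost ~]# edquota -t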
11. Check quota usage: from the user's perspective
[root@localhost ~]# su - crushlinux
Last login: Fri Aug  2 08:24:52 CST 2019 on pts/1
[crushlinux@localhost ~]$ quota
Disk quotas for user crushlinux (uid 1001):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb3  92160*   81920  102400   6days     91*      80     100   6days
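Adding -s prints the same report with sizes in human-readable units:
[crushlinux@localhost ~]$ quota -s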
12. Check quota usage: from the filesystem's perspective
[crushlinux@localhost ~]$ su - root
Password:
Last login: Fri Aug  2 08:27:01 CST 2019 on pts/1
[root@localhost ~]# repquota /data1
*** Report for user quotas on device /dev/sdb3
Block grace time: 7days; Inode grace time: 7days
                         Block limits               File limits
User             used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root       --      20       0       0               2     0     0
crushlinux ++   92160   81920  102400  6days       91    80   100  6days

Source: https://www.cnblogs.com/4545945a/p/11287857.html