Testing Disk I/O Performance with fio

Introduction:

fio is an excellent tool for measuring IOPS, used to stress-test and validate hardware. It supports many different I/O engines, including: sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more.

Installing fio:

Official site: http://brick.kernel.dk/snaps/

yum install libaio-devel                 (build dependency for the libaio asynchronous I/O engine; install it before building so libaio support is compiled in)
wget http://brick.kernel.dk/snaps/fio-2.2.10.tar.gz (download the fio tarball from the official site)
tar -zxvf fio-2.2.10.tar.gz                 (unpack)
cd fio-2.2.10
make && make install                  (build and install)

fio supports random read, random write, sequential read, sequential write, and mixed random read/write modes.

fio parameters:

filename=/dev/sdb    #the file or device to test, usually the data directory or device you want to measure (multiple targets can be given, separated by colons, e.g. filename=/dev/sda:/dev/sdb). Note that writing directly to a raw block device destroys any data on it.

direct=1             #bypass the OS page cache (O_DIRECT) so results reflect the device itself rather than cached I/O

rw=randwrite         #random-write test

rw=randread          #random-read test

rw=randrw            #mixed random read/write test

bs=4k              #each I/O is a 4 KiB block

size=20G           #each job transfers 20G of data in total, issued in 4 KiB I/Os

numjobs=30         #run 30 concurrent jobs (fio forks processes by default; with the thread option these become threads)

runtime=600       #run for 600 seconds; if omitted, fio keeps going until the whole 20G has been transferred in 4 KiB I/Os

ioengine=psync      #use the psync I/O engine (fio now supports 19 ioengines. The default is sync, synchronous blocking I/O; libaio is Linux's native asynchronous I/O. On synchronous vs. asynchronous and blocking vs. non-blocking models, see "Boost application performance using asynchronous I/O":
http://www.ibm.com/developerworks/cn/linux/l-async/)

group_reporting     #report results aggregated over all jobs rather than per job

name          #name of the job; on the command line, each -name option starts a new job

iodepth         #with an asynchronous ioengine, the number of I/O units kept in flight per job (see "Understanding fio and I/O queue depth, and common misconceptions": http://blog.yufeng.info/archives/2104)

thread          #use POSIX threads instead of forked processes for the jobs
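
All of these options can also be collected into a job file and run as `fio mytest.fio`. A minimal sketch, assuming the same parameter values used in the command-line examples below (`/dev/sdb` and the job name `mytest` are placeholders):

```ini
; mytest.fio -- roughly equivalent to the 4 KiB random-read command line below
[global]
filename=/dev/sdb
direct=1
ioengine=libaio
iodepth=16
thread
bs=4k
size=20G
numjobs=30
runtime=600
group_reporting

[mytest]
rw=randread
```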

fio examples:

4 KiB random read:

fio -filename=/dev/sdb:/dev/sdc:/dev/sdd -direct=1 -iodepth=16 -thread -rw=randread -ioengine=libaio -bs=4k -size=20G -numjobs=30 -runtime=600 -group_reporting -name=mytest

4 KiB random write:

fio -filename=/dev/sdb:/dev/sdc:/dev/sdd -direct=1 -iodepth=16 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=20G -numjobs=30 -runtime=600 -group_reporting -name=mytest

1 MiB sequential read:

fio -filename=/dev/sdb:/dev/sdc:/dev/sdd -direct=1 -iodepth=16 -thread -rw=read -ioengine=libaio -bs=1M -size=20G -numjobs=30 -runtime=600 -group_reporting -name=mytest

1 MiB sequential write:

fio -filename=/dev/sdb:/dev/sdc:/dev/sdd -direct=1 -iodepth=16 -thread -rw=write -ioengine=libaio -bs=1M -size=20G -numjobs=30 -runtime=600 -group_reporting -name=mytest

Mixed random read/write:

fio -filename=/dev/sdb:/dev/sdc:/dev/sdd -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop

Sample runs:

Note: the key numbers to look at are the IOPS and bandwidth (BW) figures in each summary.

4 KiB random read:

[root@localhost ~]# fio -filename=/dev/sda:/dev/sdb:/dev/sdc -direct=1 -iodepth 1 -thread -rw=randread -ioengine=libaio -bs=4k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
mytest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-2.21
Starting 30 threads
Jobs: 30 (f=90): [r(30)][100.0%][r=11.2MiB/s,w=0KiB/s][r=2875,w=0 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=26843: Thu Jun 22 09:47:06 2017
   read: IOPS=2756, BW=10.8MiB/s (11.3MB/s)(1077MiB/100033msec)
    slat (usec): min=6, max=1137, avg=21.55, stdev= 9.74
    clat (usec): min=304, max=675395, avg=10853.67, stdev=12909.46
     lat (usec): min=334, max=675425, avg=10875.94, stdev=12909.57
    clat percentiles (usec):
     |  1.00th=[ 1272],  5.00th=[ 2480], 10.00th=[ 3536], 20.00th=[ 5216],
     | 30.00th=[ 6560], 40.00th=[ 7840], 50.00th=[ 9152], 60.00th=[10560],
     | 70.00th=[12224], 80.00th=[14144], 90.00th=[17280], 95.00th=[21632],
     | 99.00th=[46336], 99.50th=[66048], 99.90th=[136192], 99.95th=[179200],
     | 99.99th=[651264]
   bw (  KiB/s): min=   16, max=  553, per=0.00%, avg=369.92, stdev=100.81
    lat (usec) : 500=0.11%, 750=0.12%, 1000=0.27%
    lat (msec) : 2=2.59%, 4=9.55%, 10=43.26%, 20=37.74%, 50=5.51%
    lat (msec) : 100=0.63%, 250=0.20%, 500=0.01%, 750=0.01%
  cpu          : usr=0.09%, sys=0.32%, ctx=275751, majf=0, minf=69
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=275725,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=1077MiB (1129MB), run=100033-100033msec

Disk stats (read/write):
  sda: ios=91772/0, merge=0/0, ticks=1014292/0, in_queue=1014284, util=92.52%
  sdb: ios=91762/0, merge=0/0, ticks=923274/0, in_queue=923228, util=84.34%
  sdc: ios=91745/0, merge=0/0, ticks=1042310/0, in_queue=1043251, util=91.21%
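
The summary above is internally consistent, and it is worth knowing how the headline numbers relate: total I/O = issued I/Os × block size, and IOPS = issued I/Os ÷ runtime. A quick check in Python using the figures from this run:

```python
# Figures copied from the 4 KiB random-read report above.
issued_ios = 275725      # "issued rwt: total=275725,0,0"
block_size = 4096        # bs=4k
runtime_s  = 100.033     # run=100033msec

iops      = issued_ios / runtime_s              # I/Os per second
bw_mib_s  = iops * block_size / 2**20           # bandwidth in MiB/s (fio reports binary units)
total_mib = issued_ios * block_size / 2**20     # total data read, in MiB

print(round(iops))          # ≈ 2756, matching "read: IOPS=2756"
print(round(bw_mib_s, 1))   # ≈ 10.8, matching "BW=10.8MiB/s"
print(round(total_mib))     # ≈ 1077, matching "io=1077MiB"
```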

Mixed random read/write:

[root@localhost ~]# fio -filename=/dev/sda:/dev/sdb:/dev/sdc -direct=1 -iodepth 1 -thread -rw=randrw -ioengine=libaio -bs=4k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
mytest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-2.21
Starting 30 threads
Jobs: 30 (f=90): [m(30)][100.0%][r=404KiB/s,w=464KiB/s][r=101,w=116 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=26892: Thu Jun 22 09:57:34 2017
   read: IOPS=86, BW=345KiB/s (353kB/s)(33.7MiB/100117msec)
    slat (usec): min=9, max=168, avg=30.70, stdev=11.21
    clat (usec): min=464, max=764655, avg=50031.54, stdev=79240.40
     lat (usec): min=486, max=764705, avg=50063.31, stdev=79240.09
    clat percentiles (usec):
     |  1.00th=[  756],  5.00th=[ 1272], 10.00th=[ 1688], 20.00th=[ 2672],
     | 30.00th=[ 5280], 40.00th=[ 8512], 50.00th=[10944], 60.00th=[23424],
     | 70.00th=[45824], 80.00th=[86528], 90.00th=[154624], 95.00th=[216064],
     | 99.00th=[350208], 99.50th=[415744], 99.90th=[577536], 99.95th=[651264],
     | 99.99th=[765952]
   bw (  KiB/s): min=    7, max=   81, per=0.01%, avg=18.68, stdev=12.27
  write: IOPS=85, BW=344KiB/s (352kB/s)(33.6MiB/100117msec)
    slat (usec): min=10, max=914, avg=34.27, stdev=15.34
    clat (msec): min=3, max=1779, avg=297.95, stdev=232.05
     lat (msec): min=3, max=1779, avg=297.99, stdev=232.05
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   27], 10.00th=[   56], 20.00th=[  109],
     | 30.00th=[  153], 40.00th=[  194], 50.00th=[  239], 60.00th=[  297],
     | 70.00th=[  367], 80.00th=[  461], 90.00th=[  611], 95.00th=[  758],
     | 99.00th=[ 1045], 99.50th=[ 1221], 99.90th=[ 1434], 99.95th=[ 1516],
     | 99.99th=[ 1778]
   bw (  KiB/s): min=    7, max=   48, per=0.00%, avg=13.50, stdev= 6.61
    lat (usec) : 500=0.02%, 750=0.48%, 1000=0.67%
    lat (msec) : 2=5.84%, 4=5.92%, 10=10.74%, 20=7.24%, 50=9.34%
    lat (msec) : 100=10.07%, 250=23.98%, 500=17.14%, 750=5.98%, 1000=1.94%
    lat (msec) : 2000=0.64%
  cpu          : usr=0.01%, sys=0.03%, ctx=17375, majf=0, minf=39
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=8637,8610,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=345KiB/s (353kB/s), 345KiB/s-345KiB/s (353kB/s-353kB/s), io=33.7MiB (35.4MB), run=100117-100117msec
  WRITE: bw=344KiB/s (352kB/s), 344KiB/s-344KiB/s (352kB/s-352kB/s), io=33.6MiB (35.3MB), run=100117-100117msec

Disk stats (read/write):
  sda: ios=2965/2885, merge=0/0, ticks=155272/883659, in_queue=1040854, util=100.00%
  sdb: ios=3068/2826, merge=0/0, ticks=131988/846676, in_queue=981819, util=99.66%
  sdc: ios=3091/2870, merge=0/0, ticks=150587/826686, in_queue=984877, util=99.90%

Disk performance metrics:

Sequential read/write (throughput, usually reported in MB/s): the data sits at consecutive locations on the disk.

Typical workloads: copying large files (e.g. video or music). Even a very high sequential figure tells you little about database performance.

4 KiB random read/write (IOPS, I/O operations per second): data is read/written at random locations on the disk, 4 KiB at a time.

Typical workloads: running an operating system, applications, and databases.
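
The two metrics are related through the block size: throughput = IOPS × block size. That is why a disk with impressive sequential throughput can still deliver only a few MB/s under 4 KiB random I/O. A small illustration (the 2756 IOPS figure is from the random-read run above; the 200 IOPS sequential figure is just an assumed example, not a measurement):

```python
def throughput_mb_s(iops: float, block_size_bytes: int) -> float:
    """Throughput in MB/s (decimal) implied by an IOPS figure at a given block size."""
    return iops * block_size_bytes / 1e6

# 4 KiB random reads at 2756 IOPS (measured above):
print(round(throughput_mb_s(2756, 4096), 1))    # 11.3 MB/s, matching the report's "11.3MB/s"
# 1 MiB sequential reads at an assumed 200 IOPS:
print(round(throughput_mb_s(200, 2**20), 1))    # 209.7 MB/s
```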

Original article (in Chinese): https://www.cnblogs.com/zhangjianghua/p/7062174.html