Purpose:

To get better disk performance from an Azure VM, the usual approach is to stripe several virtual disks together into a RAID array.

Because the durability of every virtual disk is already guaranteed by multiple replicas on the Azure storage backend, there is no need to worry about an individual disk failing, so RAID0 is naturally the fastest and cheapest option.

For database workloads we usually pick a somewhat smaller chunk size when creating the RAID, hoping for better IOPS.

Let's run a test inside a VM and see how that actually plays out.

Environment:

The VM is an A2 Standard VM (2 cores, 3.5 GB RAM) in the Azure China North region, with 4 data disks attached and host caching disabled on all of them.

The RAID0 array is built with mdadm; to compare a 64 KB chunk size against 512 KB, I recreated the array twice in the same environment. For easier comparison, the matching result items from the two runs are laid out together below.
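Before building the array it is worth confirming which block devices are the four data disks. The /dev/sdc through /dev/sdf names come from the mdadm commands below; on an Azure Linux VM, /dev/sda and /dev/sdb are normally the OS and temporary disks. A quick check (just a sketch; device names can differ between images):

# lsblk -d -o NAME,SIZE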

# mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf -c64

# mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf -c512
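The steps between creating the array and running the benchmarks are not shown here; as a minimal sketch, assuming an ext4 filesystem and the /RAID0 mount point that the bonnie command below uses, the array can be verified, formatted, and mounted like this:

# cat /proc/mdstat
# mdadm --detail /dev/md0 | grep -i chunk
# mkfs.ext4 /dev/md0
# mkdir -p /RAID0
# mount /dev/md0 /RAID0

Checking mdadm --detail before each run is a quick way to confirm which chunk size the current array was actually built with.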

bonnie++ test

512k

bonnie -d /RAID0 -s 7000M -m 64k
Version 1.97       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Char- --Block-- -Rewrite- -Per Char- --Block-- --Seeks--
Machine       Size K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
512k         7000M 70698417426568393661605981219455250.35
Latency            16705us    78957us    779ms      25380us    1597ms     1363ms
                   ------Sequential Create------ --------Random Create--------
                   --Create-- --Read--- --Delete-- --Create-- --Read--- --Delete--
             files  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU
                16  1023937    +++++ +++  2506477965633         +++++ +++  2217572
Latency            18409us     5790us     5922us     6011us     5774us     6109us

64k

Version 1.97       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Char- --Block-- -Rewrite- -Per Char- --Block-- --Seeks--
Machine       Size K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
64k          7000M 6508840225052512643164198772353258.05
Latency            18599us    219ms      6120ms     22135us    906ms      699ms
                   ------Sequential Create------ --------Random Create--------
                   --Create-- --Read--- --Delete-- --Create-- --Read--- --Delete--
             files  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU
                16  2297381    +++++ +++  25501752158176        +++++ +++  2607086
Latency            23675us     5768us     29869us    5894us     5737us     5765us

fio test

fio -filename=/path/test -iodepth=64 -ioengine=libaio -direct=1 -rw=randwrite -bs=4k -size=30G -numjobs=64 -runtime=30 -group_reporting -name=test-randwrite
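The same workload can also be kept as an fio job file for repeat runs; the sketch below is just the one-liner above rewritten in job-file form (the file name test-randwrite.fio is my own choice):

# cat > test-randwrite.fio <<'EOF'
[test-randwrite]
filename=/path/test
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=30G
iodepth=64
numjobs=64
runtime=30
group_reporting
EOF
# fio test-randwrite.fio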

512k

test-randwrite: (groupid=0, jobs=64): err= 0: pid=24736: Mon Aug 8 08:09:40 2016

write: io=252184KB, bw=8191.8KB/s, iops=2047, runt= 30785msec

slat (usec): min=3, max=2300.8K, avg=30428.38, stdev=176897.42

clat (msec): min=4, max=8584, avg=1884.24, stdev=1084.48

lat (msec): min=4, max=8965, avg=1914.67, stdev=1109.23

clat percentiles (msec):

| 1.00th=[ 28], 5.00th=[ 186], 10.00th=[ 519], 20.00th=[ 971],

| 30.00th=[ 1319], 40.00th=[ 1549], 50.00th=[ 1696], 60.00th=[ 1991],

| 70.00th=[ 2376], 80.00th=[ 2737], 90.00th=[ 3261], 95.00th=[ 3851],

| 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 6587], 99.95th=[ 6915],

| 99.99th=[ 7570]

bw (KB /s): min= 0, max= 2374, per=1.74%, avg=142.74, stdev=151.11

lat (msec) : 10=0.30%, 20=0.59%, 50=1.49%, 100=2.03%, 250=1.74%

lat (msec) : 500=3.41%, 750=4.58%, 1000=6.18%, 2000=39.93%, >=2000=39.75%

cpu : usr=0.01%, sys=0.04%, ctx=11643, majf=0, minf=523

IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6%

submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

issued : total=r=0/w=63046/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):

WRITE: io=252184KB, aggrb=8191KB/s, minb=8191KB/s, maxb=8191KB/s, mint=30785msec, maxt=30785msec

Disk stats (read/write):

md0: ios=0/63076, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/15769, aggrmerge=0/0, aggrticks=0/2954701, aggrin_queue=2954698, aggrutil=99.57%

sdc: ios=0/15647, merge=0/0, ticks=0/1465584, in_queue=1465580, util=80.09%

sdd: ios=0/15888, merge=0/0, ticks=0/3313772, in_queue=3313768, util=95.95%

sde: ios=0/15658, merge=0/0, ticks=0/2076856, in_queue=2076856, util=89.20%

sdf: ios=0/15883, merge=0/0, ticks=0/4962592, in_queue=4962588, util=99.57%

64k

test-randwrite: (groupid=0, jobs=64): err= 0: pid=1847: Tue Aug 9 01:10:12 2016

write: io=220980KB, bw=7162.6KB/s, iops=1790, runt= 30852msec

slat (usec): min=3, max=3966.5K, avg=34835.41, stdev=217251.26

clat (msec): min=4, max=7624, avg=1912.11, stdev=1002.30

lat (msec): min=4, max=8186, avg=1946.94, stdev=1030.03

clat percentiles (msec):

| 1.00th=[ 25], 5.00th=[ 221], 10.00th=[ 510], 20.00th=[ 1139],

| 30.00th=[ 1369], 40.00th=[ 1565], 50.00th=[ 1811], 60.00th=[ 2114],

| 70.00th=[ 2507], 80.00th=[ 2802], 90.00th=[ 3163], 95.00th=[ 3556],

| 99.00th=[ 4293], 99.50th=[ 4883], 99.90th=[ 5669], 99.95th=[ 6915],

| 99.99th=[ 7635]

bw (KB /s): min= 0, max= 1274, per=1.80%, avg=129.13, stdev=118.40

lat (msec) : 10=0.57%, 20=0.37%, 50=0.44%, 100=1.02%, 250=2.99%

lat (msec) : 500=4.32%, 750=3.39%, 1000=3.49%, 2000=39.40%, >=2000=44.01%

cpu : usr=0.01%, sys=0.03%, ctx=5226, majf=0, minf=526

IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7%

submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

issued : total=r=0/w=55245/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):

WRITE: io=220980KB, aggrb=7162KB/s, minb=7162KB/s, maxb=7162KB/s, mint=30852msec, maxt=30852msec

Disk stats (read/write):

md127: ios=608/55246, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=152/13811, aggrmerge=0/0, aggrticks=1066/2708033, aggrin_queue=2709091, aggrutil=90.01%

sdc: ios=156/13896, merge=0/0, ticks=1344/4047784, in_queue=4049116, util=90.01%

sdd: ios=144/13867, merge=0/0, ticks=716/3820396, in_queue=3821104, util=88.35%

sde: ios=152/13668, merge=0/0, ticks=1072/690580, in_queue=691652, util=85.05%

sdf: ios=156/13815, merge=0/0, ticks=1132/2273372, in_queue=2274492, util=89.47%

Conclusion

On an Azure VM, a 512 KB chunk size gives better I/O performance; in the fio random-write test above it reached 2047 IOPS versus 1790 IOPS with the 64 KB chunk. Update: I later found that the official MySQL tuning article for Azure mentions the same thing, that a 512 KB chunk size does indeed perform better:

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-optimize-mysql/#AppendixC