So I spun up one of these instances to test for myself. My steps were only a little different (roughly the commands sketched below the list):
- Partition the disk first using `parted`
- Make the filesystem
- Mount at `/opt`, as `/home` was already there and had my user's home directory in it (`ubuntu`)
- `apt update && apt upgrade`, then install fio
- Run the same command as you: `fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing` from within `/opt`, with `sudo`.
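For reference, those setup steps were roughly the following. This is a sketch rather than a transcript of my shell history: I'm assuming the extra volume appears as /dev/nvme1n1 (the device shown in the disk stats further down) and that the filesystem was ext4.

```
# Rough sketch of the setup steps above (device name and filesystem assumed).
sudo parted /dev/nvme1n1 --script mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/nvme1n1p1
sudo mount /dev/nvme1n1p1 /opt
sudo apt update && sudo apt upgrade -y
sudo apt install -y fio
```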
I got similar results, with `read: IOPS=7147`.
I then ran another test:
```
/opt$ sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
fiotest: Laying out IO file (1 file / 8192MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=332MiB/s,w=109MiB/s][r=85.1k,w=28.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=26470: Mon Jan 31 09:14:45 2022
read: IOPS=91.5k, BW=357MiB/s (375MB/s)(6141MiB/17187msec)
bw ( KiB/s): min=339568, max=509896, per=100.00%, avg=366195.29, stdev=59791.96, samples=34
iops : min=84892, max=127474, avg=91548.82, stdev=14947.99, samples=34
write: IOPS=30.5k, BW=119MiB/s (125MB/s)(2051MiB/17187msec); 0 zone resets
bw ( KiB/s): min=111264, max=170424, per=100.00%, avg=122280.71, stdev=20225.33, samples=34
iops : min=27816, max=42606, avg=30570.18, stdev=5056.32, samples=34
cpu : usr=19.73%, sys=41.60%, ctx=742611, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=1572145,525007,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=357MiB/s (375MB/s), 357MiB/s-357MiB/s (375MB/s-375MB/s), io=6141MiB (6440MB), run=17187-17187msec
WRITE: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=2051MiB (2150MB), run=17187-17187msec
Disk stats (read/write):
nvme1n1: ios=1563986/522310, merge=0/0, ticks=927244/24031, in_queue=951275, util=99.46%
```
...which looks a lot better: `read: IOPS=91.5k`.
I suspect it's due to how the read-only test works, or maybe some nuance or limitation of reading from the disk you're running on?
I ran my test a couple more times and got similar results each time.
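One way to check that suspicion, purely as a sketch on my part (I didn't run this as part of the tests above), would be to keep your original job and only sweep the queue depth; with libaio and iodepth=1 there is only ever one I/O in flight, so IOPS is capped at roughly 1 / per-I/O latency:

```
# Hypothetical iodepth sweep of the original randread job; short, time-based
# runs just to compare IOPS at each queue depth.
for depth in 1 4 16 64; do
  sudo fio --direct=1 --iodepth="$depth" --rw=randread --ioengine=libaio \
    --bs=4k --size=1G --numjobs=1 --runtime=60 --time_based \
    --group_reporting --filename=iotest --name="randread_qd${depth}"
done
```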
I then ran another read-only test using the command from here, and got this:
```
/opt$ sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randread
fiotest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=332MiB/s][r=85.1k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=26503: Mon Jan 31 09:17:57 2022
read: IOPS=88.6k, BW=346MiB/s (363MB/s)(8192MiB/23663msec)
bw ( KiB/s): min=339560, max=787720, per=100.00%, avg=354565.45, stdev=72963.81, samples=47
iops : min=84890, max=196930, avg=88641.40, stdev=18240.94, samples=47
cpu : usr=15.37%, sys=31.05%, ctx=844523, majf=0, minf=72
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=2097152,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=346MiB/s (363MB/s), 346MiB/s-346MiB/s (363MB/s-363MB/s), io=8192MiB (8590MB), run=23663-23663msec
Disk stats (read/write):
nvme1n1: ios=2095751/1, merge=0/0, ticks=1468160/0, in_queue=1468159, util=99.64%
```
So much better read performance. I suspect the arguments you gave your command are not allowing the test to get the best performance out of the disk, maybe due to the block size, file size, etc. I did notice they were all single-dashed arguments (e.g. `-bs=4k`) rather than double-dashed (`--bs=4k`), so they might not even be getting parsed correctly...
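If you want to rule the dash style out, the simplest check I can think of (again just a sketch, I haven't re-run your exact job) is to repeat your original command with double-dash options and see whether the numbers change:

```
# Hypothetical re-run of the original job, using double-dash syntax only,
# to check whether the single-dash arguments were being parsed as intended.
sudo fio --direct=1 --iodepth=1 --rw=randread --ioengine=libaio --bs=4k \
  --size=1G --numjobs=1 --runtime=1000 --group_reporting \
  --filename=iotest --name=Rand_Read_Testing
```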