I am running Ubuntu 20.04 LTS (kernel 5.8.0-55) on my home server: 32 GB RAM, a 4-core Intel Core i5-3450, and the filesystem on an md software RAID1 of two SATA hard disks. It has been set up like this for years and was always fine, but for some time now the I/O performance/latency has been getting worse and worse ("some time" at least in my perception; I am fairly sure it has not always been this bad).
A simple dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
gives me ">75 wa" in top and pushes my load average from "0.something" to 4 or 5, i.e. even a single dd already queues up and becomes a bottleneck for its duration.
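To quantify what is happening during such a run, I have been watching per-device statistics with iostat from the sysstat package (assuming it is installed); the w_await and %util columns are what I mean by the disks saturating:

```shell
# Extended per-device stats, refreshed every second (sysstat package).
# w_await = average write latency in ms, %util = device saturation.
iostat -x 1 /dev/sdd /dev/sde /dev/md0
```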
I recently had a big multi-hour compile job running where I had to constrain the compiling process with cgroups to 1% CPU usage, because when it ran unconstrained (i.e. CPU load near 100%) it basically brought my system to a screeching halt with a system load of > 250, because the I/O wait numbers IMMEDIATELY went up to 90+ for all cores in top. As soon as something needs I/O for more than a few seconds, it seems to become a heavy burden for my system.
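For reference, the throttling I applied looked roughly like this (a sketch using the cgroup v2 cpu.max interface; the group name "buildjob" is made up for illustration, and on 20.04 the unified hierarchy may need to be enabled first, or the equivalent v1 cpu controller knobs used instead):

```shell
# Create a cgroup (v2) and cap it at 1% CPU: 1000us quota per 100000us period.
sudo mkdir /sys/fs/cgroup/buildjob
echo "1000 100000" | sudo tee /sys/fs/cgroup/buildjob/cpu.max
# Move the current shell into the group, then start the compile job from it.
echo $$ | sudo tee /sys/fs/cgroup/buildjob/cgroup.procs
```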
The disks are far from high-performance models, yet the perceived I/O performance is definitely sub-par even for them; similar posts about comparable home systems report 2-3x my write speeds.
$ sudo hdparm -I /dev/sdd
/dev/sdd:
ATA device, with non-removable media
Model Number: TOSHIBA MQ01ABD100
Serial Number: 23CVTYHET
Firmware Revision: AX001U
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
$ sudo hdparm -I /dev/sde
/dev/sde:
ATA device, with non-removable media
Model Number: ST1000LM024 HN-M101MBB
Serial Number: S2ZWJ9KG902786
Firmware Revision: 2BA30001
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
I know these are far from the fastest disks available these days, but they were always OK-ish over the past years.
$ sudo hdparm -t /dev/sdd
/dev/sdd:
Timing buffered disk reads: 176 MB in 3.01 seconds = 58.47 MB/sec
$ sudo hdparm -t /dev/sde
/dev/sde:
Timing buffered disk reads: 266 MB in 3.02 seconds = 88.18 MB/sec
$ sudo hdparm -T /dev/sde
/dev/sde:
Timing cached reads: 18882 MB in 1.98 seconds = 9543.70 MB/sec
$ sudo hdparm -T /dev/sdd
/dev/sdd:
Timing cached reads: 18484 MB in 1.98 seconds = 9340.48 MB/sec
$ sudo hdparm -W /dev/sdd
/dev/sdd:
write-caching = 1 (on)
$ sudo hdparm -W /dev/sde
/dev/sde:
write-caching = 1 (on)
/tmp is mounted on the RAID (/dev/md0)
$ dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 27.6059 s, 38.9 MB/s
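In case dd with a single huge 1 GiB block is a misleading benchmark, this is the fio run I could do for comparison (a sketch, assuming the fio package is installed; the numbers above are all from dd):

```shell
# Sequential write to the RAID-backed /tmp, 1 MiB blocks, fsync after each
# write (roughly comparable to dd with oflag=dsync); deletes the file afterwards.
fio --name=seqwrite --filename=/tmp/fio-test.img --rw=write \
    --bs=1M --size=1G --ioengine=psync --fsync=1 --unlink=1
```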
$ cat /sys/block/sde/queue/scheduler
none [mq-deadline]
$ cat /sys/block/sdd/queue/scheduler
none [mq-deadline]
$ cat /sys/block/md0/queue/scheduler
none
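If the scheduler were a suspect, I understand it can be switched at runtime for a quick A/B test, e.g. none instead of mq-deadline on one member (this is just what I could try; the setting reverts on reboot):

```shell
# Temporarily switch sdd to the "none" scheduler and confirm the change.
echo none | sudo tee /sys/block/sdd/queue/scheduler
cat /sys/block/sdd/queue/scheduler
```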
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 25 17:40:19 2016
Raid Level : raid1
Array Size : 952015872 (907.91 GiB 974.86 GB)
Used Dev Size : 952015872 (907.91 GiB 974.86 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jan 29 13:44:00 2022
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : bigigloo:0 (local to host xxx)
UUID : af846648:6181b04f:d98b2908:602142da
Events : 336196
Number Major Minor RaidDevice State
0 8 65 0 active sync /dev/sde1
1 8 49 1 active sync /dev/sdd1
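One more thing I can run and post if helpful is a SMART health query on both mirror members (assuming smartmontools is installed), since a single dying disk could drag down the whole RAID1:

```shell
# Overall health self-assessment plus vendor attributes such as
# Reallocated_Sector_Ct, which can hint at a failing disk.
sudo smartctl -H -A /dev/sdd
sudo smartctl -H -A /dev/sde
```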
Is there anything I can check that might be constraining my I/O performance, or is it really just bad hardware, i.e. old disks that are simply no longer good enough to keep up with today's demands? If I have forgotten any details about the system relevant for this post, let me know and I will happily add them.