- I'm trying out Proxmox 7.3-3 as a hypervisor and managed to install it using ZFS on my NVMe drive (Samsung_SSD_970_EVO_Plus_2TB_S6S2NS0T505403V)
- I also installed an Ubuntu 22.04 VM; its filesystem is ext4
- Next, I installed MySQL server (8.0.32)
- I normally don't tune MySQL; its performance is acceptable as-is
- However, it took 3 minutes to ingest a 20MB uncompressed SQL file using
mysql < ...
- This is much slower than when the same Ubuntu guest ran under VirtualBox on Windows 10, where a 20 MB ingest would usually take less than 30 seconds
Any idea what I might be missing here?
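In case it helps with diagnosis, this is roughly how the flush activity during the import could be traced (a sketch; attaching to the running `mysqld` and summarizing with `strace -c` assumes a typical Linux setup with strace installed):

```shell
# Find the MySQL server process inside the guest:
MYSQLD_PID=$(pidof mysqld)

# Summarize fsync/fdatasync syscalls while the import runs in another
# terminal (e.g. `mysql test < dump.sql`; file and schema names are
# placeholders). Press Ctrl-C after the import to print the summary.
sudo strace -f -c -e trace=fsync,fdatasync -p "$MYSQLD_PID"
```

A high fsync count relative to the number of statements would point at per-statement synchronous flushes rather than raw write throughput.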
UPDATE 1
Running `iostat` from the guest, I'm seeing `%util` at 100% (flagged in red); why would it be 100?
```
Device  r/s rkB/s rrqm/s %rrqm r_await rareq-sz     w/s   wkB/s wrqm/s %wrqm w_await wareq-sz  d/s dkB/s drqm/s %drqm d_await dareq-sz    f/s f_await aqu-sz  %util
dm-0   0.00  0.00   0.00  0.00    0.00     0.00 1448.00 6792.00   0.00  0.00    0.67     4.69 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.98 100.00
loop0  0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
loop1  0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
loop2  0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sda    0.00  0.00   0.00  0.00    0.00     0.00 1075.00 6792.00 373.00 25.76    0.93     6.32 0.00  0.00   0.00  0.00    0.00     0.00 492.00    1.91   1.94 100.00
sdb    0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdc    0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sr0    0.00  0.00   0.00  0.00    0.00     0.00    0.00    0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
```
`iostat` from the Proxmox host also shows 100%:
```
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.44    0.00    0.75    5.77    0.00   92.04

Device    r/s rkB/s rrqm/s %rrqm r_await rareq-sz    w/s    wkB/s wrqm/s %wrqm w_await wareq-sz  d/s dkB/s drqm/s %drqm d_await dareq-sz    f/s f_await aqu-sz  %util
nvme0n1  0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
nvme1n1 484.00 0.00   0.00  0.00    1.77     0.00 814.00 20328.00   0.00  0.00    0.02    24.97 0.00  0.00   0.00  0.00    0.00     0.00 484.00    1.77   1.74 100.00
sda      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdb      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdc      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdd      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sde      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdf      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
sdg      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
zd0      0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
zd16     0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
zd32     0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
zd48     0.00  0.00   0.00  0.00    0.00     0.00 737.00  4916.00   0.00  0.00    0.00     6.67 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00 100.00
zd64     0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
zd80     0.00  0.00   0.00  0.00    0.00     0.00   0.00     0.00   0.00  0.00    0.00     0.00 0.00  0.00   0.00  0.00    0.00     0.00   0.00    0.00   0.00   0.00
```
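For what it's worth, `f/s` in `iostat -x` is flush (cache-flush) requests completed per second, and the outputs above show ~492/s on the guest's `sda` and ~484/s on the host's `nvme1n1` during the import. That looks like one log flush per autocommitted statement. A hedged way to test that theory is to replay the dump inside a single transaction (`dump.sql` and the `test` schema are placeholders for my actual file and database):

```shell
# Wrap the whole dump in one transaction so mysqld flushes its redo log once
# at COMMIT instead of once per autocommitted INSERT statement.
# 'dump.sql' and 'test' are hypothetical placeholder names.
( echo "SET autocommit=0;"; cat dump.sql; echo "COMMIT;" ) | mysql test
```

If the single-transaction replay is dramatically faster, the bottleneck is flush latency rather than write bandwidth.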
Output of `zpool iostat -v -l 1` on the Proxmox host:
```
                                  capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool                              alloc  free  read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
--------------------------------- ----- ----- ----- ----- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
rpool                              208G 1.61T    18   495  76.0K  8.00M   98us   25us   98us   25us  394ns  374ns      -      -      -      -
  nvme-eui.0025385521403c96-part3  208G 1.61T    18   495  76.0K  8.00M   98us   25us   98us   25us  394ns  374ns      -      -      -      -
--------------------------------- ----- ----- ----- ----- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
```
UPDATE 2
```
root@pve:~# zfs get all | grep "sync\|logbias"
rpool                     logbias  latency   default
rpool                     sync     standard  local
rpool/ROOT                logbias  latency   default
rpool/ROOT                sync     standard  inherited from rpool
rpool/ROOT/pve-1          logbias  latency   default
rpool/ROOT/pve-1          sync     standard  inherited from rpool
rpool/data                logbias  latency   default
rpool/data                sync     standard  inherited from rpool
rpool/data/vm-100-disk-0  logbias  latency   default
rpool/data/vm-100-disk-0  sync     standard  inherited from rpool
rpool/data/vm-100-disk-1  logbias  latency   default
rpool/data/vm-100-disk-1  sync     standard  inherited from rpool
rpool/data/vm-101-disk-0  logbias  latency   default
rpool/data/vm-101-disk-0  sync     standard  inherited from rpool
rpool/data/vm-101-disk-1  logbias  latency   default
rpool/data/vm-101-disk-1  sync     standard  inherited from rpool
rpool/data/vm-102-disk-0  logbias  latency   default
rpool/data/vm-102-disk-0  sync     standard  inherited from rpool
rpool/data/vm-102-disk-1  logbias  latency   default
rpool/data/vm-102-disk-1  sync     standard  inherited from rpool
root@pve:~#
```
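Since every dataset inherits `sync=standard`, one way to check whether synchronous ZIL writes are the bottleneck is to disable sync temporarily on the zvol backing the guest and re-run the import. This is a diagnostic only (writes acknowledged before they are on disk can be lost on power failure), and `vm-100-disk-0` here stands in for whichever zvol actually backs the Ubuntu VM:

```shell
# Treat sync writes as async on the guest's zvol -- DIAGNOSTIC ONLY,
# data acknowledged but not yet on disk is lost on a crash/power failure.
zfs set sync=disabled rpool/data/vm-100-disk-0

# ... re-run `time mysql < dump.sql` inside the guest ...

# Revert to the inherited default (sync=standard):
zfs inherit sync rpool/data/vm-100-disk-0
```

If the import drops from minutes to seconds with `sync=disabled`, the slowdown is flush latency through the ZIL, not raw NVMe throughput.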