LVM Thin pool performance with NVMe

I have 24 Samsung PM1733 7.68 TB NVMe drives in a Gigabyte R282-Z94 server platform with 2x AMD EPYC 7702 (64 cores each). OS: Oracle Linux 8.6, kernel 5.4.17-2136.311.6.el8uek.x86_64.

I need this server for testing, so I have to use a thin pool in my environment (I need snapshots). I use LVM because I have experience with it; I also tried ZFS but could not get better results.

The main question: how do I get maximum performance out of LVM thin pools?

What I do:

vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4

-i 4 stripes across the four disks; -I 4 is the stripe size in KiB. I also tried 8, 16, 32... but in my setup I can't see a significant difference.
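To confirm the striping actually took effect, and to separate striping overhead from thin-pool overhead, it may help to inspect the segment layout and benchmark the plain striped LV before converting it to a pool. A minimal sketch, assuming the vg1/thin_pool_1 names from above:

```shell
# Show how the LV is laid out: segment type, number of stripes, stripe size.
lvs --segments -o lv_name,segtype,stripes,stripe_size vg1

# Optional baseline: run the same fio workload against the plain striped LV
# *before* lvconvert turns it into a thin pool. If the striped LV already
# tops out well below a single drive, the bottleneck is not thin provisioning.
```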

lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1

lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1

lvchange -Zn vg1/thin_pool_1 - this disables zeroing of newly provisioned chunks.

lvcreate -V 15000G --thin -n data vg1/thin_pool_1
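One pool parameter not set above is the thin-pool chunk size, which strongly affects random-write performance and can only be chosen when the pool is created. A sketch for checking what the steps above produced (names assume vg1/thin_pool_1; the 64k value is just an example, not a recommendation):

```shell
# Inspect the chunk size and the zeroing flag the pool ended up with.
lvs -o lv_name,chunk_size,zero vg1/thin_pool_1

# Chunk size cannot be changed afterwards; it is set at conversion time, e.g.:
#   lvconvert --type thin-pool --chunksize 64k --poolmetadata vg1/pool_meta vg1/thin_pool_1
```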

After that I generate load with fio:

fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1

I only get ~40k IOPS, while a single drive under the same load easily delivers 130k IOPS.
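One possible factor worth ruling out (an assumption, not a confirmed diagnosis): on a freshly created thin volume, every first write to an unprovisioned chunk forces an allocation in the pool metadata, so a randwrite test measures allocation cost rather than steady-state performance. A sketch to pre-provision the volume before re-running the benchmark:

```shell
# Sequentially write the whole thin volume once so all chunks get allocated.
fio --filename=/dev/mapper/vg1-data --rw=write --bs=1M --name=prefill \
    --iodepth=32 --direct=1

# Then repeat the randwrite run from above and compare the IOPS.
```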

I don't understand what I'm missing in the system configuration.

