
Is the PERC controller limited to a 1M stripe size?


I have a Dell PowerEdge server running Linux (CentOS 8) with a PERC H740P controller, and I am testing different strip/chunk sizes for a 10+2 RAID 6 disk array.

From what I understand after spending days reading docs on the internet, under Linux the stripe size (strip size × number of data disks) is exposed in "/sys/block/sdX/queue/optimal_io_size", and XFS relies on this value to set the correct alignment of the partition.
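As a quick sanity check, the expected value can be computed by hand (the numbers below are just the 128k × 10 case from this question, not anything probed from the controller):

```shell
# Expected optimal_io_size in bytes for a given strip size and
# data-disk count (illustrative values, not read from hardware).
strip_kib=128
data_disks=10
echo $(( strip_kib * 1024 * data_disks ))
# → 1310720 (i.e. 1280 KiB)
```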

With strip values ≤ 128k, optimal_io_size is exactly the strip/chunk size multiplied by the number of data disks involved, e.g. 128k × 10. I can also run mkfs.xfs -d su=128k,sw=10 successfully without any errors.
Using strip sizes from 256k to 1M, however, optimal_io_size stays fixed at 1M, while I was expecting e.g. 256k × 10 = 2.5M, and mkfs.xfs fails with an error such as "Specified data stripe width YYYY is not the same as the volume stripe width ZZZZ".
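The mismatch mkfs.xfs complains about can be shown with simple arithmetic (a sketch; the 1 MiB figure is the value optimal_io_size reported in my tests, not a documented limit):

```shell
# Compare the stripe width implied by the configured strip size against
# the 1 MiB value actually reported in optimal_io_size (all in bytes).
strip=$(( 256 * 1024 ))        # 256 KiB strip/chunk size
data_disks=10
expected=$(( strip * data_disks ))
reported=$(( 1024 * 1024 ))    # value seen in optimal_io_size
echo "expected=$expected reported=$reported"
# → expected=2621440 reported=1048576
```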

I don't know if this is by design (I have read that the optimal stripe size should be between 1M and 2M); if it is, I would like to understand how data are actually striped across the 12 disks.

Answering my own question: after some posts on a Dell forum, it turned out that the PERC controller caps the reported stripe size at 1MB when StripSize × StripeWidth is larger than 1MB.
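A minimal sketch of that behaviour (my own reading of the forum answer, not Dell documentation): the value exposed in optimal_io_size looks like the strip-size/width product capped at 1 MiB.

```shell
# Reported optimal_io_size as described: min(strip * data_disks, 1 MiB).
strip_kib=256
data_disks=10
product=$(( strip_kib * 1024 * data_disks ))
cap=$(( 1024 * 1024 ))
echo $(( product > cap ? cap : product ))
# → 1048576
```

If the probed geometry is wrong for your layout, mkfs.xfs still accepts an explicit su/sw pair, so the real strip size and width can be passed by hand instead of relying on optimal_io_size.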