ZFS codebase in use for this pool: zfs-linux (0.7.5-1ubuntu16.11)
Imagine a pool that grew in an unplanned way: each expansion added a new mirror built from physically larger disks, and a matching spare went in alongside it. New mirror, new spare. All drives are enterprise-grade SAS behind an HBA. Mirror-0 is smaller than mirror-1, and mirror-1 is smaller than mirror-2; each mirror has an appropriately sized spare.
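Roughly the growth history, as a sketch (the device names are the ones from the status below, but which spare went in with which mirror, and the exact command order, are my guesses):

    # initial, smallest mirror plus its spare; autoreplace turned on from the start
    # (device names shown bare for brevity; on this host they live under /dev/disk/by-id/)
    zpool create glue mirror wwn-0x5000cca2a501f240 wwn-0x5000cca2975af090 spare wwn-0x5000c50083bbae43
    zpool set autoreplace=on glue

    # later: a larger mirror and a matching larger spare
    zpool add glue mirror wwn-0x5000cca271340e4c wwn-0x5000cca27134c71c
    zpool add glue spare wwn-0x5000cca2558480fc

    # later still: the largest mirror and its spare
    zpool add glue mirror wwn-0x5000cca2972cce94 wwn-0x5000cca298192df4
    zpool add glue spare wwn-0x5000cca2972be67c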
  pool: glue
 state: ONLINE
  scan: scrub repaired 0B in 27h55m with 0 errors on Mon Jul 12 04:19:14 2021
config:

        NAME                        STATE     READ WRITE CKSUM
        glue                        ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca2a501f240  ONLINE       0     0     0
            wwn-0x5000cca2975af090  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x5000cca271340e4c  ONLINE       0     0     0
            wwn-0x5000cca27134c71c  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            wwn-0x5000cca2972cce94  ONLINE       0     0     0
            wwn-0x5000cca298192df4  ONLINE       0     0     0
        spares
          wwn-0x5000cca2558480fc    AVAIL
          wwn-0x5000cca2972be67c    AVAIL
          wwn-0x5000c50083bbae43    AVAIL

errors: No known data errors
That's what it might look like. If a spare kicks in automatically (autoreplace=on) and a small spare tries to mirror a disk that is larger than itself, will the pool break, or is there an error we can scan for? Or does autoreplace check that a spare only joins a mirror whose disks are the same size as, or smaller than, the spare? If so, is it possible for the largest spare to end up joining the smallest mirror?
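For what it's worth, here is the kind of manual check I could run myself (a sketch; that wwn-0x5000c50083bbae43 is the smallest spare and that mirror-2 holds the largest disks are my assumptions):

    # hand-attach the presumed smallest spare in place of a disk from the presumed largest mirror
    zpool replace glue wwn-0x5000cca2972cce94 wwn-0x5000c50083bbae43
    # my expectation: if the spare is smaller than the disk it would replace,
    # the command fails with something like
    #   cannot replace wwn-0x5000cca2972cce94 with wwn-0x5000c50083bbae43: device is too small
    # and the pool is left untouched

    # for the automatic (zed-driven) case, where I would look afterwards:
    zpool status -x    # prints "all pools are healthy" unless a pool needs attention
    zpool events -v    # pool event history, where a failed spare attach should show up

What I can't tell without reading the code is whether the automatic path does the same size check before it ever issues the replace.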
I'd be happy to take a look at the code if you can point me to it. Even more, I would love to give you an upvote and the accepted-answer check mark.