I have a server with a JBOD of 36 x 14TB disks (WUH721414AL5201), each of which has a usable capacity of 12.7TB.
I have created two zpools:
- zpool1, which contains 3 raidz1 vdevs (8 disks each), plus 3 hot spares:
  pool: zpool1
 state: ONLINE
config:

        NAME            STATE     READ WRITE CKSUM
        zpool1          ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
          raidz1-1      ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
          raidz1-2      ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
        spares
          scsi-35000    AVAIL
          scsi-35000    AVAIL
          scsi-35000    AVAIL
- zpool2, which contains a single raidz2 vdev of 9 disks:
  pool: zpool2
 state: ONLINE
config:

        NAME            STATE     READ WRITE CKSUM
        zpool2          ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
            scsi-35000  ONLINE       0     0     0
As you can see, according to zpool list the total size of zpool1 is ~306TB and zpool2 is ~115TB, with both pools claiming to have multiple TB of free space:
root:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpool1   306T   296T  9.55T        -         -    47%    96%  1.00x  ONLINE  -
zpool2   115T   110T  4.46T        -         -    26%    96%  1.00x  ONLINE  -
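If a per-vdev breakdown would help, I can also post the output of the command below, which as I understand it reports SIZE/ALLOC/FREE for each vdev:

# per-vdev SIZE/ALLOC/FREE breakdown for both pools
zpool list -v zpool1 zpool2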
However, when I run df -h I get the following:
root:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
zpool1          250T  250T  5.5M 100% /zpool1
zpool2           85T   84T  684G 100% /zpool2
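In case snapshots or reservations could be eating the space, I can also post the usage breakdown for both pools:

# break USED down into snapshot, refreservation and child dataset usage
zfs list -o space zpool1 zpool2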
This is backed up by the filesystem throwing "disk full" errors when I attempt to add any more data to the pools.
Can someone please confirm whether there is some limit I'm hitting in either Linux or ZFS? My initial thought was that there is a 250TB per-pool limit, but that doesn't explain why zpool2 is also at 100% capacity when it only holds ~85TB of data.
If my calculations are correct, zpool1 should have at least 266.7TB of usable space and zpool2 should have 88.9TB, based on the following:
zpool1: 3 x raidz1 vdevs (8 disks each, 7 usable) = 21 data disks x 12.7TB = 266.7TB
zpool2: 1 x raidz2 vdev (9 disks, 7 usable) = 7 data disks x 12.7TB = 88.9TB
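For what it's worth, this is the quick sanity check behind those numbers; it assumes exactly 12.7TB usable per disk and ignores any raidz allocation overhead I may not be aware of:

# expected usable space = data disks per vdev x number of vdevs x 12.7TB
awk 'BEGIN { printf "zpool1: %.1fTB  zpool2: %.1fTB\n", 3*7*12.7, 7*12.7 }'
# prints: zpool1: 266.7TB  zpool2: 88.9TB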
PS: Apologies for such a long post; I am quite new to storage, so I have tried to explain as much as possible (maybe too much!).
Edit, added for Zoredache:
root:~# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zpool1   249T  2.45M   249T  /zpool1
zpool2  83.9T   683G  83.9T  /zpool2