Score:0

ZFS / Zpool with limited disk usage?

I have a server attached to a JBOD of 36 x 14TB disks (WUH721414AL5201), each of which presents about 12.7TB of usable capacity.

I have created two zpools:

  • zpool1, which contains 3 raidz1 vdevs of 8 disks each, plus 3 hot spares.

  pool: zpool1
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        zpool1                      ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
          raidz1-1                  ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
          raidz1-2                  ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
        spares
          scsi-35000                AVAIL
          scsi-35000                AVAIL
          scsi-35000                AVAIL

  • zpool2, which contains a single 9-disk raidz2 vdev.

pool: zpool2
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        zpool2                      ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0
            scsi-35000              ONLINE       0     0     0


As you can see, according to zpool list the total size of zpool1 is ~306T and zpool2 is ~115T, with both pools reporting several TB of free space.


root:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zpool1   306T   296T  9.55T        -         -    47%    96%  1.00x    ONLINE  -
zpool2   115T   110T  4.46T        -         -    26%    96%  1.00x    ONLINE  -

However, when I run df -h I get the following:

root:~# df -h
Filesystem           Size  Used Avail Use% Mounted on
zpool1               250T  250T  5.5M 100% /zpool1
zpool2                85T   84T  684G 100% /zpool2

This is backed up by the filesystem throwing disk-full errors when I attempt to add any more data to the pools.

Can someone please confirm whether there is some limit I'm hitting in either Linux or ZFS? My initial thought was a 250TB limit, but that does not explain why zpool2 is also at 100% capacity when it holds only ~85TB of data.

If my calculations are correct, pool1 should have at least 266.7TB of usable space and pool2 should have 88.9TB, based on the following:

pool1: 3 x raidz1 vdevs (8 disks each, 7 usable) = 21 data disks x 12.7TB = 266.7TB

pool2: 1 x raidz2 vdev (9 disks, 7 usable) = 7 data disks x 12.7TB = 88.9TB
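Those figures can be sanity-checked with a few lines (a quick sketch; 12.7 is the per-disk usable capacity assumed above):

```python
# Expected usable capacity, using the per-disk figure from the question.
disk_tb = 12.7  # usable capacity per 14TB disk

# zpool1: 3 x 8-disk raidz1 vdevs -> 1 parity disk per vdev
pool1_usable = 3 * (8 - 1) * disk_tb

# zpool2: 1 x 9-disk raidz2 vdev -> 2 parity disks
pool2_usable = (9 - 2) * disk_tb

print(f"pool1: {pool1_usable:.1f}TB")  # pool1: 266.7TB
print(f"pool2: {pool2_usable:.1f}TB")  # pool2: 88.9TB
```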

PS: Apologies for such a long post; I am quite new to storage, so I have tried to explain as much as possible (maybe too much!).

Added for Zoredache:

root:~# zfs list
NAME     USED  AVAIL     REFER  MOUNTPOINT
zpool1   249T  2.45M      249T  /zpool1
zpool2  83.9T   683G     83.9T  /zpool2

Score:2

The df command is an ancient tool and doesn't really understand ZFS; its output for a ZFS filesystem is close to worthless. ZFS allocation is more complicated than what df understands: features like snapshots, compression, and deduplication all affect usage and available capacity, but they won't change the apparent usage from df's perspective.

You should be using zpool list and zfs list to inspect pools and ZFS filesystems. Your zpool output clearly shows you are nearing maximum capacity. The zfs list command will give you more detail per dataset.
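Part of the confusion is that, for raidz pools, zpool list reports raw capacity including parity disks, whereas zfs list (and df) report space available for data. A rough reconciliation against the numbers above, assuming ~12.7TB usable per disk:

```python
disk_tb = 12.7  # approx usable capacity per 14TB disk

# zpool list SIZE counts every disk in the raidz vdevs, parity included:
pool1_raw = 3 * 8 * disk_tb  # ~304.8 -> close to the reported 306T
pool2_raw = 9 * disk_tb      # ~114.3 -> close to the reported 115T

# zfs list / df exclude parity (plus raidz padding and reserved slop
# space), which is why they show ~250T and ~85T instead:
pool1_data = 3 * (8 - 1) * disk_tb  # ~266.7 before overhead
pool2_data = (9 - 2) * disk_tb      # ~88.9 before overhead

print(f"{pool1_raw:.1f} {pool2_raw:.1f}")  # 304.8 114.3
```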

Also consider that datasets can have quotas, and ZFS reserves a certain amount of space for critical filesystem operations to prevent the system from wedging when a pool fills completely.

Relatedly, you shouldn't let your pools get this full in the first place; it will hurt their performance.

Birdy:
Thank you for the detailed answer. In an ideal world I would roll back the initial ZFS setup and restructure it, but unfortunately we don't have a spare 500TB kicking about :-( And as you say, ZFS performance is likely taking a hit, especially when zfs list shows zpool1 with 249T used and only 2.45M available, and zpool2 with 83.9T used and 683G available.