I have a ZFS pool (RAID10) of four 2 TB HDDs in my Proxmox installation.
Today I tried to overwrite the free space on the root partition with zero data.
I actually wrote over 12 TB without any errors:
-rw-r--r-- 1 root root 5,2T 23. Nov 22:40 file1
-rw-r--r-- 1 root root 4,1T 23. Nov 23:01 file2
-rw-r--r-- 1 root root 2,9T 23. Nov 23:29 file3
The used space hasn't changed at all, and it seems I can just keep writing to the disk forever.
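For reference, this is roughly what I ran to create those files (filenames and block size from memory, so treat it as a sketch):
dd if=/dev/zero of=/file1 bs=1M status=progress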
df -h
Filesystem        Size  Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3,2G 1,2M 3,2G 1% /run
rpool/ROOT/pve-1 3,6T 14G 3,5T 1% /
tmpfs 16G 46M 16G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
rpool 3,5T 128K 3,5T 1% /rpool
rpool/ROOT 3,5T 128K 3,5T 1% /rpool/ROOT
rpool/data 3,5T 128K 3,5T 1% /rpool/data
/dev/fuse 128M 32K 128M 1% /etc/pve
tmpfs 3,2G 0 3,2G 0% /run/user/0
zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 3.62T 14.0G 3.61T - - 7% 0% 1.00x ONLINE -
mirror 1.81T 7.05G 1.81T - - 7% 0.37% - ONLINE
ata-HGST_HUS724020ALA640_PN1134P6KR3SVW-part3 - - - - - - - - ONLINE
ata-HGST_HUS724020ALA640_PN1134P6HGRGXN-part3 - - - - - - - - ONLINE
mirror 1.81T 6.96G 1.81T - - 7% 0.37% - ONLINE
ata-HGST_HUS724020ALA640_PN1134P6HH2TUN-part3 - - - - - - - - ONLINE
ata-HGST_HUS724020ALA640_PN1134P6JJTK4S-part3 - - - - - - - - ONLINE
zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 3.62T 14.0G 3.61T - - 7% 0% 1.00x ONLINE -
ZFS is new to me, but this seems strange either way. How can I, or the system, ever know whether there is actually free space left or not?
Reboots haven't changed anything.
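In case it's relevant, I can also post the dataset properties; I'd check them with something like this (just guessing at which properties matter):
zfs get compression,compressratio,used,logicalused,available rpool/ROOT/pve-1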
Can anybody explain this behavior to me, or is this a bug?
Regards