My 3-disk ZFS pool has been scrubbed several times since I replaced its failing disks.
"zpool status -v" listed the affected files; deleting those files cleared a good number of the errors.
Despite running a couple of "zpool scrub" passes to completion, I am still getting output like this:
root@ra:/root# zpool status -v | head -25
  pool: backup
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub canceled on Thu Jun 1 16:23:28 2023
config:

        NAME                        STATE     READ WRITE CKSUM
        backup                      ONLINE       0     0     0
          raidz1-0                  ONLINE       0     0     0
            wwn-0x50014ee2bf658d03  ONLINE       0     0 1.25K
            wwn-0x5000c500e5ee7f5b  ONLINE       0     0 1.48K
            wwn-0x5000c500e5ef7436  ONLINE       0     0 1.48K

errors: Permanent errors have been detected in the following files:

        backup:<0x2ed701>
        backup:<0x2e1d02>
        backup:<0x235104>
        backup:<0x2e1d05>
        backup:<0x2e1d06>
        backup:<0x2e1e06>
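
For context, the cycle I have been repeating is roughly the following (the file path below is a placeholder for whatever "zpool status -v" named at the time):

zpool status -v backup           # list the files flagged as corrupted
rm /backup/path/to/damaged/file  # delete (or restore) each file it named
zpool clear backup               # reset the per-device error counters
zpool scrub backup               # re-read everything and rebuild the error log
zpool status -v backup           # check again once the scrub completes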
My forays into the docs have shown no way of repairing this. The pool as a whole shows zero READ/WRITE/CKSUM errors, as does the raidz1 vdev; each member disk, however, has a significant number of CKSUM errors, and the error list now shows raw object numbers rather than file paths.
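Since the entries are object IDs, presumably because the files in question have already been deleted, I tried mapping one back to a file with zdb, something like this (as far as I can tell zdb wants the object number in decimal):

printf '%d\n' 0x2ed701    # hex object ID from the error list -> 3069697
zdb -dddd backup 3069697  # dump that object's metadata from the "backup" dataset

If the object no longer exists I would expect zdb to fail to find it, which would fit the files having been deleted already.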
Should I be concerned? Can I expect the pool to stay error-free until more hardware starts to fail?
Thanks,
Paul