Originally this ran on the bkp-01 instance. The server and its disks were manually moved to a different availability zone. The disks run ZFS for backups/snapshots, and it is an old server with a manually configured RAID setup.
Now if we check /etc/fstab:
root@bkp-01:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=g5e223df-f541-4e4c-aa8d-e4529fa28424 / ext4 errors=remount-ro 0 1
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
UUID=98a5392a-3fbc-67aa-bd15-b3e6f125c27a none swap sw 0 0
/storage/backups/revitalis /storage/mount-revitalis ecryptfs ecryptfs_cipher=aes,ecryptfs_key_bytes=32,key=passphrase:passfile=/root/tmp-pass-revitalis.txt,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=n,noauto 0 0
/storage/backups/cromwell /storage/mount-cromwell ecryptfs ecryptfs_cipher=aes,ecryptfs_key_bytes=32,key=passphrase:passfile=/root/tmp-pass-cromwell.txt,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=n,noauto 0 0
/storage/backups/cilon /storage/mount-cilon ecryptfs ecryptfs_cipher=aes,ecryptfs_key_bytes=32,key=passphrase:passfile=/root/tmp-pass-cilon.txt,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=n,noauto 0 0
/storage/backups/glueckauf /storage/mount-glueckauf ecryptfs ecryptfs_cipher=aes,ecryptfs_key_bytes=32,key=passphrase:passfile=/root/tmp-pass-glueckauf.txt,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=n,noauto 0 0
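As a side note, the fstab entries themselves can be sanity-checked without touching any mounts; this is only a syntax/consistency check and assumes a reasonably recent util-linux:
# verify /etc/fstab entries for obvious problems (missing sources show up here too)
findmnt --verify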
And checking the paths referenced there:
root@bkp-01:~# ls -l /root/tmp-pass-revitalis.txt
-rw------- 1 root root 40 Sep 2 2020 /root/tmp-pass-revitalis.txt
root@bkp-01:~# ls -ld /storage/backups/revitalis
ls: cannot access /storage/backups/revitalis: No such file or directory
root@bkp-01:~# ls -ld /storage/mount-revitalis
drwxr-xr-x 2 root root 4096 Aug 14 10:51 /storage/mount-revitalis
root@bkp-01:~#
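So the lower directories under /storage/backups are gone while the ecryptfs mount points still exist. A quick way to confirm that /storage is currently just an empty directory on the root filesystem rather than the mounted pool would be something like (commands only, we have not pasted their output here):
# is /storage a mount point at all right now?
mountpoint /storage
# and which filesystem does it actually sit on?
df -h /storage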
Before the restart we had the following; a working setup for an older server of this type (bkp-01) should look like this:
root@bkp-01:/# zpool status
pool: storage
state: ONLINE
scan: scrub repaired 0B in ... with 0 errors on Mon Jan 9 06:14:20 2023
config:
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vdc     ONLINE       0     0     0
            vdb     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            vde     ONLINE       0     0     0
            vdd     ONLINE       0     0     0
          raidz1-2  ONLINE       0     0     0
            vdf     ONLINE       0     0     0
            vdg     ONLINE       0     0     0
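In that working state the /storage/backups/* directories are backed by datasets of this pool; their mapping to mount points could be listed with something like (illustrative, we did not capture this output before the restart):
# list the pool's datasets and where they mount
zfs list -r -o name,used,avail,mountpoint storage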
Now, checking the available disk space after the server restart, we have:
root@bkp-01:~# zfs list
no datasets available
root@bkp-01:~# zpool status
no pools available
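The checks we were planning to run next, before touching anything, are along these lines (assuming the backing disks vdb..vdg are still attached to the instance):
# is the ZFS kernel module even loaded after the reboot?
lsmod | grep zfs
# are the disks still visible, and do they still carry ZFS signatures?
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
# does ZFS find an importable pool on them? (a scan only, nothing is imported yet)
zpool import
# did the usual import/mount units run at boot? (assumes a systemd-based ZFS-on-Linux install)
systemctl status zfs-import-cache.service zfs-mount.service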
These backup errors show up in the logs as:
Backup [Test] /usr/bin/rsync -a "rsync://10.11.11.195/encrypted" "/storage/backups/Test/" failed with rc 11
rsync: mkdir "/storage/backups/Test" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(674) [Receiver=3.1.1]
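As an aside, the rsync failure looks like a symptom only: the target directory cannot be created because its parent is gone. A hypothetical guard for the backup wrapper (the /storage path is from our setup, the check itself is just a sketch we have not deployed) would also avoid silently writing backups to the root filesystem if someone recreated the directory by hand:
# abort the backup run if /storage is not actually a mounted filesystem
mountpoint -q /storage || { echo "/storage is not mounted, skipping backup" >&2; exit 1; }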
We tried:
mount /storage/mount-revitalis
which uses the following fstab entry and passphrase file:
/storage/backups/revitalis /storage/mount-revitalis ecryptfs ecryptfs_cipher=aes,ecryptfs_key_bytes=32,key=passphrase:passfile=/root/tmp-pass-revitalis.txt,ecryptfs_passthrough=n,no_sig_cache,ecryptfs_enable_filename_crypto=n,noauto 0 0
passfile=/root/tmp-pass-revitalis.txt
And the actual mount attempt gives:
root@bkp-01:~# mount /storage/mount-revitalis
Attempting to mount with the following options:
ecryptfs_unlink_sigs
ecryptfs_key_bytes=32
ecryptfs_cipher=aes
ecryptfs_sig=750ba1ab642008df
Error mounting eCryptfs: [-2] No such file or directory
Check your system logs; visit <http://ecryptfs.org/support.html>
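For what it's worth, our understanding is that the [-2] No such file or directory refers to the missing lower directory /storage/backups/revitalis rather than to the key material. What we expect should work again once the pool and its datasets are back, assuming the pool can simply be re-imported rather than recreated, is roughly:
# re-import the existing pool instead of recreating it
zpool import storage
# mount its datasets, which should bring back /storage/backups/*
zfs mount -a
# then the ecryptfs overlay from fstab
mount /storage/mount-revitalis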
Is this happening because there was originally a ZFS pool "storage" with RAIDZ1 vdevs, the pool has gone missing entirely after the reboot, and that is what causes the backups to fail due to the missing mount points?
Would recreating the ZFS pool and its datasets help here?
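To be explicit about what "recreate" would mean, it would be something like the command below, which we assume would destroy whatever backup data is still on those disks, so we have not run anything like it:
# DESTRUCTIVE and purely hypothetical: rebuild the pool with the same layout as before
zpool create storage raidz1 vdc vdb raidz1 vde vdd raidz1 vdf vdg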
Any help would be highly appreciated!