One disk in a two-disk RAID 1 array failed. I added another disk to the array and the resynchronization completed successfully. However, while the synchronization was still running, I removed the failed disk, and the filesystem on the array vanished.
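For context, the usual way to replace a failed member of a two-disk RAID 1 with mdadm looks roughly like the sketch below; the failed disk's name no longer appears in my output, so /dev/sdX1 is only a placeholder, and the array is addressed as /dev/md127 as it shows up further down. In my case I pulled the failed disk while the resync was still running instead of waiting for it to finish:

# mdadm --manage /dev/md127 --fail /dev/sdX1      (mark the dying member as failed)
# mdadm --manage /dev/md127 --remove /dev/sdX1    (detach it from the array)
# mdadm --manage /dev/md127 --add /dev/sdf1       (add the replacement; resync starts)
# cat /proc/mdstat                                (watch the resync progress)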
So I have two questions:
What happened when I removed the failed disk that caused the existing filesystem to vanish?
Is it possible to recover the filesystem on the array?
Current status:
# mdadm -Es
ARRAY /dev/md/2 metadata=1.2 UUID=a58fd446:5acce560:16946592:61a8ef7e name=ckhb02:2
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdf1[2] sdd[0]
      10737418240 blocks super 1.2 [2/2] [UU]
      bitmap: 0/80 pages [0KB], 65536KB chunk
unused devices: <none>
# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Jun 28 16:55:00 2020
Raid Level : raid1
Array Size : 10737418240 (10240.00 GiB 10995.12 GB)
Used Dev Size : 10737418240 (10240.00 GiB 10995.12 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Aug 17 13:20:32 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : ckhb02:2 (local to host ckhb02)
UUID : a58fd446:5acce560:16946592:61a8ef7e
Events : 175334
    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       81        1      active sync   /dev/sdf1
Finally, /bin/ls -lF /dev/disk/by-uuid lists no filesystem UUID associated with the array.
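Regarding the second question, these are the non-destructive checks I know of to see whether any filesystem signature survives on /dev/md127; all of them only read from the device. I did not mention the filesystem type above, so the two e2fsck lines are only relevant if it was ext2/3/4:

# blkid -p /dev/md127            (low-level probe for a filesystem signature)
# wipefs /dev/md127              (without -a it only lists existing signatures)
# file -s /dev/md127             (inspect the start of the device)
# e2fsck -n /dev/md127           (read-only ext2/3/4 check)
# e2fsck -n -b 32768 /dev/md127  (same, trying a common backup superblock)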