I had a clean raid1 which I tried to convert to a raid5 following this procedure:
https://dev.to/csgeek/converting-raid-1-to-raid-5-on-linux-file-systems-k73
After the step:
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdX1 /dev/sdY1
which took almost all night, I ended up with an array that seems broken.
My understanding is that this array is incomplete (since it consists of just 2 disks) but should still be mountable. But when I try to mount it I only get: mount: /mnt/temp: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error
Btw: the array identifier changed to /dev/md127 after a reboot.
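(I guess this is because the array isn't recorded in mdadm.conf; from what I've read, something along these lines should pin the name on a Debian-based system, though I haven't verified it here:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
)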
cat /proc/mdstat
gives me
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdc1[0] sdd1[2]
3906884608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/2] [UU]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
and mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Mon Apr 10 15:10:08 2023
Raid Level : raid5
Array Size : 3906884608 (3725.90 GiB 4000.65 GB)
Used Dev Size : 3906884608 (3725.90 GiB 4000.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Apr 11 02:29:34 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : zentaur:0 (local to host zentaur)
UUID : 5a7b31a9:cbee2d37:fd0aed8a:8efafc98
Events : 7548
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
2 8 49 1 active sync /dev/sdd1
It seems that there is a problem with the partition table. Checking with
fdisk -l
gives me
Disk /dev/md127: 3.64 TiB, 4000649838592 bytes, 7813769216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
My next try was to locate the backup superblocks with:
mke2fs -n /dev/md127
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 976721152 4k blocks and 244187136 inodes
Filesystem UUID: 5710f0da-129a-4a5c-8af9-18093a8feffd
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
But mounting the device with any of those backup superblocks didn't work either.
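(For reference, the kind of commands I mean - not necessarily my exact invocations - look like this; note that mount's sb= option expects the position in 1k units, so backup block 32768 on a 4k filesystem becomes sb=131072:
mount -o ro,sb=131072 /dev/md127 /mnt/temp
e2fsck -n -b 32768 -B 4096 /dev/md127
)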
At this point I'm stuck.
Is there anything I can do or try to get access to the data?
Or is it just a matter of adding the 3rd disk to the raid5 array?
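(By that I mean something along the lines of
mdadm --add /dev/md127 /dev/sde1
mdadm --grow /dev/md127 --raid-devices=3
where /dev/sde1 is just a placeholder for the third disk - but I don't want to run anything like that before understanding why the array isn't mountable.)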
Thanks a bunch guys!
Update
Thank god I managed to get access to (hopefully most of) the data.
I want to share the path I took for future reference.
The first thing I tried was to run "foremost" directly on the raid array (/dev/md0), which had no partition table.
That was only partially successful: it ran very slowly and produced mixed results - many broken files, no directory structure, no filenames.
But some files came out correctly (content-wise), so I had hope.
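(For reference, the foremost run was basically of this form; the output directory is just an example path, not my actual one:
foremost -i /dev/md0 -o /mnt/backup/foremost
)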
I then started from this page: https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID (chapter "Making the harddisks read-only using an overlay file").
Assuming that the data could still be there on the raw disks, I created overlays as explained.
One note here: in step 3 the page uses blockdev --getsize ...., but in my version (2.36.1) that option is marked as "deprecated" and didn't work. I had to use blockdev --getsz instead.
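For reference, the overlay setup from that page boils down to something like this per disk (device name and overlay file size are examples, not my exact values):
# size of the real partition in 512-byte sectors
SIZE=$(blockdev --getsz /dev/sdc1)
# sparse file that receives all writes instead of the real disk
truncate -s 50G /tmp/overlay-sdc1
LOOP=$(losetup -f --show /tmp/overlay-sdc1)
# copy-on-write snapshot: reads come from /dev/sdc1, writes go to the overlay
dmsetup create sdc1 --table "0 $SIZE snapshot /dev/sdc1 $LOOP P 8"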
With the overlays in place I experimented a bit and ended up using "testdisk", letting it analyze the overlay device /dev/mapper/sdX1. After selecting the "EFI GPT" partition type, it found a partition table I was able to use.
From here on the process was pretty straightforward: testdisk showed the disk's old file structure and I was able to copy the "lost" files to a backup HDD.
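(The testdisk part itself is menu-driven; the only command is basically
testdisk /dev/mapper/sdc1
and from there you pick the "EFI GPT" partition table type, run the analysis, and list/copy the files - the device name is again just an example for one of the overlays.)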
The process is still running, but spot checks are promising: it looks like most of the data can be recovered.