Score:0

Problems converting a raid1 to raid5

cl flag

I had a clean raid1 which I tried to convert to a raid5 following this procedure: https://dev.to/csgeek/converting-raid-1-to-raid-5-on-linux-file-systems-k73

After the step: mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdX1 /dev/sdY1 which took almost all night, I ended up with an array that seems broken.

My understanding is that this array is incomplete (since it consists of just 2 disks) but should still be mountable. But when trying to mount it I only get: mount: /mnt/temp: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error

btw: the array identifier changed to /dev/md127 after a reboot
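(For reference: a common way to keep the array name stable across reboots is to record the array in mdadm.conf and rebuild the initramfs. The paths below assume a Debian-style system; adjust for your distro.)

```shell
# Append the UUID-based array definition so it assembles under its old name
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the early-boot environment picks up the config
update-initramfs -u
```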

cat /proc/mdstat gives me

Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdc1[0] sdd1[2]
      3906884608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

and mdadm --detail /dev/md127

/dev/md127:
           Version : 1.2
     Creation Time : Mon Apr 10 15:10:08 2023
        Raid Level : raid5
        Array Size : 3906884608 (3725.90 GiB 4000.65 GB)
     Used Dev Size : 3906884608 (3725.90 GiB 4000.65 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Apr 11 02:29:34 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : zentaur:0  (local to host zentaur)
              UUID : 5a7b31a9:cbee2d37:fd0aed8a:8efafc98
            Events : 7548

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       49        1      active sync   /dev/sdd1

It seems that there is a problem with the partition table. When checking with

fdisk -l

it gives me

Disk /dev/md127: 3.64 TiB, 4000649838592 bytes, 7813769216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes

My next try was to find the backup superblock locations with mke2fs -n /dev/md127 (with -n nothing is written, it only reports where the superblocks would be):

mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 976721152 4k blocks and 244187136 inodes
Filesystem UUID: 5710f0da-129a-4a5c-8af9-18093a8feffd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

But mounting the device using any of those backup superblocks didn't work either.
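For completeness, this is roughly what such an attempt looks like, assuming an ext* filesystem. Note that mount's `sb=` option expects the block number in units of 1k blocks, so backup block 32768 on a 4k filesystem becomes sb=131072:

```shell
# Read-only mount attempt using a backup superblock (ext* only;
# sb= is in 1k units, so 4k block 32768 -> 131072)
mount -o ro,sb=131072 /dev/md127 /mnt/temp

# Read-only filesystem check against the same backup superblock
# (-n: no changes, -b: backup superblock, -B: filesystem block size)
e2fsck -n -b 32768 -B 4096 /dev/md127
```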

At this point I'm stuck. Is there anything I can do or try to get access to the data? Or is it just a matter of adding the 3rd disk to the raid5 array?

Thanks a bunch guys!

Update

Thank god I managed to get access to (hopefully most of) the data. I want to share the path I took for future reference.

The first thing I tried was to run "foremost" directly on the raid array (/dev/md0) that had no partition table. That was only partially successful: it ran very slowly and produced mixed results. Many broken files, no directory structure, no filenames. But there were files that came out correct (content-wise), so I had hope.

I then started off from this page: https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID (chapter "Making the harddisks read-only using an overlay file"). Assuming that the data could still be there on the raw disks, I created overlays as explained. One note here: in step 3 the page uses blockdev --getsize ..., but in my version (2.36.1) that parameter is marked as "deprecated" and didn't work. I had to use blockdev --getsz instead.
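The overlay step boils down to a device-mapper snapshot whose writes land in a sparse file while reads still come from the real disk. A minimal sketch (device names and the 4G overlay size are examples; the dmsetup/losetup part requires root, and the wiki's recipe uses a persistent snapshot, "P", with an 8-sector chunk):

```shell
# Build the dmsetup "snapshot" table line used by the overlay recipe:
# reads hit the real device, writes go to the copy-on-write loop device.
overlay_table() {
    sectors=$1; base=$2; cow=$3
    printf '0 %s snapshot %s %s P 8\n' "$sectors" "$base" "$cow"
}

# Real usage (root required; /dev/sdc1 and the file name are examples):
#   truncate -s 4G /tmp/overlay-sdc1.img
#   LOOP=$(losetup -f --show /tmp/overlay-sdc1.img)
#   SECTORS=$(blockdev --getsz /dev/sdc1)   # --getsize is deprecated
#   overlay_table "$SECTORS" /dev/sdc1 "$LOOP" | dmsetup create overlay-sdc1

overlay_table 7813769216 /dev/sdc1 /dev/loop0
```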

Having the overlays, I fiddled around and ended up using "testdisk", letting it analyze the overlay device /dev/mapper/sdX1. After selecting the "EFI GPT" partition type, it found a partition table I was able to use. From here on the process was pretty straightforward: testdisk showed the disk's old file structure and I was able to copy the "lost" files to a backup hdd. The process is still running, but spot tests were pretty promising that most of the data can be recovered.
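Running testdisk against the overlay rather than the raw partition means any writes it makes only touch the overlay file:

```shell
# Scan the writable overlay, not the original member partition
testdisk /dev/mapper/sdX1
# In the TUI: pick the device, choose the "EFI GPT" partition table type,
# run "Analyse", then browse the found partition and copy files out.
```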

br flag
Don't use RAID 5, especially with larger disks. We have someone come here at least once a month asking us to help them fix their R5 array; it's dangerous, it should be banned, friends don't let friends use R5, and it's been 'dead' for well over a decade. Please only use R1/10 and R6/60.
in flag
Great that you were able to recover the data, but please provide your solution as an answer, not in the question.
Score:1
ca flag

With the command

mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdX1 /dev/sdY1

you basically nuked any data on the drives, because you told mdadm to create a new array rather than reshape the existing one.

What you really wanted was mdadm --grow. From the mdadm man page:

Grow   Grow (or shrink) an array, or otherwise reshape it in some
              way.  Currently supported growth options including
              changing the active size of component devices and changing
              the number of active devices in Linear and RAID levels
              0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6,
              and between 0 and 10, changing the chunk size and layout
              for RAID 0,4,5,6,10 as well as adding or removing a write-
              intent bitmap and changing the array's consistency policy.
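For comparison, a reshape that keeps the data would have looked roughly like this (device names are placeholders; always take a backup before reshaping):

```shell
# Convert the 2-disk raid1 into a degenerate 2-disk raid5 (near-instant)
mdadm --grow /dev/md0 --level=5
# Add the third disk as a spare
mdadm /dev/md0 --add /dev/sdZ1
# Reshape onto all three disks; the backup file protects the critical section
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.bak
```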

In general, never follow some random guide without fully understanding the implications of the suggested commands.

EDIT: it seems that recent versions of mdadm do the right thing to protect the user from such errors: trying on a test array, I can indeed create a raid5 array over a previous raid1 one without losing the filesystem. However, it is not anything I would rely on, as any change in bitmap/chunk/metadata version can corrupt your data; the correct tool to reshape an array remains the grow subcommand.

Moritz avatar
cl flag
Thanks for answering. I guess I learned my lesson the hard way. Is there any chance to recover the data, or has it been overwritten irreversibly?
shodanshok avatar
ca flag
Well, it seems that recent versions of `mdadm` do the right thing to protect the user from such errors: trying on a test array, I can indeed create a raid5 array over a previous raid1 one without losing the filesystem. Can you try mounting by specifying the array itself, i.e. `mount /dev/md127 /mnt/temp`? If it fails, please provide the `dmesg` output (the last relevant lines only) and the output of `blkid /dev/sdc1 /dev/sdd1 /dev/md127`.
Moritz avatar
cl flag
When trying to mount I got an error: ```mount: /mnt/temp: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error``` dmesg says: ```[63020.241199] F2FS-fs (md0): Magic Mismatch, valid(0xf2f52010) - read(0x1fe80000) [63020.241226] F2FS-fs (md0): Can't find valid F2FS filesystem in 1th superblock [63020.241389] F2FS-fs (md0): Magic Mismatch, valid(0xf2f52010) - read(0x20080000) [63020.241403] F2FS-fs (md0): Can't find valid F2FS filesystem in 2th superblock ```
Moritz avatar
cl flag
(relevant) output of blkid: ```/dev/sdc1: UUID="5a7b31a9-cbee-2d37-fd0a-ed8a8efafc98" UUID_SUB="a2e1280c-bb67-99cd-eb84-981a31ea5c4d" LABEL="zentaur:0" TYPE="linux_raid_member" PARTUUID="bd482a15-a167-4a1b-9cb8-92624a49df76" /dev/sdd1: UUID="5a7b31a9-cbee-2d37-fd0a-ed8a8efafc98" UUID_SUB="259361d1-6b8b-3fb0-36a4-950fa131257a" LABEL="zentaur:0" TYPE="linux_raid_member" PARTUUID="22f7796c-5198-419c-9519-7e7101c4947a" ``` There is no entry for /dev/md127
shodanshok avatar
ca flag
What is the array device? You mentioned `md127`, and the mount error reported it, but then `dmesg` wrote about an F2FS filesystem on `md0`, and you had no entry for `md127` when running `blkid`. So is the array `md127` or `md0`? What filesystem did you have on the array? Can you add the output of `lsblk`? (Please include it in the main question rather than as a comment, otherwise its output will be mangled and difficult to read.)
Moritz avatar
cl flag
Hey @shodanshok, the assigned raid ID changed from time to time, no idea why. However, I managed to find a way to recover the data. Will update the main question asap.
Score:0
cl flag

Basically the mdadm --create ... command left the original data (at least on one disk) intact. I updated my question with the steps I took to recover the data.

br flag
Great, now stop using R5
Most people don’t grasp that asking a lot of questions unlocks learning and improves interpersonal bonding. In Alison’s studies, for example, though people could accurately recall how many questions had been asked in their conversations, they didn’t intuit the link between questions and liking. Across four studies, in which participants were engaged in conversations themselves or read transcripts of others’ conversations, people tended not to realize that question asking would influence—or had influenced—the level of amity between the conversationalists.