Disk pair not usable after moving from hardware RAID (Windows) to software RAID (Linux)


A long time ago I created a RAID1 pair from the boot menu of a JMicron JMB363 card. Everything worked fine under Windows, where the RAID1 array was recognized as a single disk. I don't remember exactly how many partitions the disk had, but there was at least one NTFS partition, which was the only one in use while the mirror was running on Windows.

I recently moved the OS from Windows to Linux and I'm no longer able to use the NTFS partition. As expected, the two disks look identical from the partition-layout point of view:

#lsblk --fs
...
sdb     linux_raid_member 1.2   MyRAIDLabel <uuid>                
└─md127                                                                                              
sdc     linux_raid_member 1.2   MyRAIDLabel <uuid>                
└─md127    

The fdisk command shows the NTFS partition on both disks:

#fdisk -l
Disk /dev/sdc: 465.76 GiB, 500107862016 bytes, 976773168 sectors
...
Device     Boot Start       End   Sectors  Size Id Type
/dev/sdc1        2048 204802047 204800000 97.7G  7 HPFS/NTFS/exFAT


Disk /dev/sdb: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: ST500DM002-1BD14
...
Device     Boot Start       End   Sectors  Size Id Type
/dev/sdb1        2048 204802047 204800000 97.7G  7 HPFS/NTFS/exFAT

Strangely enough, however, there is no /dev/sdb1 or /dev/sdc1 in the filesystem, only /dev/sdb and /dev/sdc, so the NTFS partitions seem to be "hidden".
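
(If the file system lives inside the array, a partition table might exist on md127 itself rather than on the raw disks; a quick, read-only way to check, assuming the device names above:)

# partprobe /dev/md127    # re-scan the array for a partition table
# lsblk /dev/md127        # md127p1 etc. would appear here if one exists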

The md device is also shown:

...

Disk /dev/md127: 465.64 GiB, 499973619712 bytes, 976510976 sectors
...

However, as far as I understand, md is used to implement RAID across multiple disks in software. I don't know whether the JMicron card handled the synchronization between the two disks or whether the Windows driver did. So the question is:

Do I need to use Linux md even when the card handles the RAID array?
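
(For reference, a way to check whether the card's own BIOS-RAID metadata is still present on the disks would presumably be dmraid, which as far as I know understands the JMicron on-disk format:)

# dmraid -r    # list RAID sets found in vendor/BIOS metadata
# dmraid -s    # show the status of any discovered sets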

If I try to mount /dev/md127, I get an error:

#mount -t auto -o loop /dev/md127 /mnt/raid
mount: /mnt/raid: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.  
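
(The -o loop is actually superfluous for a block device, which is why the error mentions /dev/loop0. A cleaner attempt, with the kernel log consulted for the real reason, might be:)

# mount -o ro /dev/md127 /mnt/raid
# dmesg | tail    # the kernel usually logs why the mount failed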

If I try to mount /dev/sdb or /dev/sdc, I get a different error:

mount: /mnt/raid: unknown filesystem type 'linux_raid_member'
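
(In case it helps, here is a sketch of a read-only way to peek at the original NTFS partition while bypassing md entirely, using the start and size from the fdisk output above; the array has to be stopped first:)

# mdadm --stop /dev/md127
# losetup --find --show --read-only --offset $((2048*512)) --sizelimit $((204800000*512)) /dev/sdb
# mount -o ro /dev/loopX /mnt/raid    # losetup prints the /dev/loopX it allocated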

Another unexplained thing is that if I read the first 16 bytes from the two disk devices, I get a different result than from the md device:

# hexdump -C -n 16 /dev/md127
00000000  20 69 6e 20 6f 72 67 2e  61 70 61 63 68 65 2e 68  | in org.apache.h|
00000010
# hexdump -C -n 16 /dev/sdb
00000000  33 c0 8e d0 bc 00 7c 8e  c0 8e d8 be 00 7c bf 00  |3.....|......|..|
00000010
# hexdump -C -n 16 /dev/sdc
00000000  33 c0 8e d0 bc 00 7c 8e  c0 8e d8 be 00 7c bf 00  |3.....|......|..|
00000010

It seems like the md device starts with file content.
How is that possible?
How can I recover the data in the array?
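
(One hypothesis, given the Data Offset of 262144 sectors reported by mdadm --examine below: sector 0 of md127 corresponds to sector 262144 of each raw disk, which lands in the middle of the old NTFS data area, so arbitrary file content there would not be surprising. A quick check:)

# cmp <(dd if=/dev/md127 bs=512 count=1 2>/dev/null) <(dd if=/dev/sdb bs=512 skip=262144 count=1 2>/dev/null) && echo identical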

Additional md data:

Here is what /proc/mdstat reports:

#cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdb[1] sdc[0]
      488255488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

unused devices: <none>

Here is the output of mdadm --detail and mdadm --examine:

#mdadm --detail /dev/md127 
/dev/md127:
           Version : 1.2
     Creation Time : Sun Sep  7 17:57:53 2014
        Raid Level : raid1
        Array Size : 488255488 (465.64 GiB 499.97 GB)
     Used Dev Size : 488255488 (465.64 GiB 499.97 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Feb 25 20:21:09 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : MirrorRAID:0
              UUID : 95f02fbb:71f61cca:e24e932f:2dcfc5e0
            Events : 150294

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       16        1      active sync   /dev/sdb


#mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 95f02fbb:71f61cca:e24e932f:2dcfc5e0
           Name : MirrorRAID:0
  Creation Time : Sun Sep  7 17:57:53 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 976511024 sectors (465.64 GiB 499.97 GB)
     Array Size : 488255488 KiB (465.64 GiB 499.97 GB)
  Used Dev Size : 976510976 sectors (465.64 GiB 499.97 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : clean
    Device UUID : 934abcd2:ead8a42a:ca23dd27:cd380990

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Feb 25 20:21:09 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 87fc2008 - correct
         Events : 150294


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
#mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 95f02fbb:71f61cca:e24e932f:2dcfc5e0
           Name : MirrorRAID:0
  Creation Time : Sun Sep  7 17:57:53 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 976511024 sectors (465.64 GiB 499.97 GB)
     Array Size : 488255488 KiB (465.64 GiB 499.97 GB)
  Used Dev Size : 976510976 sectors (465.64 GiB 499.97 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : clean
    Device UUID : ce4a6223:22a98469:8486de1b:16f34071

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Feb 25 20:21:09 2023
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : ecbb0d7c - correct
         Events : 150294


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Comments:

Nikita Kipriyanov: It is very likely that partitions exist on your `md127` or on one of its sub-devices (`md126` and so on; the numbering usually goes in decreasing order). Those are usually named like `md126p1`, and that should be the device holding your file system. So instead of hexdump, please show us the following: `cat /proc/mdstat`, `mdadm --detail /dev/md127` (and for any other devices shown in mdstat), `mdadm --examine /dev/sdb` and `/dev/sdc`, plus the *full* `lsblk` output; `blkid` is also likely to be useful.

Nikita Kipriyanov: The problem might also be that you were using LDM ("dynamic disks") or Storage Spaces on Windows, which is not something Linux users commonly see, and I am not sure Linux supports the latter at all. In that case, you can dump `md127` to a single device to put into a Windows machine, or forward it into a VM running Windows to recover the data.
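
(A sketch of the suggested dump; the destination path is just an example, and the resulting raw image could then be attached to a Windows VM:)

# dd if=/dev/md127 of=/path/to/array.img bs=4M status=progress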

OP: @NikitaKipriyanov I have no devices other than `md127` and `/dev/md/MirrorRAID:0`, which is a symlink to the former. I only removed `sda` from the output of `lsblk`; all the relevant devices are there. I have updated the post with the additional information.

Nikita Kipriyanov: This is certainly **NOT** a RAID that was "created from the boot menu of a JMicron JMB363 card". This is Linux software MD RAID. Are you sure that all you did was move it into a Linux system, and that you *did nothing else* with it?

OP: This is how the OS sees the disks, but the two disks had been running in RAID1 long before (years, actually) they were moved to Linux. I don't know whether `md` performed some operation on the disks automatically, but that's it. NOTE: they are still attached through the JMicron PCIe-to-SATA card.

Nikita Kipriyanov: Very interesting: the metadata block's creation date is 2014. Maybe it has two metadata blocks, both MD and DDF? That is possible, and in that case the data will be screwed up.
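
(A non-destructive way to test the two-metadata-blocks hypothesis would be wipefs in no-act mode, which lists every signature it can find on a device together with its offset:)

# wipefs --no-act /dev/sdb    # list all metadata signatures without erasing anything
# wipefs --no-act /dev/sdc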

OP: I see these commands in the history: `mdadm --remove /dev/md127` and `mdadm --assemble --scan --verbose`. I also tried removing and re-adding the two disks to the `/dev/md127` device, so that is probably how the metadata got overwritten.

Nikita Kipriyanov: At this point I'd make a dump of one of the drives, just in case. You are probably already screwed, but there is still a good chance of recovering something.
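
(A sketch of such a precautionary dump with GNU ddrescue; the destination paths are examples, and the map file lets an interrupted copy resume:)

# ddrescue /dev/sdb /backup/sdb.img /backup/sdb.map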

OP: OK, thanks @NikitaKipriyanov. I will try attaching one of the disks to a Windows VM just to see if the data is still there somehow. By the way, 2014 is really strange, because the JMicron card was bought no earlier than 2018. Maybe the disks are older, but the array was set up in 2018 or later.

stark: Linux has dmraid to use the device created by your RAID card. If you created a software md RAID, then it overwrote the disks.