/dev/md0 - wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error

I recently had issues with a RAID filesystem I created, plus some bad luck. I started by creating a raid0 on one disk with the intent to expand it afterwards. I had data on two drives and only one spare to migrate from an NTFS Windows setup to a Linux RAID with ext4.

I created the raid, copied data onto it, and everything went according to plan. Afterwards I added the emptied disk to the array, which turned it into raid4 instead of my intended raid0. I figured I would let it populate the drive and change the RAID level afterwards to continue my migration (it's personal data, so availability and delays are not an issue).
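
For reference, growing a raid0 by one disk works roughly like this (device name assumed); mdadm can only reshape a raid0 through a temporary raid4, which explains the unexpected level change:

mdadm --grow /dev/md0 --add /dev/sdc --raid-devices=2
# once the reshape finishes, convert back to raid0:
mdadm --grow /dev/md0 --level=0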

However, there was a power outage and the Debian machine crashed. I could not reboot the server, as it complained about disk corruption (the system is not installed on the RAID, but apparently it was corrupted anyway).

I reinstalled the server and created a new raid0 array on the same disks, which apparently worked. However, mounting it fails with the following error message:

mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
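
The error appears on a plain mount attempt, e.g. (mount point assumed):

mount /dev/md0 /mnt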

Here are the relevant outputs:

mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Apr 12 18:46:15 2023
        Raid Level : raid0
        Array Size : 35156391936 (33527.75 GiB 36000.15 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Apr 12 18:46:15 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : debianServer:0  (local to host debianServer)
              UUID : 694acc71:c3c4a319:fffd737c:97d86f63
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

mdadm --misc --examine /dev/sdb

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 694acc71:c3c4a319:fffd737c:97d86f63
           Name : debianServer:0  (local to host debianServer)
  Creation Time : Wed Apr 12 18:46:15 2023
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 35156391936 (16763.87 GiB 18000.07 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : a6be797a:9864c7de:a2eaad96:d5960132

    Update Time : Wed Apr 12 18:46:15 2023
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : 74880b2d - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --misc --examine /dev/sdc

/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 694acc71:c3c4a319:fffd737c:97d86f63
           Name : debianServer:0  (local to host debianServer)
  Creation Time : Wed Apr 12 18:46:15 2023
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 35156391936 (16763.87 GiB 18000.07 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 4ff549bf:6b4012ef:6dea5c3c:5163822f

    Update Time : Wed Apr 12 18:46:15 2023
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : 6cd2e9f1 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --assemble --scan --verbose

mdadm: looking for devices for /dev/md/0
mdadm: No super block found on /dev/sda3 (Expected magic a92b4efc, got a4862c0a)
mdadm: no RAID superblock on /dev/sda3
mdadm: No super block found on /dev/sda2 (Expected magic a92b4efc, got 0000041f)
mdadm: no RAID superblock on /dev/sda2
mdadm: No super block found on /dev/sda1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda1
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdc is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdc to /dev/md/0 as 1
mdadm: added /dev/sdb to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 2 drives.

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid0 sdb[0] sdc[1]
      35156391936 blocks super 1.2 512k chunks
      
unused devices: <none>

fdisk -l

Disk /dev/sdb: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: ASM1153E        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 16.37 TiB, 18000207937536 bytes, 35156656128 sectors
Disk model: M000J-2TV103    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md0: 32.74 TiB, 36000145342464 bytes, 70312783872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

e2fsck /dev/md0

e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>
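
For what it's worth, `-b 8193` applies to filesystems with 1 KiB blocks; with 4 KiB blocks the first backup superblock normally sits at block 32768. A read-only attempt is possible with `-n`, which answers "no" to every question:

e2fsck -n -b 32768 /dev/md0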

dmesg | tail

[12414.953934] pcieport 0000:00:1c.0:   device [8086:4dbc] error status/mask=00000001/00002000
[12414.953938] pcieport 0000:00:1c.0:    [ 0] RxErr                 
[12437.084602] pcieport 0000:00:1c.0: AER: Corrected error received: 0000:00:1c.0
[12437.084619] pcieport 0000:00:1c.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
[12437.084625] pcieport 0000:00:1c.0:   device [8086:4dbc] error status/mask=00000001/00002000
[12437.084629] pcieport 0000:00:1c.0:    [ 0] RxErr                 
[12451.086860] pcieport 0000:00:1c.0: AER: Corrected error received: 0000:00:1c.0
[12451.086878] pcieport 0000:00:1c.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
[12451.086883] pcieport 0000:00:1c.0:   device [8086:4dbc] error status/mask=00000001/00002000
[12451.086888] pcieport 0000:00:1c.0:    [ 0] RxErr 

mdadm -E /dev/md0

mdadm: No md superblock detected on /dev/md0.
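
This result is expected: --examine looks for md member metadata, and /dev/md0 is the assembled array itself, which should carry a filesystem rather than an md superblock. To see which signatures, if any, are still present on the array, something like this should be safe (wipefs without -a only lists signatures, it does not erase anything):

wipefs /dev/md0
file -s /dev/md0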

EDIT - Adding the output of:

mke2fs -nv /dev/md0

mke2fs 1.46.2 (28-Feb-2021)
mke2fs: Size of device (0x20bdefe00 blocks) /dev/md0 too big to be expressed
    in 32 bits using a blocksize of 4096.
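
The message itself is informative: without the ext4 64bit feature, a filesystem with 4 KiB blocks cannot address more than 2^32 blocks = 16 TiB, and this array is about 33 TiB. Assuming the local mke2fs defaults do not enable the feature automatically, the simulation can be forced with:

mke2fs -nv -O 64bit /dev/md0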

My questions are:

  1. Can I restore the filesystem and recover the data to continue?
  2. If not, can I recover the data in any other way?

Thanks a lot for your help!

Cheers,

paladin: What filesystem did you use on that RAID device (before the crash)?
noje89: ext4. But now `file -s /dev/md0` returns `/dev/md0: data`.
paladin: If able, make a backup of your RAID. Use the command `mkfs.ext4 -nv /dev/md0` (the `-n` parameter tells `mkfs.ext4` not to create a filesystem, only to simulate it). Look for the output line `Superblock backups stored on blocks:` and note the superblock numbers, for example `32768`, `98304`, and so on. After that, try to repair the filesystem using `e2fsck -b SUPERBLOCK_NUMBER /dev/md0`, starting with the lowest superblock number, and hope that a superblock is left. If you are lucky, you will find a working superblock and your filesystem can be restored, though probably not 100%.
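
A sketch of that procedure, with the grep pattern matching the usual mke2fs wording (an assumption):

mkfs.ext4 -n /dev/md0 | grep -A1 'Superblock backups'
e2fsck -b 32768 /dev/md0    # then repeat with each listed block, lowest first
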
noje89: I added the output of `mke2fs -nv /dev/md0` above. Unfortunately it returns `Size of device (0x20bdefe00 blocks) /dev/md0 too big to be expressed in 32 bits using a blocksize of 4096`. I just realized it's probably because I'm using two 16 TB drives. Could I remove this RAID and recreate one with only one disk to attempt to rescue the data?
paladin: Do you know the exact original size (byte-exact) of your former ext4 filesystem?
noje89: No... I created it on the raid0 with one disk, before the crash, and I didn't take note of it.
paladin: Another question, just to make sure: was there a partition table on `/dev/md0`?
noje89: Not sure, I just created it with `mkfs.ext4 -F /dev/md0`.
paladin: That means no partition table. You might try `mkfs.ext4 -nv /dev/md0 16t`, which should generate simulated output for a 16 TiB filesystem. Use that output as previously explained. It may not be as helpful as simulating with the exact filesystem size, but that's all you can do for now.
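
For a 4 KiB-block ext4 with the default sparse_super feature, the first backup superblocks sit at fixed locations, so a loop over the usual candidates (read-only, via -n) is a reasonable sketch:

for sb in 32768 98304 163840 229376 294912 819200 884736 1605632; do
    echo "== trying backup superblock $sb =="
    e2fsck -n -b "$sb" /dev/md0 && break    # stop at the first clean check
done
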
noje89: Thank you for your help. In the end, I managed to run mkfs.ext4 with some values (it got killed several times due to lack of RAM/CPU) and I could mount the partition. However, all the data was lost. Fortunately I have some (older) backups and still have access to the original data (I'll have to digitize some of it again, but it should not be a major loss). Thanks again for your kind help!
paladin: May I give you some advice for the future: use another kind of RAID, namely the RAID functionality of the BTRFS or ZFS filesystems. I recommend BTRFS, but do some research before switching. Also, RAID5 and RAID6 should not be used on drives that are very large and relatively slow, as HDDs are; better to use only RAID1. One hint: BTRFS also supports RAID5, but don't use it, as it is currently experimental and not stable. Filesystem-level RAID is more efficient, stable and secure than a block-level RAID mechanism like mdadm.
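
As a sketch of that suggestion, a two-disk BTRFS RAID1 would be created roughly like this (device names reused from this thread purely as placeholders):

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt    # either member device can be named in the mount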