mdadm RAID 1 array can't mount after resync

I have a RAID 1 array that works fine with one drive, and also during the resync after I add the second — but once I reboot, I can no longer mount the filesystem. With only one drive in the array (I can remove either one with the same result) I get a /dev/mdxxpxx device that mounts the filesystem fine. When I add the second drive (while still mounted) it resyncs without an issue and I end up with an array marked clean. But after a reboot I no longer see the mdxxpxx device and I am unable to mount the filesystem:

root@Watchme:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Nov 14 12:06:40 2021
        Raid Level : raid1
        Array Size : 23439733760 (21.83 TiB 24.00 TB)
     Used Dev Size : 23439733760 (21.83 TiB 24.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov 14 12:06:40 2021
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : xxxxx:0  (local to host xxx)
              UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

root@Watchme:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[1] sda1[0]
      23439733760 blocks super 1.2 [2/2] [UU]
      bitmap: 0/175 pages [0KB], 65536KB chunk

unused devices: <none>

root@Watchme:~# mdadm --examine /dev/sda1 /dev/sdb1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
           Name : Watchme.wachtveitl.xyz:0  (local to host Watchme.wachtveitl.xyz)
  Creation Time : Sun Nov 14 12:06:40 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 46879467520 sectors (21.83 TiB 24.00 TB)
     Array Size : 23439733760 KiB (21.83 TiB 24.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264080 sectors, after=0 sectors
          State : clean
    Device UUID : 55a52a97:8a7019d0:eab789f9:eb18d6f0

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Nov 14 12:06:40 2021
  Bad Block Log : 512 entries available at offset 96 sectors
       Checksum : dec6072c - correct
         Events : 0


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dde5b8f6:fe3a89e5:f281c9ef:c4433874
           Name : Watchme.wachtveitl.xyz:0  (local to host Watchme.wachtveitl.xyz)
  Creation Time : Sun Nov 14 12:06:40 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 46879467520 sectors (21.83 TiB 24.00 TB)
     Array Size : 23439733760 KiB (21.83 TiB 24.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264080 sectors, after=0 sectors
          State : clean
    Device UUID : 06ed8c71:b26928da:5b59adf0:5550c044

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Nov 14 12:06:40 2021
  Bad Block Log : 512 entries available at offset 96 sectors
       Checksum : e4520e1 - correct
         Events : 0


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

root@Watchme:~# blkid|grep -v loop
/dev/sda1: UUID="dde5b8f6-fe3a-89e5-f281-c9efc4433874" UUID_SUB="55a52a97-8a70-19d0-eab7-89f9eb18d6f0" LABEL="Watchme.wachtveitl.xyz:0" TYPE="linux_raid_member" PARTUUID="98b9d7e4-1ffe-b34e-8657-e1171fee9eea"
/dev/sdb1: UUID="dde5b8f6-fe3a-89e5-f281-c9efc4433874" UUID_SUB="06ed8c71-b269-28da-5b59-adf05550c044" LABEL="Watchme.wachtveitl.xyz:0" TYPE="linux_raid_member" PARTUUID="966dc93b-3b1b-49b8-8bc3-cda98819cf2c"
/dev/sdc1: UUID="2371e92a-ce66-4367-af04-82d671001eac" TYPE="swap" PARTUUID="6a4e6d9f-01"
/dev/sdc5: UUID="909bf5e9-f558-4d26-ac96-ac8f7ff952c5" UUID_SUB="b291a607-c28f-4eab-9ae9-6621f9929cd2" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="6a4e6d9f-05"
/dev/sdd1: UUID="054a11be-eb31-480f-b0d9-9de0c9809d8e" UUID_SUB="e61e0ac0-d008-4f47-8517-a854f42dd9cb" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="Test Partition" PARTUUID="086e0cc9-2710-0000-50eb-806e6f6e6963"
/dev/sdd2: UUID="1BFA-08CE" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="Test Partition" PARTUUID="07376838-2710-0000-50eb-806e6f6e6963"
/dev/sdd3: LABEL="Win10" BLOCK_SIZE="512" UUID="47F6009C5191E06C" TYPE="ntfs" PARTLABEL="Test Partition" PARTUUID="07374128-2710-0000-50eb-806e6f6e6963"
/dev/sdd4: UUID="A583-7A72" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="Test Partition" PARTUUID="086e33d9-2710-0000-50eb-806e6f6e6963"
/dev/md0: PTTYPE="PMBR"

root@Watchme:~# mount /dev/md0 /media_lv
mount: /media_lv: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.

root@Watchme:~# mount --uuid=dde5b8f6-fe3a-89e5-f281-c9efc4433874 /media_lv
mount: /media_lv: unknown filesystem type 'linux_raid_member'.

root@Watchme:~# mount --uuid=dde5b8f6-fe3a-89e5-f281-c9efc4433874 -t ext4 /media_lv
mount: /media_lv: /dev/sda1 already mounted or mount point busy.

I have spent hours scouring the internet for solutions. If I remove either drive, the mdxxpxx device comes back, I can mount the filesystem, and all the data is fine. Resyncing is obviously a long process due to the size of the filesystem; it takes about 10 hours to complete. I have tried stopping the array and creating it again with the --assume-clean option (without mounting in between, obviously) several times. I have also simply added the second drive in as a hot spare, then added it back to the array and let it resync, which works fine — I can use the data without issues until I reboot, and then I am stuck again.
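For reference, the stop/re-create attempt went roughly like this (a sketch, not my exact commands; device names are taken from the output above, and --assume-clean skips the initial resync by telling mdadm the members are already in sync):

```shell
# Unmount first, then stop the assembled array
umount /media_lv
mdadm --stop /dev/md0

# Re-create the RAID 1 array over the same member partitions
# without triggering a resync (--assume-clean)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.2 --assume-clean /dev/sda1 /dev/sdb1
```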

I am at a loss of where to go from here and any assistance is greatly appreciated.

David

Still no idea what happened, but I basically just started from scratch: I removed everything, including the partitions on the disks, and now I get a UUID for /dev/md0 and can mount and unmount it just fine. When I originally created the array there was already a filesystem on the first disk, and I think that MAY have been what tripped me up, because I never created a filesystem on /dev/md0 afterwards — the existing filesystem showed up and could be mounted via the md0p1 device.
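The from-scratch rebuild went roughly like this (a sketch, not my exact commands; device names are assumed from the output above, ext4 is assumed for the new filesystem, and the last two lines assume a Debian-style system):

```shell
# Stop the array and wipe the old RAID metadata and partition tables
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
wipefs --all /dev/sda /dev/sdb
# (re-create the partitions with fdisk/parted here)

# Create the array fresh
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Crucially, make a NEW filesystem on the md device itself
mkfs.ext4 /dev/md0

# Record the array so it assembles consistently on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```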

Just posting my findings in case it helps someone in the future.
