mdadm RAID10 device ordering not sequential

After a power outage, one of the drives in my two RAID10 arrays stopped working and put both arrays into a 'degraded' state. The drive had two partitions: one, I think, belonged to a swap RAID and the other to the Ubuntu RAID. After moving things off the Ubuntu RAID to another disk, I recently discovered that one of the missing disk's partitions had somehow been restored and re-added to its array. Thinking it might be okay now, I used the following command to re-add the remaining partition:

sudo mdadm --manage --add /dev/md0 /dev/sdc1

And it 'works', but the re-added partition got a different device number.

sudo cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1890760704 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid10 sdc1[4] sdd1[3] sdb1[1] sda1[0]
      62466048 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>

/dev/sdc is the formerly missing drive. Note that sdc1 and sdc2 have device numbers 4 and 2 in md0 and md1 respectively; before the failure, both were number 2. Nevertheless, things seem to be 'working' again. Here is some more output comparing the two arrays.

sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Sep 28 07:19:50 2017
        Raid Level : raid10
        Array Size : 62466048 (59.57 GiB 63.97 GB)
     Used Dev Size : 31233024 (29.79 GiB 31.98 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Sep  2 22:58:51 2023
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : Steward:0
              UUID : 587d0912:cbf49281:ed0bd4a2:c1a0102a
            Events : 567

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       4       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1

Compared to md1:

sudo mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Sep 28 07:20:13 2017
        Raid Level : raid10
        Array Size : 1890760704 (1803.17 GiB 1936.14 GB)
     Used Dev Size : 945380352 (901.58 GiB 968.07 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Sep  2 22:34:13 2023
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : Steward:1
              UUID : c2ee95cd:b36cdadf:43b68247:674d01f9
            Events : 7585156

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync set-A   /dev/sda2
       1       8       18        1      active sync set-B   /dev/sdb2
       2       8       34        2      active sync set-A   /dev/sdc2
       3       8       50        3      active sync set-B   /dev/sdd2
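
If it helps, I believe the metadata on the re-added partition can be checked directly with mdadm's examine mode; as far as I understand, the "Device Role" recorded in the member's own superblock (the RaidDevice slot) is what determines where the data lives, while the "Number" column above is just an internal index:

# if my reading of the --detail output is right, this should report "Device Role : Active device 2"
sudo mdadm --examine /dev/sdc1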

Have I done something wrong? md1 has an "Intent Bitmap" line that md0 doesn't have. I suspect the numbering difference means some leftover metadata from the original failure is still hanging around, and I'd like to clean it up if that's the case. Any advice would be appreciated.
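
If the missing bitmap on md0 turns out to matter, my understanding is that an internal write-intent bitmap can be added to an existing array after the fact. A sketch of what I would try (untested on my end):

# add an internal write-intent bitmap so md0 matches md1's consistency policy
sudo mdadm --grow --bitmap=internal /dev/md0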
