How do I get my original RAID setup running again?

I was trying to upgrade my RAID 5 array to larger drives, following a guide I found at:

https://rabexc.org/posts/mdadm-replace

But I could tell something was wrong before I even added the larger drive: two devices seemed to disappear from the array. So I tried adding the original drive back. It looks like mdadm sees all 4 drives now, but I'm not sure what's wrong. All 4 of my original drives are connected again, and Ubuntu sees them. How do I get my original setup running again?
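
If it helps, I can also post the overall array state from the following (assuming the array device is /dev/md127, which is what I use in the assemble command further down):

cat /proc/mdstat
sudo mdadm --detail /dev/md127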

I'm running Ubuntu 20.

The 4 drives are 4 TB physical SATA disks.

I think this output is likely to be the most helpful:

sudo mdadm --examine /dev/sd[a-z]1
/dev/sda1:
   MBR Magic : aa55
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : fc1484e3:9dd42927:050c3334:510f959c

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:53:17 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : e373a9f0 - correct
         Events : 29259

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : A..A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : ef83a829:fb3b15d5:a8efbc06:6e1ce6ff

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:48:11 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : d606fcd - correct
         Events : 29256

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : 33674b45:e1670e28:6357bac7:54feeea5

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:45:08 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : e301c7e1 - correct
         Events : 29253

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e73fc900:63b29dac:59abec6d:a9ed6e02
           Name : ubuntu1:0  (local to host ubuntu1)
  Creation Time : Mon Feb  1 14:28:44 2021
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7813770895 (3725.90 GiB 4000.65 GB)
     Array Size : 11720655360 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=655 sectors
          State : clean
    Device UUID : effdc94e:e1c09065:676782d9:11099abc

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul  3 22:53:17 2023
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : 5274d44e - correct
         Events : 29259

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : A..A ('A' == active, '.' == missing, 'R' == replacing)

Update:

It looks like my drives are out of sync? The event counts don't match:

/dev/sdb1:
         Events : 29259
   Device Role : Active device 0
/dev/sdc1:
         Events : 29256
   Device Role : Active device 1
/dev/sdd1:
         Events : 29253
   Device Role : Active device 2
/dev/sde1:
         Events : 29259
   Device Role : Active device 3
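
That summary is just the full --examine output filtered down to the device names, event counts, and roles; a grep along these lines should reproduce it (the pattern is my own shorthand, adjust as needed):

sudo mdadm --examine /dev/sd[b-e]1 | grep -E '^/dev/|Events|Device Role'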

Answer:

I found that the command below fixed the syncing issue. I mostly understand it, but I also don't know what I don't know here.

One thing I learned is that I didn't have to list the /dev/sdX1 devices in any specific order. Since this is software RAID on Linux, mdadm reads the identifying metadata from each drive's superblock and uses that to assemble the array; I'm just telling it which devices to consider.

mdadm --assemble --run --force --update=resync /dev/md127 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
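
To double-check the result, something like the following should show all four devices active again with matching event counts (assuming the array still comes up as /dev/md127 and the members are still sdc1 through sdf1, as in the assemble command above):

cat /proc/mdstat
sudo mdadm --detail /dev/md127
sudo mdadm --examine /dev/sd[c-f]1 | grep -E 'Events|Array State'

Since mdadm identifies members by the metadata on the drives rather than by device name, I believe you could also assemble purely by the array UUID from the --examine output, e.g. sudo mdadm --assemble --scan --uuid=e73fc900:63b29dac:59abec6d:a9ed6e02, without naming the devices at all.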
