
Replacing failed drive using mdadm


I have a test server running Ubuntu Server 22.04.

The server boots from a small SSD. When setting it up I used mdadm to create a RAID1 array with two 1 TB drives for data. After a few months, one of the drives failed. The mirror kept working, so I was able to continue using the server while waiting for a replacement drive to arrive.

I followed some instructions in a video I found online and was able to replace the drive and rebuild the array.
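For reference, the steps I followed were roughly these (reconstructed from memory, so the exact commands in the video may have differed; /dev/sdX1 stands in for the failed member, whose name I no longer have):

```shell
# Mark the dead member as failed and remove it from the array
# (/dev/sdX1 is a placeholder for the failed device).
mdadm --manage /dev/md0 --fail /dev/sdX1
mdadm --manage /dev/md0 --remove /dev/sdX1

# Create a single partition spanning the new disk, typed as Linux RAID.
sgdisk --new=1:0:0 --typecode=1:fd00 /dev/sdd

# Add the new partition to the array and watch the rebuild.
mdadm --manage /dev/md0 --add /dev/sdd1
cat /proc/mdstat
```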

One thing bothers me in the lsblk output:

sdc 8:32 0 111.8G 0 disk
├─sdc1 8:33 0 1M 0 part
└─sdc2 8:34 0 111.8G 0 part /

sdd 8:48 0 931.5G 0 disk
├─sdd1 8:49 0 931.5G 0 part
└─md0 9:0 0 931.4G 0 raid1 /mnt/md0

sde 8:64 0 931.5G 0 disk
└─md0 9:0 0 931.4G 0 raid1 /mnt/md0

The original, still-working RAID member is sde. The new replacement is sdd. Following the video's instructions, I created the partition sdd1 on the new drive. No such partition shows for sde.

Running mdadm --detail /dev/md0 shows no errors:

/dev/md0:
             State : clean
    Active Devices : 2

    Number   Major   Minor   RaidDevice State
       2       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
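In case it helps, here is how I compared what mdadm reports against the device nodes themselves (non-destructive, read-only commands run as root):

```shell
# Inspect the RAID superblock on each candidate device; members
# show metadata, non-members report no superblock.
mdadm --examine /dev/sdd /dev/sdd1 /dev/sde

# The major:minor numbers here can be matched against the
# "Major  Minor" columns in mdadm --detail /dev/md0.
ls -l /dev/sdd /dev/sdd1 /dev/sde
```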

Did I miss a step creating the original RAID1 array? Should I take some corrective measures or leave well enough alone?


