
How to bring up my mdadm RAID-5 array?

  1. How can I bring up my mdadm RAID-5 array?
  2. How can I get the fix to persist across reboots?

I rebooted our server last night and found that the RAID array we created about eight months ago didn't come back up, and I can't access my data. I've run a number of commands, shown below.

Some background: a couple of months ago I added a new disk, /dev/sdh, to the RAID-5 array mounted at /srv/share, following this. All seemed to work well; we got the extra space and have been using it. I'm actually not sure we've rebooted since then, apart from last night. The RAID-5 was originally created under Ubuntu 18.04 and is now running under Ubuntu 20.04.
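For reference, the standard add-then-grow procedure (which I assume the guide followed, and assuming /dev/sdh1 was the partition added; neither is shown below) looks roughly like:

$ sudo mdadm --add /dev/md0 /dev/sdh1
$ sudo mdadm --grow /dev/md0 --raid-devices=4
$ sudo resize2fs /dev/md0     # grow the ext4 filesystem once the reshape completes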

$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdf[3](S) sdb[1](S) sda[0](S)
      23441691144 blocks super 1.2
       
unused devices: <none>
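The (S) after each device marks it as a spare in this inactive array. A useful check at this point is to read each member's superblock directly and compare event counts and roles:

$ sudo mdadm --examine /dev/sda /dev/sdb /dev/sdf | grep -E '^/dev/|Events|Device Role|Array State'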


$ lsblk | grep -v loop
NAME   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda      8:0    0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdb      8:16   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdc      8:32   0   4.6T  0 disk  
└─sdc1   8:33   0   4.6T  0 part  /srv/datasets
sdd      8:48   0 298.1G  0 disk  
├─sdd1   8:49   0   190M  0 part  /boot/efi
└─sdd2   8:50   0 297.9G  0 part  /
sde      8:64   0   3.7T  0 disk  
└─sde1   8:65   0   3.7T  0 part  /srv
sdf      8:80   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdg      8:96   0   1.8T  0 disk  
├─sdg1   8:97   0   1.8T  0 part  /home
└─sdg2   8:98   0    47G  0 part  [SWAP]
sdh      8:112  0   7.3T  0 disk  
└─sdh1   8:113  0   7.3T  0 part  


$ sudo fdisk -l | grep sdh
Disk /dev/sdh: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
/dev/sdh1   2048 15628050431 15628048384  7.3T Linux filesystem
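Note that the three members mdstat does recognize are bare disks, while sdh carries a partition table, and the blkid output further down lists only sda, sdb and sdf as linux_raid_member. Whether a RAID superblock survives on /dev/sdh or /dev/sdh1 can be checked directly:

$ sudo mdadm --examine /dev/sdh
$ sudo mdadm --examine /dev/sdh1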



$ sudo mdadm -Db /dev/md0
INACTIVE-ARRAY /dev/md0 metadata=1.2 name=perception:0 UUID=c8004245:4e163594:65e30346:68ed2791
$ sudo mdadm -Db /dev/md/0
mdadm: cannot open /dev/md/0: No such file or directory



From /etc/mdadm/mdadm.conf:
ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0
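(For the persistence question: the usual way to regenerate this line after an array changes is the command below, run once the array is active again; against the inactive array it prints the INACTIVE-ARRAY line seen above. Its output can replace the existing ARRAY line.)

$ sudo mdadm --detail --scan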



$ sudo mdadm --detail /dev/md0 
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : perception:0
              UUID : c8004245:4e163594:65e30346:68ed2791
            Events : 91689

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       80        -        /dev/sdf
       -       8       16        -        /dev/sdb


$ sudo mdadm --detail /dev/md/0
mdadm: cannot open /dev/md/0: No such file or directory



$ sudo mdadm --assemble --scan
  [does nothing — most likely because the inactive md0 still holds the member devices, so there is nothing free for mdadm to claim; stopping the array first, as below, is what finally works]

$ blkid /dev/md0 [nothing]
$ blkid /dev/md/0 [nothing]

$ blkid | grep raid
/dev/sdb: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="3fefdb86-4c6b-fb76-a35e-3a846075eb54" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sdf: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="d4a58f2c-bc8b-8fd0-6b22-63b047e09c13" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sda: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="afaea924-a15a-c5cf-f9a8-d73075201ff7" LABEL="perception:0" TYPE="linux_raid_member"

The relevant line in /etc/fstab is:

UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb /srv/share     ext4    defaults        0       2


$ sudo mount -a
mount: /srv/share: can't find UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb.

I try changing the UUID in fstab to c8004245:4e163594:65e30346:68ed2791 and then remount:

$ sudo mount -a
mount: /srv/share: can't find UUID=c8004245:4e163594:65e30346:68ed2791.

Then I change it to c8004245-4e16-3594-65e3-034668ed2791 and remount:

$ sudo mount -a
mount: /srv/share: /dev/sdb already mounted or mount point busy.

Then I reboot with the new fstab entry (c8004245-4e16-3594-65e3-034668ed2791), but it makes no difference to the output of any of the commands above.
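In hindsight, neither fstab edit could work: the UUID in fstab must be the ext4 filesystem's UUID, which only exists on the assembled /dev/md0, not the md array's UUID. The array UUID is what blkid reports on the raw members, which is why mount resolved c8004245-4e16-... to /dev/sdb and then failed with "already mounted or mount point busy". The original f495abb3-... entry was correct; it just can't be found until the array is running:

$ sudo blkid /dev/md0
/dev/md0: UUID="f495abb3-36e6-4782-8f5e-83c6d3fc78eb" TYPE="ext4"   [expected once md0 is active, per the original fstab entry]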

I try changing mdadm.conf from:

ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

to:

ARRAY /dev/md0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

This makes no difference either, which makes sense in hindsight: mdadm matches the array by its UUID, so whether the line names /dev/md/0 or /dev/md0 is largely cosmetic.

Next I try stopping the array and re-assembling it with -v:

$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0

$ sudo mdadm --assemble --scan -v                                   
[ loop-device output omitted ]
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdf is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdb to /dev/md/0 as 1
mdadm: added /dev/sdf to /dev/md/0 as 2
mdadm: no uptodate device for slot 3 of /dev/md/0
mdadm: added /dev/sda to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 3 drives (out of 4).
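The "no uptodate device for slot 3" line confirms the fourth member (presumably /dev/sdh1) is missing or stale, so the array has started degraded. Once everything important is backed up, the missing member could be re-added to trigger a rebuild (a sketch, assuming sdh1 really was the fourth member and its current contents are expendable):

$ sudo mdadm /dev/md0 --add /dev/sdh1
$ cat /proc/mdstat    # watch the recovery progress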


$ dmesg
[  988.616710] md/raid:md0: device sda operational as raid disk 0
[  988.616718] md/raid:md0: device sdf operational as raid disk 2
[  988.616721] md/raid:md0: device sdb operational as raid disk 1
[  988.618892] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
[  988.639345] md0: detected capacity change from 0 to 46883371008

cat /proc/mdstat now says the RAID is active, though degraded ([4/3] [UUU_] means three of four members are up):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sda[0] sdf[3] sdb[1]
      23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/59 pages [0KB], 65536KB chunk
unused devices: <none>

and mount -a reports that /srv/share is successfully mounted:

$ sudo mount -a -v
/                        : ignored
/boot/efi                : already mounted
none                     : ignored
/home                    : already mounted
/srv                     : already mounted
/srv/share               : successfully mounted
/srv/datasets            : already mounted

but /srv/share still doesn't show up in df -h, and I still can't see any data in /srv/share:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  2.5M  6.3G   1% /run
/dev/sdd2       293G   33G  245G  12% /
tmpfs            32G   96K   32G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sde1       3.6T  455G  3.0T  14% /srv
/dev/sdd1       188M  5.2M  182M   3% /boot/efi
/dev/sdc1       4.6T  3.6T  768G  83% /srv/datasets
/dev/sdg1       1.8T  1.5T  164G  91% /home

In the end, the answer at https://unix.stackexchange.com/questions/210416/new-raid-array-will-not-auto-assemble-leads-to-boot-problems helped:

$ sudo dpkg-reconfigure mdadm    # choose "all" disks to start at boot
$ sudo update-initramfs -u       # update the existing initramfs
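To confirm the rebuilt initramfs actually contains the mdadm configuration, a quick sanity check (not part of the original answer) is:

$ lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm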

