Score:1

Recover RAID in Synology DSM 7


After a DSM update, an issue with Synology-branded cache drives caused my pool to crash and become unavailable. The three-dot menu at the bottom only offered the option to remove the pool; there was no option to enable read/write or repair the volume. I reached out to Synology support, and after a week they were able to get the pool back into read-only mode so I could back up my data.

Having remote access enabled for that long left me uneasy, and once my backups were under way, the missing cache and the improvements it offers started to irk me. I re-added the cache, and that's where my real problems began. After a long day of troubleshooting, I wanted to document how to recover the pool on DSM 7 in case you find yourself in a similar predicament.

Following advice from some RAID forums, I marked the drives as failed, removed a drive from the array, and re-added it, only to make my issues even worse! (The sequence I ran is sketched after the output below.) This was the result:

Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda5[6](S) sdb5[1](E) sdf5[5](E) sde5[4](E) sdd5[3](E) sdc5[2](E)
      87837992640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_EEEEE]

md1 : active raid1 sdg2[1]
      2097088 blocks [12/1] [_U__________]

md0 : active raid1 sdg1[1]
      2490176 blocks [12/1] [_U__________]
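
The forum advice boiled down to the usual mdadm fail/remove/add cycle. I didn't keep the exact commands, but it was roughly this sequence (sda5 was the drive I pulled; this is a reconstruction, not something to repeat blindly):

# mdadm --manage /dev/md2 --fail /dev/sda5
# mdadm --manage /dev/md2 --remove /dev/sda5
# mdadm --manage /dev/md2 --add /dev/sda5

The re-added drive only came back as a spare, which is the (S) next to sda5 in the mdstat output above.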

The mdstat output above looks bad; however, mdadm -D /dev/md2 reported much better results:

# mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Fri Aug  5 22:03:13 2022
     Raid Level : raid5
     Array Size : 87837992640 (83768.84 GiB 89946.10 GB)
  Used Dev Size : 17567598528 (16753.77 GiB 17989.22 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Wed May 17 21:46:40 2023
          State : clean
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : File01:2
           UUID : 9ef80d24:68ea4c4f:3b281ebe:790302f5
         Events : 1454

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       21        1      faulty active sync   /dev/sdb5
       2       8       37        2      faulty active sync   /dev/sdc5
       3       8       53        3      faulty active sync   /dev/sdd5
       4       8       69        4      faulty active sync   /dev/sde5
       5       8       85        5      faulty active sync   /dev/sdf5

       6       8        5        -      spare   /dev/sda5

To make matters worse, DSM 7 changed the underlying tooling, so commands like syno_poweroff_task -d no longer exist! Well, I spent a considerable amount of time arriving at a solution; hopefully this helps someone in dire need.

So here is how to get your array rebuilding:

My array had two volumes that were preventing me from stopping it, and I had to unmount them using the new Synology tools:

# synostgvolume --unmount -p /volume1
# synostgvolume --unmount -p /syno_vg_reserved_area 
# synovspace -all-unload
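
Before going further, it's worth confirming that nothing from the pool is still mounted; a quick check with standard Linux tools (the volume and VG names here are the ones from my box):

# grep -E 'volume1|vg2' /proc/mounts

No output means the unmount succeeded and the array can be stopped.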

This will leave the logical volumes looking like this; inactive and NOT available is what you're looking for:

# lvm
lvm> lvscan
  inactive          '/dev/vg2/syno_vg_reserved_area' [12.00 MiB] inherit
  inactive          '/dev/vg2/volume_1' [81.80 TiB] inherit 

lvm> lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg2/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg2
  LV UUID                2E1szd-mdDP-4kkJ-YIcF-zh1B-t3t3-1hq1Ct
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg2/volume_1
  LV Name                volume_1
  VG Name                vg2
  LV UUID                scFIlA-VoSt-KhC1-WP0u-DYBl-3IfY-Nrc4Cj
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                81.80 TiB
  Current LE             21444608
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto 
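
Before recreating the array, it's prudent to double-check the parameters recorded in each member's superblock (RAID level, chunk size, layout, and device order), since running --create with the wrong values would overwrite good metadata. A quick way to eyeball them, assuming mdadm --examine can still read the old superblocks:

# mdadm --examine /dev/sd[b-f]5 | grep -E 'Raid Level|Chunk Size|Layout|Device Role'

The values should match what mdadm -D reported earlier: raid5, 64K chunks, left-symmetric layout.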

You can now recreate the array with the same parameters to bring it back. Because I had messed up that first drive, I recreated the array with its slot left as missing, which brings the array back instantly in a degraded state; the last drive gets added back afterwards to start the rebuild.

# mdadm --stop /dev/md2
# mdadm --verbose --create /dev/md2 --chunk=64 --level=5 --raid-devices=6 missing /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Wed May 17 22:36:29 2023
     Raid Level : raid5
     Array Size : 87837992640 (83768.84 GiB 89946.10 GB)
  Used Dev Size : 17567598528 (16753.77 GiB 17989.22 GB)
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed May 17 22:36:29 2023
          State : clean, degraded
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : Morpheous:2  (local to host Morpheous)
           UUID : 96f5e08a:d64e6b15:97240cc1:54309926
         Events : 1

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5
       4       8       69        4      active sync   /dev/sde5
       5       8       85        5      active sync   /dev/sdf5

And finally, re-add the spare drive, et voilà!

# mdadm --manage /dev/md2 --add /dev/sda5
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda5[6] sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1]
      87837992640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
      [>....................]  recovery =  0.0% (4341888/17567598528) finish=3980.8min speed=73531K/sec

md1 : active raid1 sdg2[1]
      2097088 blocks [12/1] [_U__________]

md0 : active raid1 sdg1[1]
      2490176 blocks [12/1] [_U__________]  
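
Rebuilding an array this size takes a while (the estimate above works out to roughly 66 hours). Progress can be followed by re-running cat /proc/mdstat, or, if watch is available on your DSM install, something like:

# watch -n 60 cat /proc/mdstat
# mdadm --detail /dev/md2 | grep -E 'State|Rebuild Status'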

Hopefully this helps someone in the future.

djdomi: Please share your solution as an answer; otherwise this will sit here forever as unresolved and keep getting bumped for an answer. Also, after 24 hours you can accept your own answer as the solution. Thanks for sharing.

Another commenter: Please don't use RAID 5; it's dangerous, and storage professionals avoid it.
Score:0

In DSM 7, here is how to unmount the filesystem:

# synostgvolume --unmount -p /volume1
# synostgvolume --unmount -p /syno_vg_reserved_area 
# synovspace -all-unload

Then stop the array and recreate it:

# mdadm --stop /dev/md2
# mdadm --verbose --create /dev/md2 --chunk=64 --level=5 --raid-devices=6 missing /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
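
Then re-add the remaining drive so the rebuild kicks off, exactly as in the walkthrough above:

# mdadm --manage /dev/md2 --add /dev/sda5
# cat /proc/mdstat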