Score:0

Growing a RAID-10 on Synology NAS?


I was hoping to add two additional disks to my Synology NAS. It is currently configured as a four-disk RAID-10 (no SHR), BTRFS, single-volume system. Unfortunately, extending a RAID-10 is not possible via the GUI, but it should be possible with the shipped mdadm utility.

I took the following steps:

  1. added two more hard disks to the system
  2. formatted the hard disks identically to the existing disks
  3. added the new disks/partitions to the three RAID arrays:
sudo mdadm /dev/mdX --add /dev/sata5pX /dev/sata6pX (X equals 1, 2, 3)
  4. grew the RAID:
sudo mdadm --grow /dev/mdX --raid-devices=6 (X equals 1, 2, 3)
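
(For reference, the progress of an md reshape and the resulting array size can be checked with the standard md tools; the device name below is the data array from my steps above:

cat /proc/mdstat
sudo mdadm --detail /dev/md2

Once the reshape is done, mdadm --detail should report "Raid Devices : 6" and a larger "Array Size" for the data array.)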

This is the point where I'm a bit lost. Somehow /dev/md2 is mapped to /dev/mapper/cachedev_0, but I don't know how (nothing in fstab, nothing in the mount output). I found the conf file /etc/space/vspace_layer.conf; this must be some Synology-internal thing? There are also some JSON files in /etc/space/space_table, which I guess are created on boot.
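
(For anyone trying to trace this mapping: the device-mapper stack itself can be inspected with standard tools, independent of the Synology config files mentioned above:

sudo dmsetup ls --tree
sudo dmsetup table cachedev_0
lsblk

dmsetup table prints the target type and the mapping length in 512-byte sectors, so it shows whether cachedev_0 is still sized for the old array.)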

However, BTRFS still reports the original size, and running btrfs filesystem resize max /dev/mapper/cachedev_0 does not change anything. So I must be missing a step.

Could this be: mdadm /dev/md2 --grow --size=max?
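
(One way to narrow it down, assuming /dev/md2 really is the array behind cachedev_0, would be to compare the sizes of each layer:

sudo mdadm --detail /dev/md2 | grep 'Array Size'
sudo blockdev --getsize64 /dev/md2
sudo blockdev --getsize64 /dev/mapper/cachedev_0
sudo btrfs filesystem show /dev/mapper/cachedev_0

If md2 already reports the grown size but cachedev_0 still reports the old one, it would be the device-mapper layer in between, not the array or the filesystem, that still needs resizing.)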

Any help is appreciated.

Score:0

I'm also running a Synology NAS (DS916+, 4-bay NAS) and upgraded my disks to extend storage.

Before that I did an upgrade from DSM6 to the latest DSM7, as I had heard that RAID rebuilds are faster under DSM7.

That was a process in itself, because I have Plex installed as a native Synology package.

So I did that first; after that came the real work of replacing disks and extending the storage.

Before the upgrade: 4x 4 TB in RAID 10 with BTRFS, single volume (so 8 TB usable space, 8 TB used for the mirror copies), with around 5.4 TB already in use.

Before starting the process, I made a backup of the complete volume with Hyper Backup (I had already done that under DSM6, but created another one under DSM7) to a single 8 TB external HDD.

My upgrade path was to end up with 4x 8 TB, so 16 TB usable space and 16 TB for the mirror copies.

Keep in mind that my NAS allows hot-swapping. Also, as all my disks were the same size, the replacement order is irrelevant; if you have disks of different sizes, always replace the currently smallest disk with a larger one.

1.) Went into Storage Manager, chose disk 1 under the HDD/SSD menu, and chose to deactivate the disk.

2.) Beeping starts and the RAID is degraded. Confirmed that disk 1 was deactivated (the LED for disk 1 was orange instead of green), then physically removed the disk from the NAS.

3.) Removed the disk from the casing, replaced the 4 TB with the new 8 TB, and slid it back into slot 1.

4.) A notification pops up that a new, not yet initialized disk is available (it should also be listed under the HDD/SSD menu). Clicked on the blue text of that notification.

5.) Confirmed that I want to initialize that disk, add it to the RAID, and repair it. It should then be the first option, "repair the storage pool" or similar.

6.) Wait until finished (see % status in the storage pool overview).

7.) Once disk 1 has finished the RAID rebuild/resync, repeat the same process for disk 2 and then disk 3, one disk at a time.
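
(If you have SSH enabled, the rebuild/resync progress can also be watched from the shell instead of the GUI, via the standard md status file:

cat /proc/mdstat

It shows a percentage and an estimated finish time per array while the recovery is running.)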

It took me about 6 hours per disk; all my Docker instances were turned off while doing that, and my Plex server was stopped as well.

Also, you can set the rebuild speed to slow (lower impact), fast (faster resync) or custom in the storage pool settings (click on the storage pool; there is a button labelled "Global Settings").

Fast was about 100 MB/s; I set it to a custom 300 minimum and 600 maximum, and it maxed out at around 200 MB/s (not sure why a minimum setting even exists, but anyway...).
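
(As far as I understand, these settings correspond to the standard Linux md rebuild speed limits, so with SSH access they can also be read or set directly; that correspondence to the DSM dialog is my assumption:

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
sudo sysctl -w dev.raid.speed_limit_max=600000

The kernel values are in KB/s.)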

8.) Disk 4 was like disks 1, 2 and 3 regarding initializing it and using it to repair the storage pool; however, before starting the repair, it showed the maximum capacity after the repair as 14.xx TB instead of 7.3 TB.

The usable space can only be as large as your smallest disk capacity times 2. So up to that point it had been 4 TB x 2 each time. But as 3 disks had now already been replaced with larger ones, adding the 4th and last disk meant the usable size became 8 TB x 2, so Storage Manager will automatically also resize and extend the volume, which it did in my case.

As far as I remember from what I've read, if the automatic expansion does not happen, the expand option should become available once all 4 disks have been replaced and upgraded; it is not offered while one disk is not yet upgraded.

The last disk took a bit more than 6 hours (maybe around 7 to 8), as it not only rebuilds the RAID but then also extends it.

The same should also apply to DSM6, although it may be a bit slower and the menu items will have different names. The process is the same: you can only extend the storage once all disks have been replaced and the RAID rebuild is done.

I hope that helps, even if I could not help with the mdadm part.

TylerDurden:
Thanks for your input but that is not the same scenario. Replacing disks with bigger ones should indeed work fine. I'm trying to add additional disks.