I'm also running a Synology NAS (a DS916+, 4-bay) and upgraded my disks to extend storage.
Before that I did an upgrade from DSM6 to the latest DSM7, as I had heard RAID rebuilds are faster under DSM7.
That was a process in itself, since I had Plex installed as a native Synology package.
So I did that first; after that, on to the real work of replacing the disks and extending the storage.
Before the upgrade:
4x 4TB in RAID 10 with BTRFS, single volume (so 8TB usable space and 8TB for the mirror redundancy); around 5.4TB was already in use.
Before starting the process, I made a backup of the complete volume with Hyper Backup to a single 8TB external HDD (I already had one from DSM6, but created a fresh one under DSM7).
My upgrade goal was to end up with 4x 8TB, so 16TB usable space and 16TB for redundancy.
Keep in mind my NAS allows hot-swapping. Also, as all my disks were the same size, the order is irrelevant; if you have disks of mixed sizes, always replace the currently smallest disk with a larger one.
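If you do have mixed sizes, that selection rule is easy to sketch in shell (the sizes below are hypothetical, just to illustrate picking the next disk):

```shell
# Hypothetical disk sizes in TB for a mixed setup (not my actual disks).
sizes="4 6 4 8"
# The next disk to swap is always the smallest remaining one.
next=$(echo "$sizes" | tr ' ' '\n' | sort -n | head -n 1)
echo "Replace a ${next}TB disk next"
```

After each swap, drop the replaced size from the list and repeat.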
1.) Went into the Storage Manager, chose disk 1 under the HDD/SSD menu, and chose to deactivate the disk.
2.) Beeping starts and the RAID shows as degraded; confirmed that disk 1 was physically deactivated (the LED for disk 1 was orange instead of green), then removed the disk from the NAS.
3.) Removed the disk from its tray, replaced the 4TB with the new 8TB, and slid it back into slot 1.
4.) A notification pops up that a new, not yet initialized disk is available (it should also be listed under the HDD/SSD menu). Clicked on the blue link in the notification.
5.) Confirmed that I wanted to initialize the disk, add it to the RAID, and repair it.
It should be the first option, "Repair the storage pool" or similar.
6.) Waited until it finished (see the % status in the storage pool overview).
7.) Once disk 1 has finished the RAID rebuild/resync, repeat the same process for disk 2 and then disk 3, one disk at a time.
It took me about 6 hours per disk; all my Docker instances were turned off during the process, and my Plex server was stopped as well.
You can also set the rebuild speed to slow (lower impact), fast (faster resync), or custom in the storage pool settings (click on the storage pool; there is a button labeled "Global Settings").
Fast was about 100MB/s; I set a custom 300 minimum / 600 maximum, and it maxed out at around 200MB/s (not sure why a minimum setting even exists, but anyway...).
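For context, DSM's rebuild speed setting appears to map onto the standard Linux md-raid sysctls, /proc/sys/dev/raid/speed_limit_min and speed_limit_max, which are expressed in KB/s (this assumes DSM uses the stock md stack, which it does on most models). A quick sketch converting the GUI's MB/s figures into those units:

```shell
# Assumption: the GUI's MB/s values correspond to the md sysctls' KB/s,
# using 1 MB = 1000 KB here.
min_mbs=300
max_mbs=600
min_kbs=$((min_mbs * 1000))
max_kbs=$((max_mbs * 1000))
echo "speed_limit_min: ${min_kbs} KB/s, speed_limit_max: ${max_kbs} KB/s"
# On a root SSH session you could inspect the live values with:
#   cat /proc/sys/dev/raid/speed_limit_min
#   cat /proc/sys/dev/raid/speed_limit_max
```

I'd change these through the GUI rather than by hand, but the sysctls explain why there is both a minimum and a maximum.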
8.) Disk 4 went the same way as disks 1, 2 and 3 regarding initializing it and using it to repair the storage pool; however, before starting the repair, the shown maximum capacity after the repair was 14.xx TB instead of 7.3TB.
The usable space can only be as large as your "smallest disk capacity times 2".
So far that had been 4TB x 2 each time. But as three disks had already been replaced with larger ones, adding the fourth and last disk meant the usable size becomes 8TB x 2, so the Storage Manager will automatically resize and extend the volume as well, which it did in my case.
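Put as numbers, the "smallest disk times 2" rule for a 4-disk RAID 10 looks like this:

```shell
# RAID 10 with 4 disks: usable space = smallest disk * (disk count / 2).
disks=4
smallest_tb=4
before=$((smallest_tb * disks / 2))
echo "Before: ${before}TB usable"
# Even with three 8TB disks already installed, one remaining 4TB disk caps it here.
smallest_tb=8
after=$((smallest_tb * disks / 2))
echo "After: ${after}TB usable"
```

That is why nothing grows until the very last disk is swapped.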
As far as I remember from what I've read, if the automatic expansion does not happen, a manual expand option should become available once all 4 disks are replaced and upgraded; it won't be offered while one disk is still not upgraded.
The last disk took a bit more than 6 hours (maybe around 7 to 8), as it not only rebuilds the RAID but also extends it afterwards.
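The roughly 6 hours per disk reported above is about what the observed ~200MB/s resync rate predicts; a back-of-the-envelope sketch (assuming the resync walks the whole disk and taking 1TB = 1,000,000MB):

```shell
# Back-of-the-envelope: resyncing a full 4TB disk at ~200MB/s.
disk_mb=$((4 * 1000 * 1000))   # 4TB expressed in MB
rate_mbs=200                   # observed resync rate from above
secs=$((disk_mb / rate_mbs))
echo "~$((secs / 3600))h $((secs % 3600 / 60))min per disk"
```

That comes out to roughly five and a half hours, which matches the ~6 hours once you add some overhead.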
The same should also apply to DSM6, though it may be a bit slower and the menu items have different names.
The process is the same: you can only extend the storage once all disks are replaced and the RAID rebuild is done.
I hope that helps, even if I couldn't help with the mdadm part.