Adding storage space to a RAID1 setup


I currently have an Ubuntu 18.04.6 LTS server with two 6TB HDDs set up in RAID1 like so:

~$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdd2[0] sdb2[1]
      5859412992 blocks super 1.2 [2/2] [UU]
      bitmap: 1/44 pages [4KB], 65536KB chunk
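(Side note: the block count mdstat prints is in 1 KiB units, which is why a vendor "6 TB" drive shows up as roughly 5.5 TiB; a quick sanity check using the figure from mdstat above:)

```shell
# /proc/mdstat reports the array size in 1 KiB blocks; converting
# to TiB explains the "5.5T" that lsblk reports for a drive sold
# as 6 TB (decimal).
awk 'BEGIN { printf "%.1f TiB\n", 5859412992 / 1024^3 }'
# -> 5.5 TiB
```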

The space on the drives will run out soon, so I bought two 16TB HDDs that I want to add (already physically connected in the server but not set up). From what I understand, I cannot add these as a separate RAID1 configuration (a 16TB mirror plus the 6TB mirror) and need to move to RAID 10. Is this true? Can't I just have the two 16TB drives in their own RAID1, mounted as a different folder?

Can I use the two 16TB HDDs in combination with the two 6TB ones in a RAID 10, or do they all have to be the same size?
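(My understanding, which may be wrong, is that mdadm accepts mixed-size members in a RAID10 but clips every member to the smallest device, so with the default near-2 layout the usable capacity would work out to:)

```shell
# Assumed behaviour: each member is clipped to the smallest device
# (~5.5 TiB usable on the 6 TB drives), and near-2 RAID10 keeps two
# copies of everything, so usable = smallest * members / 2.
awk 'BEGIN { printf "%.0f TiB usable\n", 5.5 * 4 / 2 }'
# -> 11 TiB usable
```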

How do I go about adding the two drives and migrating to the new RAID setup without losing the existing data?

Business requirements:

  1. Redundancy / Fault tolerance
  2. Fast read/write (big data)
  3. Increased HD space; it does not necessarily have to act as one drive (a new mount point/folder is fine if that is easier)


Update: following the instructions at the link below, I added the two drives as an additional RAID1 using the commands below (current lsblk output shown first), rebooted the computer, and now can't ssh into it.

sda            14.6T                   disk
sdb             5.5T                   disk
├─sdb1          953M vfat              part
└─sdb2          5.5T linux_raid_member part
  └─md0         5.5T LVM2_member       raid1
    ├─vg-swap 186.3G swap              lvm   [SWAP]
    ├─vg-root  93.1G ext4              lvm   /
    ├─vg-tmp   46.6G ext4              lvm   /tmp
    ├─vg-var   23.3G ext4              lvm   /var
    └─vg-home   5.1T ext4              lvm   /home
sdc            14.6T                   disk
sdd             5.5T                   disk
├─sdd1          953M vfat              part  /boot/efi
└─sdd2          5.5T linux_raid_member part
  └─md0         5.5T LVM2_member       raid1
    ├─vg-swap 186.3G swap              lvm   [SWAP]
    ├─vg-root  93.1G ext4              lvm   /
    ├─vg-tmp   46.6G ext4              lvm   /tmp
    ├─vg-var   23.3G ext4              lvm   /var
    └─vg-home   5.1T ext4              lvm   /home
~$ sudo mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdc
~$ sudo mkfs.ext4 -F /dev/md1
~$ sudo mkdir -p /mnt/md1
~$ sudo mount /dev/md1 /mnt/md1
~$ df -h -x devtmpfs -x tmpfs
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/vg-root   92G  7.5G   79G   9% /
/dev/sdd1            952M  4.4M  947M   1% /boot/efi
/dev/mapper/vg-var    23G  6.0G   16G  28% /var
/dev/mapper/vg-tmp    46G   54M   44G   1% /tmp
/dev/mapper/vg-home  5.1T  2.5T  2.4T  51% /home
/dev/md1              15T   19M   14T   1% /mnt/md1
~$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 name=mypc:0 UUID=someweirdhash
ARRAY /dev/md1 metadata=1.2 name=mypc:1 UUID=someweirdhash
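(For reference, each ARRAY line pins an array by UUID so it can be assembled at boot; annotated, the fields are:)

```
# ARRAY <device node>  metadata=<superblock version>  name=<homehost:index>  UUID=<array UUID>
ARRAY /dev/md1 metadata=1.2 name=mypc:1 UUID=someweirdhash
```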
~$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.15.0-166-generic
~$ sudo reboot

Cannot ssh into server after reboot.

DID NOT DO THIS (what are the last two zeros below?): I wasn't sure what this command does and imagined it could set the new array to be the boot one, so maybe not running it is what broke things:

~$ echo '/dev/md1 /mnt/md1 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
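(Answering the parenthetical for future readers: an fstab line has six whitespace-separated fields, and the two trailing zeros are the dump flag and the fsck pass order; neither affects which array the system boots from:)

```
# <device>  <mount point>  <type>  <options>                <dump>  <pass>
/dev/md1    /mnt/md1       ext4    defaults,nofail,discard  0       0
# dump=0: never backed up by dump(8); pass=0: never fsck'd at boot
# (the root filesystem uses pass=1, other checked filesystems use 2)
```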
If point two is a legitimate requirement, then RAID1 is not the solution. As for how to go about it, I do not see why your system could not support multiple RAID arrays on different mount points: one of six terabytes and a second of sixteen.
Terrance:
RAID0 members can be all different sizes, but RAID1 is a mirror, so they have to be the same size. However, RAID0+1 is a mirror of stripes: you could mirror two striped RAID0 sets, but both sets would have to be the same size, e.g. a RAID0 of 6TB+16TB on each side. Growing a RAID always carries an inherent risk of data loss, so it's best to make sure you have a full backup before performing any RAID changes. See: for a good explanation of RAID10 vs RAID0+1
CyborgDroid:
@matigo I tried adding the two new drives as an additional RAID1 and now can't ssh into the server (see the update in the question). I tried connecting a monitor to it, but that didn't work for some reason even before these changes, so I'm not sure what to do now.

