Score:-1

How do I resize the filesystem on a RAID0 array?

cn flag

My server consists of 2x512GB and 1x3.5TB disks.

root@bb2 ~ # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1     259:0    0 476.9G  0 disk
├─nvme0n1p1 259:3    0  10.7G  0 part
│ └─md0       9:0    0    32G  0 raid0 [SWAP]
├─nvme0n1p2 259:4    0     1G  0 part
│ └─md1       9:1    0  1022M  0 raid1 /boot
└─nvme0n1p3 259:5    0 465.3G  0 part
  └─md2       9:2    0   1.4T  0 raid0 /
nvme1n1     259:1    0   3.5T  0 disk
├─nvme1n1p1 259:6    0  10.7G  0 part
│ └─md0       9:0    0    32G  0 raid0 [SWAP]
├─nvme1n1p2 259:7    0     1G  0 part
│ └─md1       9:1    0  1022M  0 raid1 /boot
└─nvme1n1p3 259:8    0 465.3G  0 part
  └─md2       9:2    0   1.4T  0 raid0 /
nvme2n1     259:2    0 476.9G  0 disk
├─nvme2n1p1 259:9    0  10.7G  0 part
│ └─md0       9:0    0    32G  0 raid0 [SWAP]
├─nvme2n1p2 259:10   0     1G  0 part
│ └─md1       9:1    0  1022M  0 raid1 /boot
└─nvme2n1p3 259:11   0 465.3G  0 part
  └─md2       9:2    0   1.4T  0 raid0 /

The 3.5T disk nvme1n1 is not fully used by the RAID0 array.

root@bb2 ~ # mdadm --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu May  4 18:14:01 2023
        Raid Level : raid0
        Array Size : 1463221248 (1395.44 GiB 1498.34 GB)

My attempt to grow md2 to the maximum size was unsuccessful.

root@bb2 ~ # sudo mdadm --grow /dev/md2 -z max
mdadm: Cannot set device size in this type of array.

Can you suggest how I can utilize all of the space on nvme1n1?

in flag
The manual says for RAID0: `You have two or more devices, of approximately the same size`. My guess would be you can't.
in flag
https://superuser.com/questions/615645/why-cant-raid-0-utilise-all-disk-space-on-two-different-sized-disks
Mihhail Sidorin
cn flag
Thank you, that seems like a logical reason.
Mihhail Sidorin
cn flag
I have another server with 2x1TB and 1x3.5TB disks, and there ``` root@sh2 ~ # mdadm --detail /dev/md3 /dev/md3: Version : 1.2 Creation Time : Thu May 4 18:13:06 2023 Raid Level : raid0 Array Size : 5742618624 (5476.59 GiB 5880.44 GB) ``` it was possible to merge them. How so?
br flag
Can I ask why R0?
jm flag
You can add all the devices together by partitioning the larger disks with partition sizes matching the smaller disk size (see the sketch below). Just be aware that with RAID 0, if you lose *any* device or partition, all of your data is gone. With the data striped across the devices, you have little chance of recovering anything beyond data that fits inside a single stripe. My only use case for RAID 0 is when you have limited hardware resources, large data sets that cannot be partitioned, and the data is completely reproducible and can be lost at a moment's notice.
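A rough sketch of that approach, assuming a recent mdadm/kernel that can grow a RAID0 array by temporarily converting it to RAID4; partition boundaries and the new partition name are illustrative, not taken from the question:

```
# Illustrative only -- verify offsets against your own partition table,
# and back up first: losing any RAID0 member destroys the whole array.

# Carve part of the unused tail of the large disk into a ~465 GiB
# partition matching the existing members (start/end are hypothetical).
parted /dev/nvme1n1 mkpart primary 512GiB 977GiB

# Add the new partition and grow the array to 4 devices. Recent mdadm
# performs this for RAID0 via a temporary RAID4 conversion; if the array
# is left at raid4 after the reshape, convert it back:
#   mdadm --grow /dev/md2 --level=0
mdadm --grow /dev/md2 --raid-devices=4 --add /dev/nvme1n1p4

# Finally grow the filesystem on top (ext4 shown; use xfs_growfs for XFS).
resize2fs /dev/md2
```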
djdomi
za flag
The deal is, only the smallest drive counts for RAID 0. Only if you used JBOD would the size not matter.
Score:0
ca flag

Linux mdraid RAID0 expects all disks to be the same size. Otherwise, it will clamp every component device to the size of the smallest one. The same is true for the other RAID levels (1, 5, 6, 10). The only exception is LINEAR arrays - i.e., concatenated devices.
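For illustration, a LINEAR array accepts mismatched member sizes, since it simply appends one device after the other (device names below are hypothetical):

```
# A LINEAR (concatenated) array, unlike RAID0, keeps each member's full
# capacity even when the sizes differ. No striping, so no performance
# gain -- but also no capacity lost to the smallest member.
mdadm --create /dev/md4 --level=linear --raid-devices=2 /dev/sdX1 /dev/sdY1
```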

The other approach is to extend the volume rather than the backing array, but this requires a volume manager rather than raw partitions (which you are currently using). Growing a volume via LVM conceptually does the same thing as a LINEAR array: it concatenates two different block devices into a single one.
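A minimal sketch of the LVM route, assuming the data already sits on an LVM logical volume; the volume group and LV names here are placeholders, and the asker's setup would first have to be migrated onto LVM:

```
# Turn a spare partition on the large disk into a physical volume
# (/dev/nvme1n1p4 is a hypothetical new partition).
pvcreate /dev/nvme1n1p4

# Add it to the volume group backing the filesystem ("vg0" is a placeholder).
vgextend vg0 /dev/nvme1n1p4

# Extend the logical volume over all the new space and resize the
# filesystem in one step (-r invokes the appropriate resize tool).
lvextend -r -l +100%FREE /dev/vg0/root
```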
