How do I resize the filesystem on a RAID array?

I've recently added a 5th drive to my software RAID array, and mdadm has accepted it:

$ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1        259:0    0 894.3G  0 disk
├─nvme0n1p1    259:4    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme0n1p2    259:5    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme3n1        259:1    0 894.3G  0 disk
├─nvme3n1p1    259:6    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme3n1p2    259:7    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme2n1        259:2    0 894.3G  0 disk
├─nvme2n1p1    259:8    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme2n1p2    259:9    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme1n1        259:3    0 894.3G  0 disk
├─nvme1n1p1    259:10   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme1n1p2    259:11   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme4n1        259:12   0 894.3G  0 disk
├─nvme4n1p1    259:15   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme4n1p2    259:16   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md0 : active raid1 nvme4n1p1[4] nvme1n1p1[2] nvme3n1p1[0] nvme0n1p1[3] nvme2n1p1[1]
      523264 blocks super 1.2 [5/5] [UUUUU]

md1 : active raid5 nvme4n1p2[5] nvme2n1p2[1] nvme1n1p2[2] nvme3n1p2[0] nvme0n1p2[4]
      3748134912 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 3/7 pages [12KB], 65536KB chunk

unused devices: <none>

The issue is that my filesystem still thinks that I only have 4 drives attached and hasn't grown to take advantage of the extra drive.

I've tried

$ sudo e2fsck -fn /dev/md1
e2fsck 1.45.5 (07-Jan-2020)
Warning!  /dev/md1 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/md1 contains a LVM2_member file system

and

$ sudo resize2fs /dev/md1
resize2fs 1.45.5 (07-Jan-2020)
resize2fs: Device or resource busy while trying to open /dev/md1
Couldn't find valid filesystem superblock.

But so far no luck:

$ df
Filesystem            1K-blocks       Used Available Use% Mounted on
udev                  131841212          0 131841212   0% /dev
tmpfs                  26374512       2328  26372184   1% /run
/dev/mapper/vg0-root 2681290296 2329377184 215641036  92% /
tmpfs                 131872540          0 131872540   0% /dev/shm
tmpfs                      5120          0      5120   0% /run/lock
tmpfs                 131872540          0 131872540   0% /sys/fs/cgroup
/dev/md0                 498532      86231    386138  19% /boot
/dev/mapper/vg0-tmp    52427196     713248  51713948   2% /tmp
tmpfs                  26374508          0  26374508   0% /run/user/1001
tmpfs                  26374508          0  26374508   0% /run/user/1002

I hope this is enough info, but I'm happy to provide more if it would be useful.

Since you are using LVM, you have to do multiple steps; a consolidated sketch follows the list. (This is also why e2fsck and resize2fs fail when pointed at /dev/md1 directly: the array holds an LVM physical volume, not an ext4 filesystem.)

  1. Grow the LVM physical volume with pvresize /dev/md1
  2. If you want to grow /tmp too, run lvextend -L +1G /dev/mapper/vg0-tmp
  3. If you don't want to keep some space free for future extensions of /tmp or new volumes, assign the rest to the root volume with lvextend -l +100%FREE /dev/mapper/vg0-root
  4. Resize the filesystem(s) with resize2fs /dev/mapper/vg0-root, and resize2fs /dev/mapper/vg0-tmp if that volume was also extended
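
Put together, the whole sequence might look like the sketch below. The device and volume names (/dev/md1, vg0-root, vg0-tmp) are taken from the question; the extra 1G for /tmp and the final checks are optional. ext4 supports online growing, so none of this requires unmounting:

$ sudo pvresize /dev/md1                           # grow the PV to the new array size
$ sudo lvextend -L +1G /dev/mapper/vg0-tmp         # optional: give /tmp one more GiB
$ sudo lvextend -l +100%FREE /dev/mapper/vg0-root  # hand all remaining free extents to root
$ sudo resize2fs /dev/mapper/vg0-tmp               # grow the ext4 filesystem on /tmp
$ sudo resize2fs /dev/mapper/vg0-root              # grow the ext4 filesystem on /
$ sudo pvs; sudo lvs                               # confirm the PV and LV sizes
$ df -h / /tmp                                     # verify the mounted filesystems grew

Alternatively, lvextend -r (--resizefs) runs the filesystem resize in the same step, e.g. sudo lvextend -r -l +100%FREE /dev/mapper/vg0-root.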