I've been googling and searching and can't seem to find an answer to my specific problem, even though many people have posted similar issues. Here we go.
I'm trying to mount my RAID 5 array at /mnt/raid. I added this line to /etc/fstab:
UUID=202483d5-808d-9a85-5361-c119c4bdbda4 /mnt/raid ext4 defaults 0 2
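For reference, an fstab UUID= entry is matched against the filesystem UUID that blkid reports on the device being mounted. With the array assembled as /dev/md0, that would be checked like this (diagnostic sketch; output depends on what's actually on the array):

```shell
# The UUID fstab matches is the one blkid reports on the assembled
# device itself, not on the member disks.
sudo blkid /dev/md0
```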
Here's the blkid uuid output:
/dev/sdc: UUID="202483d5-808d-9a85-5361-c119c4bdbda4" UUID_SUB="9a17b921-7b72-8031-71de-8049fddda330" LABEL="homeserver:0" TYPE="linux_raid_member"
/dev/sdb: UUID="202483d5-808d-9a85-5361-c119c4bdbda4" UUID_SUB="501b9b14-5c17-f798-78a5-dbd9afdeaba3" LABEL="homeserver:0" TYPE="linux_raid_member"
/dev/sde: UUID="202483d5-808d-9a85-5361-c119c4bdbda4" UUID_SUB="bafc7cbe-294d-70a5-10fa-8b4412c3a2b9" LABEL="homeserver:0" TYPE="linux_raid_member"
/dev/sdd: UUID="202483d5-808d-9a85-5361-c119c4bdbda4" UUID_SUB="76ad59f4-aa6e-1cde-2026-3077310841f8" LABEL="homeserver:0" TYPE="linux_raid_member"
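Side note on the outputs above, if I'm reading them right: the UUID blkid reports on each member disk looks like a filesystem UUID, but it is the mdadm array UUID in different punctuation. Stripping the separators from the two values (both copied verbatim from this post) makes that visible:

```shell
# Values copied verbatim from the blkid and mdadm outputs in this post.
blkid_uuid="202483d5-808d-9a85-5361-c119c4bdbda4"   # on each member disk
mdadm_uuid="202483d5:808d9a85:5361c119:c4bdbda4"    # from mdadm --detail

# Strip the separators: both reduce to the same 32 hex digits, i.e. the
# UUID in my fstab identifies RAID metadata, not a mountable filesystem.
echo "$blkid_uuid" | tr -d '-'
echo "$mdadm_uuid" | tr -d ':'
```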
Here's the error I get:
sudo mount /mnt/raid
mount: /mnt/raid: /dev/sdb already mounted or mount point busy.
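(The "already mounted or mount point busy" message suggests something else has claimed /dev/sdb, presumably the md driver. Commands like these would show how the disks are currently stacked and whether anything sits at the mount point:)

```shell
# Is anything already mounted at /mnt/raid?
findmnt /mnt/raid

# How is each disk stacked (raid member -> md device -> filesystem)?
lsblk -o NAME,TYPE,FSTYPE,UUID,MOUNTPOINT
```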
I tried mounting the md device directly, but still no dice:
sudo mount -t ext4 /dev/md0 /mnt/raid
mount: /mnt/raid: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
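(To narrow down the "bad superblock" error, these would show what is actually on the assembled device, whether an ext4 superblock, a partition table, LVM, or nothing; a diagnostic sketch, not something I've run yet:)

```shell
# Identify what the assembled array actually contains
sudo file -s /dev/md0

# If it really is ext4, this prints the superblock header
sudo dumpe2fs -h /dev/md0

# Check whether the filesystem lives on a partition (e.g. /dev/md0p1)
# rather than on /dev/md0 directly
sudo lsblk /dev/md0
```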
RAID detail:
sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Nov 24 05:49:18 2021
        Raid Level : raid5
        Array Size : 23441682432 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813894144 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Nov 26 02:22:04 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : homeserver:0  (local to host homeserver)
              UUID : 202483d5:808d9a85:5361c119:c4bdbda4
            Events : 9306

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
The weird part is that dmraid comes up with nothing:
sudo dmraid -r
no raid disks
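(That may be expected, as far as I can tell: dmraid only reports BIOS/firmware "fakeRAID" sets handled by device-mapper, while mdadm software arrays are reported elsewhere:)

```shell
# mdadm software RAID status lives here, not in dmraid
cat /proc/mdstat

# Per-member superblock details for the md array
sudo mdadm --examine /dev/sd[bcde]
```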
I checked the SMART status on each disk, just in case one was going bad, and they all pass.
I don't think LVM is involved (unless I'm missing something). I've scoured the internet and can't find an answer. Any suggestions on what I might be missing?
Running Ubuntu Server 20.04.3 LTS.