Questions tagged as ['mdadm']
I want to try to create a RAID array in Webmin, but I get
The kernel RAID status file /proc/mdstat does not exist on your system. Your kernel probably does not support RAID.
When I run the command zcat /proc/config.gz | grep 'RAID', I get this output:
$ sudo zcat /proc/config.gz | grep 'RAID'
# CONFIG_RAID_ATTRS is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_MEGARAID_NEWGEN ...
I need to use RAID on my NVIDIA Jetson TX2 server machine. I started in Webmin to create the RAID configuration and got this error:
The kernel RAID status file /proc/mdstat does not exist on your system. Your kernel probably does not support RAID.
After that, I tried to install mdadm on my Ubuntu server machine and got this error as well.
What can I do?
Ubuntu 18.04 LTS, NVIDIA Jetson TX2
Kernel 4.9.125-tegra
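The config output above suggests the tegra kernel was built without the md (software RAID) driver at all, which is why /proc/mdstat is missing. A minimal sketch of what to look for, run here against an illustrative sample config fragment rather than the Jetson's real one:

```shell
# The md driver needs CONFIG_MD, CONFIG_BLK_DEV_MD and at least one
# CONFIG_MD_RAID* personality. The sample config below is illustrative,
# not taken from the Jetson.
cat <<'EOF' > /tmp/sample-config
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_RAID1=m
# CONFIG_MD_RAID456 is not set
EOF
grep -E 'CONFIG_(MD=|BLK_DEV_MD|MD_RAID)' /tmp/sample-config
```

If these options are absent from the real /proc/config.gz, the fix is rebuilding the kernel (or building md as a module) for the TX2; no Webmin or mdadm setting can work around a kernel without md support.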
I used mdadm to create a RAID 1 array with two disks. After some time I reinstalled the OS (without wiping those two disks, obviously). What I did to assemble the two drives was run the following commands:
sudo mdadm --assemble --scan
sudo mdadm --assemble /dev/md0
sudo mount /dev/md0 /mnt/md0
Now if I run sudo mdadm --detail /dev/md0,
it seems that everything works fine:
/dev/md0:
Version : 1.2
...
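To make a reassembled array like this come back on its own after the next boot, the usual configuration step on Ubuntu is to record it in mdadm.conf and fstab. A sketch (paths are the Debian/Ubuntu defaults; the ext4 fstab entry is an assumption about the filesystem in use):

```shell
# Record the array definition so it is assembled at boot,
# then rebuild the initramfs so early boot sees it.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# Mount it at boot as well; 'nofail' avoids hanging boot if the array is absent.
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```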

Given three identical drives partitioned for an Ubuntu 20.04 install as follows (swap and /home are on separate LUKS volumes to share with another Linux install on sda3/4):
| Partition | Format | Mount Point |
|---|---|---|
| /dev/sda1 | ext4 | /boot |
| /dev/sda2 | luks/ext4 | / |
| /dev/sda3 | (Unused) | |
| /dev/sda4 | (Unused) | |
| /dev/sda5 | luks/swap | swap |
| /dev/sdb | RAID 1 Member | |
| /dev/sdc | RAID 1 Member | |
| /dev/md0p1 | luks/ext4 | /hom ... |
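For the LUKS-on-md0p1 volume in the last row, the boot-time plumbing is typically two config entries. A sketch (the mapper name home_crypt and the keyless unlock are assumptions, not from the post):

```
# /etc/crypttab — unlock the LUKS volume on the array at boot
home_crypt  /dev/md0p1  none  luks

# /etc/fstab — mount the opened volume
/dev/mapper/home_crypt  /home  ext4  defaults  0  2
```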

I have a Supermicro 847 full of 14 TB drives. I have Ubuntu Desktop 20.04 installed, and the server has an LSI 9211-8i HBA (UNRAID/FreeNAS controller). I have been trying to group the drives into big volumes, but without success. I wanted to set the drives up as JBOD, so that if one fails I would only lose the data on that one, but I'm not sure how to do this. I ended up trying to set up RAID 0 with mdadm, which wo ...
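The usual mdadm-free way to get that failure-isolation is one filesystem per disk plus a union mount such as mergerfs, so a dead drive only takes its own files with it. A sketch as fstab fragments (mount points, partition names, and the mergerfs pool are assumptions):

```
# /etc/fstab — each drive stands alone; losing one loses only its own files
/dev/sdb1   /mnt/disk1  ext4            defaults,nofail       0  2
/dev/sdc1   /mnt/disk2  ext4            defaults,nofail       0  2
# mergerfs presents the per-disk mounts as one big pool
/mnt/disk*  /mnt/pool   fuse.mergerfs   defaults,allow_other  0  0
```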

I had previously set up a RAID 0 array with four 14 TB drives, and it was getting full.
Yesterday I decided to add one more drive to it and temporarily changed the array to RAID 4.
I heard it was supposed to go back to RAID 0 automatically a day or two later.
Yesterday evening, my computer had an issue with the boot drive and ended up completely frozen; we tried everything, but I had to reinstall Ubuntu ...
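For reference: when mdadm is asked to grow a RAID 0 directly it handles the temporary RAID 4 detour itself, but after a manual conversion to RAID 4 the step back is explicit, not automatic. A sketch of the final step (device name md0 is an assumption):

```shell
# Wait for the reshape onto all five devices to finish first.
cat /proc/mdstat
# Then convert the array back to RAID 0 explicitly.
sudo mdadm --grow /dev/md0 --level=0
```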
I have just re-installed Kubuntu on my machine, and have also installed mdadm to reassemble three RAID arrays that I had previously. I didn't carry over any configuration files for these arrays, but installing mdadm automatically generated a conf file with what seems to be the correct information for the arrays.
Would I just need to run mdadm --assemble --scan
as root to reassemble the arrays? ...
One disk in a two-disk RAID 1 array failed. I added another disk to the array, and resynchronization completed successfully. While the synchronization was running, I removed the failed disk, and the array's filesystem vanished.
So I have two questions:
What happened when I removed the failed disk that caused the existing filesystem to vanish?
Is it possible to recover the filesystem on the array?

I have a weird mdadm situation.
A drive in a RAID 1 array went bad. I shut the machine down, replaced the disk, and ran ddrescue (which succeeded without error) from the old, non-failed drive to the new drive.
Now when I attempt to assemble, I get:
% sudo mdadm --verbose --assemble /dev/md3 /dev/sdg1 /dev/sdf1
mdadm: looking for devices for /dev/md3
mdadm: /dev/sdg1 is identifi ...
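When assembly refuses after cloning a member with ddrescue, the two members' superblocks usually disagree, and comparing them is the first diagnostic. A sketch (device names taken from the post; the grep fields are those printed for 1.x metadata):

```shell
# Mismatched UUIDs, event counts, or device roles explain most
# assembly refusals after a member disk has been cloned.
sudo mdadm --examine /dev/sdg1 /dev/sdf1 | grep -E 'UUID|Events|Device Role|Array State'
```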
Recently I decided to update my home server and add more storage. Previously I had two drives set up with mdadm as RAID 1, and that RAID was mounted to my share folder for Samba. Now that I'm adding two more drives, I created another RAID with those two, but I don't think I'm able to mount two RAIDs to one single folder for easy sharing without something like MergerFS to create one logical storage ...

I've seen references to udev, and also rc.local, which both seem a little kludgy to me. Are these really the only recommended ways to set this permanently?
I've been looking through other threads here trying to figure this out, but I can't work out how to change my mdadm array's ownership. I've tried a bunch of commands and I just can't get it. I am very new to Ubuntu. My array is mounted at /mnt/raid0 and the device is /dev/md0. My user is storageserver5.
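For what it's worth, ownership is a property of the mounted filesystem, not of the md device, so the change goes on the mount point. A sketch using the names from the post (assumes the array is already mounted at /mnt/raid0):

```shell
# Give the user ownership of everything on the array.
sudo chown -R storageserver5:storageserver5 /mnt/raid0
```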
I'm looking to add a new drive to an existing NAS server, and would appreciate a second look at my plan before I pull the trigger. Part of the challenge is the new disk will be added to an existing RAID, which itself is encrypted, and also makes up a physical-volume in an existing volume-group.
Current Setup:
There is a single logical-volume ('media') and volume-group ('raid') made up of two physi ...

I have two drives (HDDs) in a RAID 1 array. The issue is that after I start the system, sometimes they are mounted as desired and sometimes they are not. As can be seen in the lsblk output in the pictures, one time they are listed as sdc->sdc1->md127 and sdd->sdd1->md127, and the other time (the desired one, I believe) they are listed as sdc->sdc1->md0 and sdd-> ...
I just bought a LaCie 2big RAID drive. It comes out of the box configured for RAID 0, but I want to set it to RAID 1 to get redundant backups (as well as reformat it to ext4).
I'm not sure what program I should use to set the hardware RAID to RAID 1. LaCie offers tools, but only for macOS and Windows.
The drive will mount:
ralmond@cherry:~$ sudo mdadm --examine /dev/sdk
[sudo] password for ralmond:
/dev/sd ...
Is there a way to ionice and renice the RAID consistency check on Ubuntu 20.04? I found how to adjust the schedule here: mdadm raid 5 parity check control / new behaviors in Ubuntu 20.04
I have a 25 TB RAID 6 array which takes 18 hours to check, so I need the system to be usable during that time. Ubuntu 18.04 had an --idle option for checkarray run from /etc/cron.d/mdadm, but I have not found anything ...
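Besides scheduling, the check's impact can be throttled directly through the md sysctls and the resync kernel thread. A sketch (the array name md0 is an assumption, and ionice on a kernel thread may be ignored on some kernels):

```shell
# Cap the background check/resync rate (KiB/s) so the system stays responsive.
echo 10000 | sudo tee /proc/sys/dev/raid/speed_limit_max
# Deprioritize the resync kernel thread for md0, if it is running.
pid=$(pgrep -f 'md0_resync') && sudo renice -n 19 -p "$pid" && sudo ionice -c3 -p "$pid"
```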
The /dev/nvme drives are allocated to a ZFS pool, while /dev/sda and /dev/sdb are in software RAID 1. How can I remove this software RAID without any data loss and, if possible, without a reboot?
Disk /dev/nvme2n1: 3.5 TiB, 3840755982336 bytes, 7501476528 sectors
Disk model: SAMSUNG MZQLB3T8HALS-00007
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / ...
I am experiencing a strange problem with the update that was shipped yesterday (Linux 5.4.0-77). The machine is stuck in the update at:
update-initramfs: Generating /boot/initrd.img-5.4.0-77-generic
It seems to be reading all the disks very slowly; the mdadm and later grub-probe processes ran overnight, spending several hours of CPU time. Any hints on what is going on?
I ran into a huge issue. I planned to replace the motherboard in my HTPC; everything was assembled, but it turned out that my new motherboard is faulty and I had to put back the old one. My old Ubuntu 20.04.2 boots, and all drives are mounted except the RAID 1 array (sdc and sdd). I figured out that the new motherboard's UEFI BIOS probably deleted the superblocks from the RAIDed HDDs. I checked these:
blkid
says for both h ...
My system is not retaining my RAID config after a reboot. I have checked fstab and mdadm.conf, and everything looks right.
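A common cause on Ubuntu is that the array definition is missing from the initramfs even when mdadm.conf itself looks right, so the boot environment assembles nothing. Regenerating it after any conf change is worth trying (a sketch):

```shell
# Rebuild the initramfs so early boot sees the current mdadm.conf.
sudo update-initramfs -u
```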
I have set up a local server with an old laptop, using RAID 1 mirroring for reliability: 2 × 1 TB 2.5" SATA HDDs. I used the built-in Intel RST functionality. Only the Ubuntu Desktop version recognized the Intel RAID configuration (I don't know why the Server version didn't).
Anyway, while mounting everything back again after the first setup, unluckily a cable didn't get connected properly. Ev ...
I accidentally erased the files on my mdadm RAID: I created a Docker container and mapped my RAID into it, and after that all the files on the RAID disappeared. The disks are not currently being written to or read. How can I recover my files? Unfortunately I don't have a backup. I am also open to any suggestions, even attaching the disks to a Windows machine. This is my Docker Compose config fil ...