tl;dr: I tried to grow my RAID array by adding two 10 TB disks to an existing 4x4 TB RAID 5 array. The system disk (not part of the RAID) failed shortly after the grow finished, and after recovering the system the new disks are no longer detected as md members.
The question is: is my data gone? If not, how do I get the RAID to rediscover the data on the 10 TB drives?
Longer version:
I have a home NAS built on a Debian system: a 12 TB array (four 4 TB drives, with one drive's worth of parity) using software RAID (mdadm), encrypted with LUKS. I recently bought two 10 TB disks to add to the array and followed the steps below (note that the disk labels change between this list and the next one):
parted -s -a optimal /dev/sde mklabel gpt
# new disk 1 (this step was probably not necessary and may have actually messed me up)
parted -s -a optimal /dev/sdg mklabel gpt
# new disk 2 (this step was probably not necessary and may have actually messed me up)
$ sudo parted /dev/sde
GNU Parted 3.5
Using /dev/sde
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA WDC WD101EFBX-68 (scsi)
Disk /dev/sde: 10.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
(parted) mkpart
Partition name? []? ext4
File system type? [ext2]? ext4
Start? 1
End? -1
(parted) quit
Information: You may need to update /etc/fstab.
- Same steps for /dev/sdg
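For reference, the non-interactive equivalent of that parted session (one whole-disk GPT partition named/typed ext4) would be roughly the following; the 1MiB/100% bounds are my approximation of the 1 / -1 I typed interactively:
parted -s -a optimal /dev/sde mkpart ext4 ext4 1MiB 100%
parted -s -a optimal /dev/sdg mkpart ext4 ext4 1MiB 100%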
sudo mdadm --add /dev/md0 /dev/sde
sudo mdadm --add /dev/md0 /dev/sdg
sudo mdadm --grow --raid-devices=6 /dev/md0
- wait over a day
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdg[6] sde[5] sdf[2] sdb[0] sdd[4] sdc[1]
11720659200 blocks super 1.2 level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
[===============>.....] reshape = 78.7% (3077233408/3906886400) finish=183.4min speed=75369K/sec
bitmap: 0/30 pages [0KB], 65536KB chunk
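Rather than re-running cat /proc/mdstat by hand for snapshots like the one above, the reshape progress can also be watched like this (just a sketch, assuming watch(1) is installed and the md0 name from above):
watch -n 60 cat /proc/mdstat
mdadm --detail /dev/md0 | grep -i reshape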
- wait overnight
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdg[6] sde[5] sdf[2] sdb[0] sdd[4] sdc[1]
19534432000 blocks super 1.2 level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
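For context on the stack: the md array holds a LUKS container with the filesystem inside it, so after the reshape the usual next step would have been to grow those layers as well. A rough sketch only, assuming ext4 and a hypothetical mapper name nas_crypt:
cryptsetup resize nas_crypt        # hypothetical mapper name: extend the LUKS mapping to the new md0 size
resize2fs /dev/mapper/nas_crypt    # then grow the ext4 filesystem to fill the mapping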
- come back to the computer and see kernel errors related to SATA
- panic
- buy a new hard drive
- unplug all RAID drives
- install the latest Debian on the new drive
- shut down, plug the RAID drives back in
- the RAID can't find superblocks on the new (10 TB) drives
smartctl examples:
root@nas2:/home/myusername# smartctl -i /dev/sde
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.0-5-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD40EFRX-68N32N0
Serial Number: WD-WCC7K0PT8X1X
LU WWN Device Id: 5 0014ee 266680715
Firmware Version: 82.00A82
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5319
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Dec 7 08:47:55 2022 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
root@nas2:/home/myusername# smartctl -i /dev/sdf
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.0.0-5-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD101EFBX-68B0AN0
Serial Number: VHG5Y0DM
LU WWN Device Id: 5 000cca 0c8c2b2c4
Firmware Version: 85.00A85
User Capacity: 10,000,831,348,736 bytes [10.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database 7.3/5319
ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Dec 7 08:47:57 2022 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
root@nas2:/home/myusername#
mdadm examples:
root@nas2:/home/myusername# mdadm -E /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 8fdcb089:a833ef1d:b160886d:412e7534
Name : nas2:0 (local to host nas2)
Creation Time : Sun Jun 2 07:39:33 2019
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 7813774256 sectors (3.64 TiB 4.00 TB)
Array Size : 19534432000 KiB (18.19 TiB 20.00 TB)
Used Dev Size : 7813772800 sectors (3.64 TiB 4.00 TB)
Data Offset : 262912 sectors
Super Offset : 8 sectors
Unused Space : before=262832 sectors, after=1456 sectors
State : clean
Device UUID : 192d4876:e15d5e77:db7f147a:2691029b
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Dec 5 20:48:07 2022
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : a4b60d13 - correct
Events : 171440
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@nas2:/home/myusername# mdadm -E /dev/sdf
/dev/sdf:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
root@nas2:/home/myusername# mdadm -E /dev/sdf1
mdadm: No md superblock detected on /dev/sdf1.
root@nas2:/home/myusername#
dumpe2fs examples:
root@nas2:/home/myusername# dumpe2fs /dev/sde
dumpe2fs 1.46.6-rc1 (12-Sep-2022)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sde
Couldn't find valid filesystem superblock.
/dev/sde contains a linux_raid_member file system labelled 'nas2:0'
root@nas2:/home/myusername# dumpe2fs /dev/sdf
dumpe2fs 1.46.6-rc1 (12-Sep-2022)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdf
Couldn't find valid filesystem superblock.
Found a gpt partition table in /dev/sdf
root@nas2:/home/myusername# dumpe2fs /dev/sdf1
dumpe2fs 1.46.6-rc1 (12-Sep-2022)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdf1
Couldn't find valid filesystem superblock.
/proc/mdstat:
root@nas2:/home/myusername# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sda[1](S) sdc[4](S) sde[2](S) sdb[0](S)
15627548512 blocks super 1.2
unused devices: <none>
mdadm -D:
root@nas2:/home/myusername# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid5
Total Devices : 4
Persistence : Superblock is persistent
State : inactive
Working Devices : 4
Name : nas2:0 (local to host nas2)
UUID : 8fdcb089:a833ef1d:b160886d:412e7534
Events : 171440
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 32 - /dev/sdc
- 8 0 - /dev/sda
- 8 16 - /dev/sdb
mdadm.conf:
root@nas2:/home/myusername# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sd[abcdef]
DEVICE /dev/sd[abcdef]1
DEVICE partitions
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This configuration was auto-generated on Tue, 06 Dec 2022 20:18:27 -0700 by mkconf
ARRAY /dev/md/0 metadata=1.2 UUID=8fdcb089:a833ef1d:b160886d:412e7534 name=nas2:0
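Per the comment at the top of that file, any change to mdadm.conf is supposed to be followed by refreshing the initramfs, i.e.:
update-initramfs -u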
attempting to start the array:
root@nas2:/home/myusername# mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sde
mdadm: /dev/sda is busy - skipping
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdc is busy - skipping
mdadm: /dev/sde is busy - skipping
root@nas2:/home/myusername# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@nas2:/home/myusername# mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sde
mdadm: /dev/md127 assembled from 4 drives - not enough to start the array.
root@nas2:/home/myusername# mdadm --stop /dev/md127
mdadm: error opening /dev/md127: No such file or directory
root@nas2:/home/myusername# mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdd /dev/sdf
mdadm: /dev/sda is busy - skipping
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdc is busy - skipping
mdadm: /dev/sde is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdd
mdadm: /dev/sdd has no superblock - assembly aborted
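For completeness, a sketch of how I can dump the superblock/signature state of every drive in one go (drive letters as in the lsblk output below):
for d in /dev/sd[a-f] /dev/sd[a-f]1; do echo "== $d"; mdadm -E "$d"; done
blkid /dev/sd[a-f] /dev/sd[a-f]1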
lsblk output:
root@nas2:/home/myusername# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
└─md0 9:0 0 0B 0 md
sdb 8:16 0 3.6T 0 disk
└─md0 9:0 0 0B 0 md
sdc 8:32 0 3.6T 0 disk
└─md0 9:0 0 0B 0 md
sdd 8:48 0 9.1T 0 disk
└─sdd1 8:49 0 9.1T 0 part
sde 8:64 0 3.6T 0 disk
└─md0 9:0 0 0B 0 md
sdf 8:80 0 9.1T 0 disk
└─sdf1 8:81 0 9.1T 0 part
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 1.9G 0 part /boot/efi
├─nvme0n1p2 259:2 0 201.7G 0 part /
├─nvme0n1p3 259:3 0 18.6G 0 part /var
├─nvme0n1p5 259:4 0 10M 0 part /tmp
└─nvme0n1p6 259:5 0 243.5G 0 part /home