
RAID5 compatibility: zeroed superblock: WD Red 8TB drives: WD80EFAX-68LHPN0 and 2x WD80EFAX-68KNBN0


Summarizing the problem: I have been scraping by with JBOD for years, but finally need a real 'micro data center'. I bought 3 drives for my centos8-stream box over a couple of months; I have heard it can be both good and bad to get drives from the same lot number. They were all WD Red 8TB drives: WD80EFAX*. But the devil is in the details: the first one was the helium-filled WD80EFAX-68LHPN0, manufactured in July 2019, while the later two, bought on sale, were air-filled WD80EFAX-68KNBN0 units from 2020. The cases looked different, but I proceeded anyway, since they were the same major version and most retailers don't even list or differentiate the rest of the model number. Unfortunately my first attempts are not going well, and sure enough it is the lone helium-filled drive that won't re-join the mdadm RAID array.

Details and any research: I am using this as a storage/NAS box, not a webserver, for now. I don't need it to be available at boot; in fact, I might not want that possibility, depending on how the computer is being used that day. I might need to run out for a bit with no trusted partner physically close, so I need to be able to understand and adjust the configuration at any time. Not that I am that cool, but it would suck to have this unreliable array writing at 1/10 the speed at the worst possible time. Anyway,

I created my array as such: mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

I skipped setting up partitions, as I'd heard this isn't needed if you just want one contiguous volume you'll never change, and can even make things more difficult. It seems to work fine except for the one drive that won't associate with the array on its own.
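
For reference, these are the non-destructive checks I lean on to see whether each member actually got a superblock (device letters as in the create command above):

```
mdadm --examine /dev/sdb /dev/sdc /dev/sdd   # per-member superblock dump: metadata version, array UUID, slot
mdadm --detail /dev/md0                      # array-level view: state and member slots
cat /proc/mdstat                             # the kernel's view of the array and initial sync progress
```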

cryptsetup luksFormat /dev/md0, to format the RAID array as a LUKS container
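
i.e. something like this, where crypt-storage is just the name I chose for the opened container (it matches the lsblk output further down):

```
cryptsetup luksFormat /dev/md0           # prompts for confirmation and a passphrase
cryptsetup open /dev/md0 crypt-storage   # creates /dev/mapper/crypt-storage
```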

Opened it, then set up the rest of the stack. For future flexibility, I chose LVM: pvcreate /dev/mapper/devicename

verify successful creation with: pvs (looks good)

create the volume group with vgcreate storage /dev/mapper/devicename, and then put the LV on that: lvcreate -n myLVMname -L 16T storage

Actually I had to use 14.55T; at this size the discrepancy between TB and TiB grows large enough that 16T no longer fits (plus some space goes to metadata). Anyway, I can see it's all good with pvs, vgs and lvs.
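
Consolidated with one consistent set of placeholder names (above I mixed devicename/devname and storage/storageName), the LVM layer was roughly:

```
pvcreate /dev/mapper/crypt-storage       # the opened LUKS container is the PV
vgcreate storage /dev/mapper/crypt-storage
# 2 x 8 TB of usable RAID5 space is 16 x 10^12 bytes ≈ 14.55 TiB, so -L 16T doesn't fit;
# -l 100%FREE would sidestep the TB-vs-TiB math entirely
lvcreate -n myLVMname -L 14.55T storage
pvs && vgs && lvs                        # sanity check
```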

Time to make the FS:

mkfs.ext4 /dev/mapper/devname (the LV), and at this point lsblk shows each of my drives like so:

```
sde                     // disk
└─md0                   // our raid
  └─crypt-storage       // enc container
    └─storage-https     // mountable vol
```

with identical output for /dev/sdd and /dev/sdc. It is opened and mounted now.
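
Roughly, the manual bring-up after a reboot looks like this (the mount point is a placeholder; in practice the desktop mounts it under /run/media as noted below):

```
mdadm --assemble --scan                         # assemble md0 from the member superblocks
cryptsetup open /dev/md0 crypt-storage          # unlock the LUKS container
vgchange -ay storage                            # activate the VG that lives inside it
mount /dev/mapper/storage-https /mnt/storage    # placeholder mount point
```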

Opened /etc/fstab; I can see /dev/mapper/devicename is mounted at /run/media/username/mountpoint.

If I wanted it to mount at boot, at this point I would need to do some extra things. But I don't; I want to mount it myself as needed. For good measure, I created mdadm.conf with mdadm --verbose --detail --scan > /etc/mdadm.conf, though it should be able to reassemble the array from the superblocks anyway, right?
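
For reference, what the scan drops into the conf file is a single ARRAY line, something like this (hostname and UUID are placeholders):

```
# /etc/mdadm.conf -- written by: mdadm --verbose --detail --scan > /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=myhostname:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```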

If I did want it to mount at boot, I would need to update /etc/crypttab with the UUID of my RAID device /dev/md0, rebuild my initramfs with dracut -f -v, and update GRUB with grub2-mkconfig -o /etc/grub2-efi.cfg (on CentOS the config is usually linked to /boot/efi/EFI/centos/grub.cfg). However, I did not update GRUB or crypttab, since I didn't think I needed to and don't want to if I don't have to; all of this should happen well after boot, right?
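
For completeness, my understanding is that boot-time unlocking would only need something like this (names and UUID are placeholders), which I have deliberately not done:

```
# /etc/crypttab -- "noauto" would keep the unlock manual even with an entry present
crypt-storage  UUID=<UUID of /dev/md0 from blkid>  none  noauto

# then rebuild the initramfs and the GRUB config
dracut -f -v
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```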

I rebooted, but lsblk shows the problem (trimmed):

```
sdc
└─sdc1
sdd
└─md0
sde
└─md0
```

It seems that the helium-filled WD80EFAX-68LHPN0 (/dev/sdc here) is not being put into the md0 array (even before the LVM-on-LUKS layers). The kernel thinks there is a partition 'sdc1'. Is this due to a problem with how I configured it, or to different hardware or firmware? Like I mentioned, retailers often won't list those last characters of the model number, so I would hope all WD80EFAX* drives work together? WD doesn't seem to provide firmware downloads for plain hard drives at all: https://community.wd.com/t/firmware-wdc-wd80efzx-68uw8n0/218166/2 Or is duckduckgo just not finding it? (Sometimes it can beat google, and certainly the WD site search.)
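
In case it helps, this is the kind of non-destructive probing I can run on the odd drive out (smartctl is from smartmontools; device letters as above):

```
wipefs /dev/sdc                 # just lists filesystem/partition-table signatures; -a would erase them, so omit it
blkid -p /dev/sdc /dev/sdc1     # low-level probe of what the kernel thinks is there
mdadm --examine /dev/sdc        # is there any md superblock left at all?
smartctl -i /dev/sdc            # model and firmware revision, to compare against the -68KNBN0 pair
```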

I have taken apart some logic boards before and always wanted to mess with the firmware, but not while all of my data has just been moved (this was the backup) to this encrypted stack, with some previous drives already wiped to be used as large thumbdrives. Shame on me for not having another backup; it is still readable, but I am also resource constrained and trying to consolidate these 5+ JBOD drives so I can organize the never-ending increase of data in my life and have time for more volunteer/business efforts. At least I have now learned, to some degree, the dreaded process of recovering/working on an encrypted RAID array.

What I've tried: I can stop the md0 device (mdadm --stop /dev/md0) and add the drive back with mdadm --add /dev/md0 /dev/sdc (not sdc1; I wouldn't be adding a partition). Not --re-add, and I'm afraid of --assume-clean (I've heard not to use it unless you're an expert). But then it needs a two-day resync every time I boot the machine, which is needless wear, not to mention downtime; not good for production or really anything. mdadm --detail /dev/md0 shows the state is indeed 'degraded', with one device 'removed'. I can still unlock the remaining two drives, mount, and read the data, and writing seems slower, but my data is now hanging off an encrypted cliff if either of the two remaining drives fails.

So I (re)added the same drive and let it sync, watching /proc/mdstat. I made sure I updated my initramfs with dracut -f -v (though I shouldn't need to?), copied all the drive info from blkid just in case, and rebooted. Same issue: /dev/sdc (the older helium drive with the same major model number) is not part of md0.
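
Consolidated, that re-add cycle looks roughly like this each time (device letters as they happen to come up that boot):

```
mdadm --detail /dev/md0             # confirm: degraded, one member removed
mdadm --add /dev/md0 /dev/sdc       # re-attach the dropped drive; kicks off a full resync
watch cat /proc/mdstat              # ~2 days of resync progress
mdadm --detail /dev/md0             # should end up clean, all three members active
blkid > ~/blkid-before-reboot.txt   # snapshot of device/UUID mapping, just in case
dracut -f -v                        # rebuild the initramfs (shouldn't strictly be needed)
```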

Using mdadm -v --assemble /dev/md0 --uuid=<info from /etc/mdadm.conf>, it scans and, curiously, reports: no RAID superblock found on /dev/sdc or /dev/sdc1. For /dev/sdc it expected magic a92... but got 00000000, and for /dev/sdc1 (the "partition" that shouldn't be there, not the device) it expected the same magic but got 00000401.

The good drives: mdadm: /dev/sdf is identified as a member of /dev/md0, slot <1 or 2>
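
If it helps anyone answering: as I understand it, with metadata 1.2 the superblock sits 4 KiB into the device and the magic is a92b4efc (stored little-endian, so the raw bytes read fc 4e 2b a9), so a raw peek would be something like this (the GPT check assumes 512-byte logical sectors):

```
mdadm --examine /dev/sdc            # full superblock dump if one exists
hexdump -C -s 4096 -n 64 /dev/sdc   # raw look at the metadata-1.2 superblock offset
hexdump -C -s 512 -n 16 /dev/sdc    # "EFI PART" here would mean a GPT header is sitting on the raw disk
```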

So if it is something I did in the configuration, I suspect the problem is here. Some posts elsewhere suggest something might be wiping the superblock, but I am not aware of anything in my shutdown or reboot that would do this. I have not done an evil-maid audit in some time, nor do I want to be required to do one, and I would be using the tools and notes on this machine to do so. It's probably not that, but it's possible.

I could still update GRUB or my /etc/crypttab, but it really seems I shouldn't need to; this is basically a giant RAID thumbdrive I'll sometimes need to hook up to the rest of the system. It seems something else screwy is going on relating to superblocks, hardware, or firmware, especially since there are visible differences in the case and the board between the helium drive and the two others that work fine (unfortunately).

Any ideas? Like:

  • better examine my startup programs? (seems unlikely, but)
  • some way to flash new firmware to the drives? Though I would prefer OEM or open methods if possible. They present the same block size, etc., right, so this shouldn't matter anyway?
  • buy more drives until I find a match or go scour ebay and guess?? sounds expensive and wasteful
    • I could try guessing at the local computer store, but what is someone to do who doesn't have one of these?
    • what would I do with the helium drive after if it's useless as a backup drive for the array?
  • post on SuperUser or somewhere else? This place seemed great

not interested in:

  • booting Windows (it doesn't play nice with Linux, and it hides the override for my unsigned project drivers behind a setting buried in the BIOS, because f*ck M$)
    • though I do have an 8.1 box for M$ Teams in need of recovery key, and some HDD cradles if I must
  • some sketchy closed-source hard drive utility even if handcoded ASM by Jesus Christ himself
  • getting rid of RAID. I know there are horror stories, especially once you add encryption (arguably not needed for a static system, but that depends on your threat environment), but there are also tons of companies betting their business on RAID. I would like to learn from this one and build more, now that this incident has forced me to learn a bunch about them.

There seem to be enough open tools and knowledge out there to figure this out; it isn't rocket science, but it is a ton of moving parts working together that I might not be completely familiar with. After a few weeks of tinkering, crawling posts, and RTFMing, I thought I should ask. I hope I have provided enough detail without dragging it out.
