
Debian/Linux weird mdraid state


Why is md1's state UU, while md0 is U_ and md2 is _U?

nvme1n1 is broken here.

Can I safely replace nvme1n1 now?

You can see md2's state: nvme0n1p3 seems removed, but nvme1n1p3 seems active. How can I safely replace the broken disk nvme1n1 here? What happened here?

Can I just clone /dev/nvme1n1p3 to /dev/nvme0n1p3? Or how can I simply resync the md2 array? /dev/nvme0n1p3 holds a two-month-old state, and the current OS state is on /dev/nvme1n1p3.
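(If a plain resync is possible, I assume the mdadm way would be roughly the following; untested, device names from my setup above:)

# sketch: put the stale member back into md2 and let md resync it
# from the current data on nvme1n1p3; if --re-add is refused, a
# --zero-superblock on nvme0n1p3 followed by a plain --add may be needed
mdadm /dev/md2 --re-add /dev/nvme0n1p3
cat /proc/mdstat    # watch the recovery progress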

Edit: For now I'm going to copy the partition:

dd if=/dev/nvme1n1p3 of=/dev/nvme0n1p3

Should I rebuild the RAID afterwards, or can I just replace the faulty disk?
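(For reference, my understanding of the usual mdadm swap procedure; treat this as a sketch rather than a tested recipe, device names taken from my arrays above:)

# only after md2 has finished resyncing onto nvme0n1p3:
mdadm /dev/md1 --fail /dev/nvme1n1p2 --remove /dev/nvme1n1p2
mdadm /dev/md2 --fail /dev/nvme1n1p3 --remove /dev/nvme1n1p3
# physically replace nvme1n1, then replicate the partition table from the
# good disk onto the new one (assuming GPT) and randomize the new GUIDs
sgdisk -R /dev/nvme1n1 /dev/nvme0n1
sgdisk -G /dev/nvme1n1
# add the new partitions so md0/md1/md2 rebuild onto them
mdadm /dev/md0 --add /dev/nvme1n1p1
mdadm /dev/md1 --add /dev/nvme1n1p2
mdadm /dev/md2 --add /dev/nvme1n1p3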

cat /proc/mdstat output

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
      523264 blocks super 1.2 [2/2] [UU]

md0 : active raid1 nvme0n1p1[0]
      33520640 blocks super 1.2 [2/1] [U_]

md2 : active raid1 nvme1n1p3[1]
      465894720 blocks super 1.2 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>

mdadm --detail output:

/dev/md0:
           Version : 1.2
     Creation Time : Sun Mar 15 21:50:14 2020
        Raid Level : raid1
        Array Size : 33520640 (31.97 GiB 34.33 GB)
     Used Dev Size : 33520640 (31.97 GiB 34.33 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Thu Jun 17 14:43:33 2021
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : rescue:0
              UUID : 41a6fa02:cafe993c:ba95e089:69e1cee9
            Events : 3691713

    Number   Major   Minor   RaidDevice State
       0     259        1        0      active sync   /dev/nvme0n1p1
       -       0        0        1      removed
/dev/md1:
           Version : 1.2
     Creation Time : Sun Mar 15 21:50:15 2020
        Raid Level : raid1
        Array Size : 523264 (511.00 MiB 535.82 MB)
     Used Dev Size : 523264 (511.00 MiB 535.82 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Jun 17 06:26:55 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : rescue:1
              UUID : 28bb218f:54e5dcee:56f85d57:5c4577ba
            Events : 131

    Number   Major   Minor   RaidDevice State
       0     259        2        0      active sync   /dev/nvme0n1p2
       1     259        6        1      active sync   /dev/nvme1n1p2
/dev/md2:
           Version : 1.2
     Creation Time : Sun Mar 15 21:50:15 2020
        Raid Level : raid1
        Array Size : 465894720 (444.31 GiB 477.08 GB)
     Used Dev Size : 465894720 (444.31 GiB 477.08 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Jun 17 14:52:09 2021
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : rescue:2
              UUID : 3340d601:90ff36ca:da246d8d:c26b994f
            Events : 46166134

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1     259        7        1      active sync   /dev/nvme1n1p3

/dev/nvme1n1 is broken:

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
- NVM subsystem reliability has been degraded

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning:                   0x04
Temperature:                        59 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    149%
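(For reference, I believe that SMART block came from smartctl, roughly:)

smartctl -a /dev/nvme1n1    # overall health plus the NVMe health log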
Nikita Kipriyanov:
In general, it is *not safe* to clone an md component device. I'd rather create a *new* (degraded) array and clone the *contents* of the old array, to be sure I don't clone superblocks. Please add `mdadm --examine /dev/nvme[01]n1p3` output (which shows the decoded md superblocks from those devices); `blkid` and `lsblk` outputs wouldn't hurt either.
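A minimal sketch of that suggestion (the md3 name is hypothetical, and this is untested):

# diagnostics requested in the comment above
mdadm --examine /dev/nvme[01]n1p3    # decoded md superblocks
blkid
lsblk
# the "new degraded array, clone contents" idea: build a fresh array on the
# good disk's partition and copy the array contents, not the raw partition
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/nvme0n1p3 missing
dd if=/dev/md2 of=/dev/md3 bs=1M status=progress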