Score:2

mdadm partitions gone after reboot


I had a RAID 6 setup in the Cockpit UI with multiple partitions. There was one partition in particular that I wanted to extend from 10 TB to 11 TB using the available space, so I attempted it on /dev/md127p6 with "growpart /dev/md127p6 1". Afterwards I noticed I could no longer access some of the mount points under this array (two of them, actually).

At that point I decided to restart (I checked /proc/mdstat first and it wasn't doing anything). Once the server came back up, all of the partitions on this RAID were gone.

Once the server was back online I also noticed the size of the RAID was different (it went from 189 TiB to 143 TiB). Obviously I screwed something up, but I'm wondering if anyone has any ideas before I start over.

mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Mon May 17 20:04:04 2021
     Raid Level : raid6
     Array Size : 153545080832 (146432.00 GiB 157230.16 GB)
  Used Dev Size : 11811160064 (11264.00 GiB 12094.63 GB)
   Raid Devices : 15
  Total Devices : 15
    Persistence : Superblock is persistent

 Intent Bitmap : Internal

   Update Time : Mon Aug  2 20:05:13 2021
         State : clean
 Active Devices : 15
Working Devices : 15
Failed Devices : 0
 Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 4K

Consistency Policy : bitmap

          Name : storback:backups
          UUID : c8d289dd:2cb2ded3:cbcff4cd:1e7367ee
        Events : 150328

Number   Major   Minor   RaidDevice State
   0       8       32        0      active sync   /dev/sdc
   1       8       48        1      active sync   /dev/sdd
   2       8       64        2      active sync   /dev/sde
   3       8       80        3      active sync   /dev/sdf
   4       8       96        4      active sync   /dev/sdg
   5       8      112        5      active sync   /dev/sdh
   6       8      128        6      active sync   /dev/sdi
   7       8      144        7      active sync   /dev/sdj
   8       8      160        8      active sync   /dev/sdk
   9       8      192        9      active sync   /dev/sdm
  10       8      176       10      active sync   /dev/sdl
  11       8      208       11      active sync   /dev/sdn
  12       8      224       12      active sync   /dev/sdo
  13       8      240       13      active sync   /dev/sdp
  14      65        0       14      active sync   /dev/sdq
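
For what it's worth, the numbers in that output line up with the shrunken size: a 15-disk RAID 6 has 13 data disks, and

13 × 11264 GiB (Used Dev Size) = 146432 GiB ≈ 143 TiB

which matches the reported Array Size. So it looks as though the per-device size got clamped down; the original 189 TiB would work out to roughly 14.5 TiB of used space per disk.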
Michael Hampton commented:
growpart expects a block device (which contains a partition table) as its first argument, not a partition.
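
In other words, growpart wants the whole-disk device plus a partition number. Assuming the goal was to grow the sixth partition on the array, the invocation would presumably have been the following, rather than targeting the partition device directly:

growpart /dev/md127 6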
Score:1

Well, since this wasn't a super important device, I tried to wing it and grow the RAID to the maximum size, since the total size was incorrect. It almost appeared as though the RAID size had shrunk by the amount of free, unpartitioned space I had previously.

I ran this command and all the partitions came back after a reboot:

mdadm --grow /dev/md127 -z max
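
In case it helps anyone else: I'd confirm the new size with mdadm --detail /dev/md127 afterwards, and the resize I originally wanted should then work by pointing growpart at the array device instead of the partition, followed by growing the filesystem (resize2fs here assumes ext4; an XFS filesystem would need xfs_growfs on the mount point instead):

growpart /dev/md127 6
resize2fs /dev/md127p6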
