
mdadm raid1 won't resize | --grow gives "unchanged at 3896741888K"


I've been trying to resize (--grow) this RAID array for a week. I know the answer is simple and probably staring me in the face, but by now I'm sleep-deprived and lost.

The array is RAID1 and used to consist of two 2TB hard drives and was not configured by me. The client ran out of space and I replaced the drives with two 4TB drives. I copied over the partition tables with sgdisk or something like that, but later removed some partitions and made the one in question bigger.
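For what it's worth, the clone-and-resize was roughly along these lines (just a sketch; sdOLD/sdNEW are placeholder device names and the end sector is simply taken from the table below, since I no longer have the exact commands):

sgdisk -R=/dev/sdNEW /dev/sdOLD   # copy the GPT from the old drive onto the new one
sgdisk -G /dev/sdNEW              # randomize GUIDs so the two disks don't clash
sgdisk -d 3 /dev/sdNEW            # drop the RAID partition on the new drive...
sgdisk -n 3:3907584:7797653503 -t 3:FD00 /dev/sdNEW   # ...and recreate it bigger (Linux RAID type)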

I've removed one of the drives on purpose so I don't mess up both of them. You may see references to it here and there; that's expected.
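(In case it matters: if the drive was pulled the mdadm way rather than physically, it would have been roughly the usual fail/remove pair, with /dev/sdb3 assumed to be the member taken out:)

mdadm /dev/md1 --fail /dev/sdb3     # mark the member as failed
mdadm /dev/md1 --remove /dev/sdb3   # then drop it from the array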

TL;DR
--grow --size=max doesn't work; it reports unchanged at 3896741888K
--update=devicesize shows the larger size, but --grow still doesn't work

sda and sdb are identical and look like this; sd{a,b}3 is the partition in question.

Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4230A82E-F626-4A32-B4FD-F0A91A30F64C

Device          Start        End    Sectors  Size Type
/dev/sda1        2048    3905535    3903488  1.9G Linux RAID
/dev/sda2     3905536    3907583       2048    1M BIOS boot
/dev/sda3     3907584 7797653503 7793745920  3.6T Linux RAID
/dev/sda7  7797653504 7814037134   16383631  7.8G Linux RAID

As you can see, 3.6 TB or so. Looking good so far. Let's look at the partition itself (again, both drives look the same): mdadm --examine /dev/sda3

/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 49d82293:715e6baf:3f0a3f79:b2089367
           Name : c4root:1
  Creation Time : Wed Apr  2 20:56:22 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7793483776 (3716.22 GiB 3990.26 GB)  #yay (hopes going up)
     Array Size : 3896741888 (3716.22 GiB 3990.26 GB)  #oldsize (as expected)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 3aa675c5:761465e5:886a395d:95eac69d

    Update Time : Mon Oct 25 04:35:56 2021
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 68eed7f7 - correct
         Events : 1021990


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)

Looking good again. The Avail Dev Size is what we want, and the Array Size is where it currently stands. Let's move on to the array info: mdadm --detail /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Wed Apr  2 20:56:22 2014
     Raid Level : raid1
     Array Size : 3896741888 (3716.22 GiB 3990.26 GB)
  Used Dev Size : 3896741888 (3716.22 GiB 3990.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Oct 25 04:35:56 2021
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : c4root:1
           UUID : 49d82293:715e6baf:3f0a3f79:b2089367
         Events : 1021990

    Number   Major   Minor   RaidDevice State
       2       8        3        0      active sync   /dev/sda3
       2       0        0        2      removed

That Array Size is still showing the old size. I read online that you may need to update the device size as you assemble the array. Let's do that.

# mdadm -S /dev/md1
mdadm: stopped /dev/md1
# mdadm --assemble --update=devicesize /dev/md1
Size was 7793483776
Size is 7793483776
mdadm: /dev/md1 has been started with 1 drive (out of 2).

Okay, looks like it knows it has room to grow. Let's try to grow it!

# mdadm --grow --size=max /dev/md1                                                                                                                                 
mdadm: component size of /dev/md1 unchanged at 3896741888K

and the -D details STILL show the old size...

/dev/md1:
        Version : 1.2
  Creation Time : Wed Apr  2 20:56:22 2014
     Raid Level : raid1
     Array Size : 3896741888 (3716.22 GiB 3990.26 GB)
  Used Dev Size : 3896741888 (3716.22 GiB 3990.26 GB)
   Raid Devices : 2
  Total Devices : 1

What in the world am I missing?
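
For anyone answering: a quick way to double-check that the kernel really sees the larger partition (and not a stale size) is to ask the block layer directly, e.g.:

blockdev --getsize64 /dev/sda3    # partition size in bytes, as the kernel sees it
cat /sys/class/block/sda3/size    # the same, in 512-byte sectors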
