Score:0

DRBD with external meta disk


I am attempting to create a failover setup with DRBD, and I have two partitions:

1. /dev/sda4 is set up for the KVM guests I will be creating.
2. /dev/sda5 is for the DRBD metadata.

My config file is below:

resource r0 {
    protocol C;
    startup {
            wfc-timeout  15;
            degr-wfc-timeout 60;
    }
    net {
            cram-hmac-alg sha1;
            shared-secret "SECRET_KEY";
    }
    on Server1 {
            device /dev/drbd0;
            disk /dev/sda4;
            address IP:7788;
            meta-disk /dev/sda5;
    }
    on Server2 {
            device /dev/drbd0;
            disk /dev/sda4;
            address IP:7788;
            meta-disk /dev/sda5;
    }

}

When I run `drbdadm create-md r0` it completes successfully, but it creates /dev/drbd0 on both partitions:

    ├─sda4      8:4    0  7.2T  0 part
    │ └─drbd0 147:0    0  7.2T  1 disk
    └─sda5      8:5    0  399M  0 part
      └─drbd0 147:0    0  7.2T  1 disk

It's my understanding that it should only create it on the meta disk, which is /dev/sda5. The reason I set up the meta disk is to avoid any writing to sda4. Am I correct in that reasoning, or am I missing something?

Score:2

I confirmed this is how the lsblk output looks on AlmaLinux 8.5 when using external metadata with DRBD. The DRBD device is a single virtual block device layered over both partitions in your setup.

It works as expected. If you inspect the block devices with other methods/utilities, you'll see that you only have a single /dev/drbd0, and the metadata partition you created is the size you'd expect.

For example, my configuration using two LVM volumes:

resource r0 {
    protocol C;
    startup {
            wfc-timeout  15;
            degr-wfc-timeout 60;
    }
    net {
            cram-hmac-alg sha1;
            shared-secret "SECRET_KEY";
    }
    on Server1 {
            device /dev/drbd0;
            disk /dev/drbdpool/data;      # 1GiB LVM volume on /dev/sdb
            meta-disk /dev/drbdpool/meta; # 4MiB LVM volume on /dev/sdb
            address 192.168.1.11:7788;
    }
    on Server2 {
            device /dev/drbd0;
            disk /dev/drbdpool/data;      # 1GiB LVM volume on /dev/sdb
            meta-disk /dev/drbdpool/meta; # 4MiB LVM volume on /dev/sdb
            address 192.168.1.12:7788;
    }
}

This shows exactly what I'd expect in /proc/partitions:

# cat /proc/partitions 
major minor  #blocks  name

   8        0   20480000 sda
   8        1    2048000 sda1
   8        2   18430976 sda2
   8       16    8388608 sdb
 253        0       4096 dm-0
 253        1    1048576 dm-1
 147        0    1048576 drbd0

`blockdev --report` looks good too:

# blockdev --report /dev/drbd0
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0      1073741824   /dev/drbd0
# blockdev --report /dev/drbdpool/meta 
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0         4194304   /dev/drbdpool/meta
# blockdev --report /dev/drbdpool/data
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw  8192   512  4096          0      1073741824   /dev/drbdpool/data

This is likely where lsblk is getting confused:

# cat /sys/block/drbd0/size 
2097152
# cat /sys/block/dm-0/size 
8192
# cat /sys/block/dm-1/size 
2097152
# cat /sys/block/dm-0/holders/drbd0/size 
2097152
# cat /sys/block/dm-1/holders/drbd0/size 
2097152
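
In the original poster's setup, the same check can be made directly against the partitions (device names taken from the question):

# lsblk prints the holder's (drbd0's) size beneath each parent device;
# the partitions themselves are still the sizes they were created with:
blockdev --getsize64 /dev/sda5    # metadata partition, ~399M
blockdev --getsize64 /dev/drbd0   # the replicated device, ~7.2T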
Comment (asker):
OK... then I am confused as to why drbd0 is taking up the entire disk of sda4. For some reason it is allocated 7.2T. Why is that, if the metadata is supposed to be on sda5?
Comment (Matt Kereczman):
You configured DRBD to do that by setting `disk /dev/sda4;`. This is how DRBD works: You give an entire disk/partition to DRBD, and then you use the resulting DRBD device just as you would any other disk/partition. I assume you're going to want to format `/dev/drbd0` with whatever filesystem you want to use and `mount /dev/drbd0 /var/lib/libvirt/images`, or wherever you configured libvirt to store your VM images. Since you've mounted the DRBD device, everything you write into that mount point will be replicated to the peer via the DRBD device. DRBD's metadata is on `/dev/sda5`...
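
A minimal sketch of that workflow, assuming ext4 and the default libvirt image path (both are assumptions, not taken from the question):

# On whichever node should be primary, after create-md and `drbdadm up` on both:
drbdadm primary --force r0    # --force is only needed for the very first promotion
mkfs.ext4 /dev/drbd0          # format the DRBD device, never /dev/sda4 directly
mount /dev/drbd0 /var/lib/libvirt/images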
Comment (asker):
OK... that makes sense. I assumed that drbd0 and the VMs would reside side by side on sda4, but what you are saying is that any data I use needs to be on top of drbd0, correct?
Score:-1

Finally figured this out. By putting the volume group on top of drbd0, everything is now working.

I did run into a problem when creating a volume group on the device and had to add a filter to lvm.conf.

I got an error that said:

Cannot use device /dev/drbd/ with duplicates

I just added this to the devices section of lvm.conf:

filter = [ "r|/dev/sda4|", "r|/dev/disk/|", "r|/dev/block/|", "a/.*/" ]

Everything works great now.
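
For anyone following along, the steps were roughly these (a sketch; the volume group name vg_vms is a placeholder):

pvcreate /dev/drbd0              # initialize the DRBD device as an LVM physical volume
vgcreate vg_vms /dev/drbd0       # the volume group sits on top of the replicated device
lvcreate -L 100G -n vm1 vg_vms   # carve out per-VM logical volumes as needed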

Comment (Matt Kereczman):
Layering LVM on top of DRBD didn't change anything meaningful. You could have used it the way it was, since there were no issues the way you had it. It's more common to put LVM underneath DRBD so it's easier to grow the device later. Also, you'll have to activate/deactivate the VGs on failover with this setup.
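
For reference, growing a resource with the more common LVM-under-DRBD layering looks roughly like this (a sketch, reusing the backing LVs from the example config above):

lvextend -L +1G /dev/drbdpool/data   # grow the backing LV on both nodes first
drbdadm resize r0                    # then have DRBD adopt the new size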
Comment (asker):
There was an issue the way I had it. It didn't work: I could not create any LVs the way it was set up.
Comment (Matt Kereczman):
This is the first time you've mentioned LVM as a requirement. You're going to confuse people who stumble across your posts in the future if you're omitting details like this.