Score:1

Configure ceph block storage pool in cockpit


I have a plain install of Oracle Linux 9.1 with cockpit and cockpit-machines on machine C. On machines A and B I have a ceph cluster configured which defines an rbd block storage pool for VM disks. Having copied a minimal config and the keyrings onto machine C, I can "access" the ceph cluster, in the sense that the command ceph osd lspools on machine C returns all configured pools as expected.

In the cockpit UI, however, the only options I see for configuring a new storage pool are filesystem and network file system, nothing else.

How can I configure the existing rbd storage pool to be available to new VMs I create in the cockpit UI?

Score:2

I'm not familiar with cockpit, but I am with ceph. Reading the cockpit docs I would probably choose physical disk as the source, where the physical disk is a mapped rbd device. If you already have a pool dedicated to rbd usage, create one (or more) rbd images of the required size:

rbd -p <pool> create -s <size> <name>
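
For example (the pool name mypool, the image name vm-disk1 and the 20 GiB size are placeholders), a VM disk image could be created and verified like this:

rbd -p mypool create -s 20G vm-disk1
rbd -p mypool info vm-disk1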

Then map that rbd image on the hypervisor. For automatic mapping after boot there's an example file within the /etc/ceph directory:

# cat /etc/ceph/rbdmap 
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
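
An actual (uncommented) entry for the hypothetical image above would follow the same pattern; the client id and keyring path depend on which client key you copied to the hypervisor:

mypool/vm-disk1         id=admin,keyring=/etc/ceph/ceph.client.admin.keyring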

To apply the mapping at boot you need to enable the rbdmap service:

# systemctl enable --now rbdmap.service 
● rbdmap.service - Map RBD devices
   Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

When the rbd image is mapped to the hypervisor you should see it in the lsblk output as an rbd device, and under /dev as well:

# lsblk 
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1  458K  0 rom  
rbd0                252:0    0   10M  0 disk

# ls -l /dev/rbd0
brw-rw---- 1 root disk 252, 0  9. Feb 12:18 /dev/rbd0

So from the hypervisor's perspective it's now a local disk which you can use to create storage pools.
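
As a sketch (the pool name mapped-rbd and the device /dev/rbd0 are assumptions), such a mapped device could be turned into a libvirt disk-type pool from the command line, which cockpit then picks up; pool-build writes a partition table to the device if it doesn't have one yet:

# virsh pool-define-as mapped-rbd disk --source-dev /dev/rbd0 --target /dev
# virsh pool-build mapped-rbd
# virsh pool-start mapped-rbd
# virsh pool-autostart mapped-rbd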

Score:0

While creating rbd storage pools in the Cockpit UI is currently impossible, I found a way of creating the pool for use with libvirt. The pool is properly displayed in the UI and I can also create new volumes there.

  1. Log in to your ceph admin node and create a new client token:

ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=mypool'

  2. Log in to your virtualization host (the machine managed with Cockpit) and configure the pool for libvirt.

virsh secret-define --file secret.xml
virsh secret-set-value --secret UUID --base64 "$(ceph auth get-key client.libvirt)"
virsh pool-define mypool.xml

  3. The pool should now be visible in the Cockpit UI; it can be activated there and new volumes can be created.

The configuration files should look similar to these. secret.xml:

<secret ephemeral='no' private='no'>
  <uuid>UUID</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>

mypool.xml:

<pool type="rbd">
  <name>mypool</name>
  <source>
    <name>mypool</name>
    <host name='CEPH_MON_IP'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='UUID'/>
    </auth>
  </source>
</pool>
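
If you prefer the command line, the same activation and volume creation can also be done with virsh (the volume name and size below are just examples):

virsh pool-start mypool
virsh pool-autostart mypool
virsh vol-create-as mypool vm-disk1 20G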