lvm cachepool disabled after single error on raid1 - is this by design? or coincidence?

I set up my computer as follows:

Fedora 37 kernel 6.0.17-300.fc37.x86_64
lvm2-2.03.11-9.fc37.x86_64
sda - SSD
sdb - SSD
sdc - HDD
sdd - HDD
lvcreate -n root -L50G --type=raid1 rootvg sdc sdd
lvcreate -n rootcache -L10G --type=raid1 rootvg sda sdb
lvcreate -n rootmeta -L20M --type=raid1 rootvg sda sdb
lvconvert --type cache-pool --cachemode writeback --poolmetadata rootmeta rootvg/rootcache
lvconvert --cache --cachepool rootcache rootvg/root
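
For reference, something along these lines shows which PVs each sub-LV ended up on after the conversion (lv_name, segtype and devices are all standard lvs reporting fields):

# lvs -a -o lv_name,segtype,devices rootvg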

sdb failed, and I'm now getting the usual "Couldn't find device with uuid" errors.

So I thought it would be a good idea to disable caching, since it's a writeback cache that is no longer mirrored, but it turns out it's already disabled ???
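
Going by lvmcache(7), disabling the cache would be one of the following: --splitcache flushes and detaches the cache pool but keeps it around, while --uncache flushes, detaches and deletes it.

# lvconvert --splitcache rootvg/root
# lvconvert --uncache rootvg/root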

lvs -a shows ...

  LV                                        VG     Attr       LSize  Pool                       Origin               Data%  Meta%  Move Log Cpy%Sync Convert
  root                                      rootvg Cwi-aoC-p-  50.00g [root-cache_cpool]         [root-f37_corig]     5.83   6.58                 0.00            
  [root_corig]                              rootvg rwi-aoC---  50.00g                                                                             100.00          
  [root_corig_rimage_0]                     rootvg iwi-aor---  50.00g                                                                                             
  [root_corig_rimage_1]                     rootvg iwi-aor---  50.00g                                                                                             
  [root_corig_rmeta_0]                      rootvg ewi-aor---   4.00m                                                                                             
  [root_corig_rmeta_1]                      rootvg ewi-aor---   4.00m                                                                                             
  [root-cache_cpool]                        rootvg Cwi---C-p-  10.00g                                                 5.83   6.58                 0.00            
  [root-cache_cpool_cdata]                  rootvg Cwi-aor-p-  10.00g                                                                             100.00          
  [root-cache_cpool_cdata_rimage_0]         rootvg iwi-aor---  10.00g                                                                                             
  [root-cache_cpool_cdata_rimage_1]         rootvg Iwi-aor-p-  10.00g                                                                                             
  [root-cache_cpool_cdata_rmeta_0]          rootvg ewi-aor---   4.00m                                                                                             
  [root-cache_cpool_cdata_rmeta_1]          rootvg ewi-aor-p-   4.00m                                                                                             
  [root-cache_cpool_cmeta]                  rootvg ewi-aor-p-  20.00m                                                                             100.00          
  [root-cache_cpool_cmeta_rimage_0]         rootvg iwi-aor---  20.00m                                                                                             
  [root-cache_cpool_cmeta_rimage_1]         rootvg Iwi-aor-p-  20.00m                                                                                             
  [root-cache_cpool_cmeta_rmeta_0]          rootvg ewi-aor---   4.00m                                                                                             
  [root-cache_cpool_cmeta_rmeta_1]          rootvg ewi-aor-p-   4.00m                
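
If I read lvs(8) correctly, the (p) in the attr field means the LV is partial, i.e. one of the PVs it uses is missing; the health field reports the same thing explicitly:

# lvs -a -o lv_name,lv_attr,lv_health_status rootvg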

If I look at LV status with lvdisplay, I see that root, root-cache_cpool_cdata and root-cache_cpool_cmeta are available, but rootvg/root-cache_cpool itself is NOT available:

# lvdisplay -a rootvg/root-cache_cpool 2>/dev/null 
  --- Logical volume ---
  Internal LV Name       root-cache_cpool
  VG Name                rootvg
  LV UUID                tb9pyc-ARFp-z2PZ-1OA1-SaZB-cRDw-HFDmGL
  LV Write Access        read/write
  LV Creation host, time pru-ch.internal.booths.org.uk, 2023-01-09 11:13:04 +0100
  LV Pool metadata       root-cache_cpool_cmeta
  LV Pool data           root-cache_cpool_cdata
  LV Status              NOT available
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
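
Going by dmsetup(8), the kernel-side state of the cached LV can also be checked with something like this (assuming the device-mapper name follows the usual VG-LV convention, i.e. rootvg-root):

# dmsetup status rootvg-root
# dmsetup table rootvg-root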

Is this behaviour by design, to protect data when a writeback cache is no longer mirrored, or is it just a lucky coincidence?
