I've deployed OpenStack with LVM as the backend for the Cinder block storage service. After using it for a while I can no longer allocate new volumes: the thin pool reports 100% data usage, even though the space allocated to the pool is more than 20 times the actual space used by the logical volumes in it.
Here's the output of the lvs command:
[root@storage ~]# lvs
  LV                                             VG             Attr       LSize   Pool                Origin                                          Data%  Meta%  Move Log Cpy%Sync Convert
  _snapshot-0feb1dde-f982-4723-9efa-61d696e90403 cinder-volumes Vwi---tz-k  20.00g cinder-volumes-pool volume-1cc5f7f5-4458-4e0a-9ef7-f32d7bbaed55
  _snapshot-69d31469-c83c-4933-93f6-fa7f5a9e4a90 cinder-volumes Vwi---tz-k 120.00g cinder-volumes-pool volume-595f596c-828c-4631-b366-19d77878ff97
  _snapshot-7b20dd20-5255-4bf9-8986-10abcc8ccde1 cinder-volumes Vwi---tz-k 120.00g cinder-volumes-pool volume-b51e6360-8e23-4c4b-84d1-5af47db77836
  _snapshot-7bf3c7a6-d394-478e-81d8-ef8147f3bd7a cinder-volumes Vwi---tz-k  70.00g cinder-volumes-pool volume-3dac8bc6-4c92-42c4-9939-f5cfa6ea4f64
  _snapshot-9f45bb3b-b812-4c0a-a81e-39fc4ac7afb3 cinder-volumes Vwi---tz-k  70.00g cinder-volumes-pool volume-3dac8bc6-4c92-42c4-9939-f5cfa6ea4f64
  _snapshot-a24a466d-57b5-46dd-bc4f-86780ad2743d cinder-volumes Vwi---tz-k 120.00g cinder-volumes-pool volume-595f596c-828c-4631-b366-19d77878ff97
  _snapshot-b29a608b-ade2-4df7-b193-00263e1b8ce6 cinder-volumes Vwi---tz-k  70.00g cinder-volumes-pool volume-2912e1a8-0a09-45a8-9cc4-8cb0ee004af1
  _snapshot-bbf2d3cd-f01a-467a-ada8-866fab64c2e2 cinder-volumes Vwi---tz-k  20.00g cinder-volumes-pool volume-1cc5f7f5-4458-4e0a-9ef7-f32d7bbaed55
  cinder-volumes-pool                            cinder-volumes twi-aotzD- <20.74t                                                                    100.00 12.90
  volume-0225be58-0db4-4278-8453-7da3b5785e45    cinder-volumes Vwi-aotz--  40.00g cinder-volumes-pool                                                 75.00
  volume-15275ca8-775a-455d-b727-b54b81edb352    cinder-volumes Vwi-aotz--  50.00g cinder-volumes-pool                                                  0.28
  volume-1cc5f7f5-4458-4e0a-9ef7-f32d7bbaed55    cinder-volumes Vwi-aotz--  20.00g cinder-volumes-pool                                                 44.77
  volume-2912e1a8-0a09-45a8-9cc4-8cb0ee004af1    cinder-volumes Vwi-aotz--  70.00g cinder-volumes-pool                                                 51.74
  volume-32c29bc1-ca17-4f94-8fdf-ffe2e3e542df    cinder-volumes Vwi-aotz-- 120.00g cinder-volumes-pool _snapshot-7b20dd20-5255-4bf9-8986-10abcc8ccde1  73.02
  volume-3dac8bc6-4c92-42c4-9939-f5cfa6ea4f64    cinder-volumes Vwi-aotz--  70.00g cinder-volumes-pool _snapshot-b29a608b-ade2-4df7-b193-00263e1b8ce6  49.46
  volume-595f596c-828c-4631-b366-19d77878ff97    cinder-volumes Vwi-aotz-- 120.00g cinder-volumes-pool _snapshot-b29a608b-ade2-4df7-b193-00263e1b8ce6  44.30
  volume-64785733-1c7c-4fdc-90fa-78fb24ef2ff6    cinder-volumes Vwi-aotz--  50.00g cinder-volumes-pool                                                 78.22
  volume-6872a083-2d3a-40ed-af95-13c48234415a    cinder-volumes Vwi-aotz-- 120.00g cinder-volumes-pool _snapshot-b29a608b-ade2-4df7-b193-00263e1b8ce6  83.33
  volume-b51e6360-8e23-4c4b-84d1-5af47db77836    cinder-volumes Vwi-aotz-- 120.00g cinder-volumes-pool _snapshot-b29a608b-ade2-4df7-b193-00263e1b8ce6  96.22
  volume-b5eb1da3-105a-43c9-a507-967e111eab20    cinder-volumes Vwi-aotz--  70.00g cinder-volumes-pool _snapshot-9f45bb3b-b812-4c0a-a81e-39fc4ac7afb3  83.77
  volume-bc16201e-790a-4c48-a926-b4105e3756cf    cinder-volumes Vwi-aotz--  40.00g cinder-volumes-pool                                                 75.20
  volume-eb26a41d-4b79-4f2b-8062-991f6c5c9531    cinder-volumes Vwi-aotz-- 120.00g cinder-volumes-pool _snapshot-a24a466d-57b5-46dd-bc4f-86780ad2743d  44.30
  volume-f3cacccb-32f0-4964-92b5-2310939b8cdb    cinder-volumes Vwi-aotz--  70.00g cinder-volumes-pool _snapshot-7bf3c7a6-d394-478e-81d8-ef8147f3bd7a  49.46
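To put a number on the mismatch, here is a rough sanity check I did, summing LSize × Data% over the active volumes from the lvs output above (this is an upper bound, since snapshots share blocks with their origins):

```python
# Rough total of data referenced by the active thin volumes,
# using the LSize and Data% columns from the lvs output above.
volumes_gib_pct = [  # (LSize in GiB, Data%)
    (40, 75.00), (50, 0.28), (20, 44.77), (70, 51.74),
    (120, 73.02), (70, 49.46), (120, 44.30), (50, 78.22),
    (120, 83.33), (120, 96.22), (70, 83.77), (40, 75.20),
    (120, 44.30), (70, 49.46),
]
used_gib = sum(size * pct / 100 for size, pct in volumes_gib_pct)
pool_gib = 20.74 * 1024  # thin pool size: <20.74 TiB
print(f"volumes reference ~{used_gib:.0f} GiB, "
      f"yet the pool reports 100% of {pool_gib:.0f} GiB consumed")
```

So the volumes account for well under 1 TiB, while the pool claims all ~20.74 TiB are gone — roughly a 30x discrepancy.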
and here's the lvdisplay output for the thin pool:
[root@storage ~]# lvdisplay
  --- Logical volume ---
  LV Name                cinder-volumes-pool
  VG Name                cinder-volumes
  LV UUID                v7oSRQ-0JT4-qyay-jQyO-9Cme-DM63-TFjamv
  LV Write Access        read/write (activated read only)
  LV Creation host, time storage, 2022-12-07 16:41:52 +0100
  LV Pool metadata       cinder-volumes-pool_tmeta
  LV Pool data           cinder-volumes-pool_tdata
  LV Status              available
  # open                 0
  LV Size                <20.74 TiB
  Allocated pool data    100.00%
  Allocated metadata     12.90%
  Current LE             5436250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     65536
  Block device           253:7
The thin pool is allocated from an LVM volume group named cinder-volumes, which is just under 21.83 TiB in size:
[root@storage ~]# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  633
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                23
  Open LV               14
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <21.83 TiB
  PE Size               4.00 MiB
  Total PE              5722367
  Alloc PE / Size       5436292 / <20.74 TiB
  Free PE / Size        286075 / 1.09 TiB
  VG UUID               gJ4XjL-7oI7-vY0n-mVAo-pjcP-7cc3-ZvddxI
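For completeness, the Free PE figure confirms there is still some unallocated room in the VG (so growing the pool is at least physically possible, if that turns out to be the right fix):

```python
# Free space in the VG, derived from vgdisplay's extent counts.
free_pe = 286075     # Free PE
pe_size_mib = 4      # PE Size: 4.00 MiB
free_tib = free_pe * pe_size_mib / 1024 / 1024
print(f"~{free_tib:.2f} TiB free in the VG")  # matches the reported 1.09 TiB
```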
How can I fix this without losing or corrupting any data?