I have an LVM setup on a RedHat 8 install, and there's one Volume Group there:
Fri May 19 [michal@Ora2 ~]$ sudo vgdisplay
  --- Volume group ---
  VG Name               ol
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               38.40 GiB
  PE Size               4.00 MiB
  Total PE              9831
  Alloc PE / Size       9781 / <38.21 GiB
  Free PE / Size        50 / 200.00 MiB
  VG UUID               mzZcM4-Vrb5-nUE7-PB53-Bj3P-HCvq-GkWL31
There are 3 Logical Volumes within this Volume Group:
Fri May 19 [michal@Ora2 ~]$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/ol/swap
  LV Name                swap
  VG Name                ol
  LV UUID                48Urw2-aX0n-OOub-oi88-otti-Mm8w-NTp6Wg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-03-20 12:44:44 +0100
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/ol/root
  LV Name                root
  VG Name                ol
  LV UUID                cmQRKE-r65P-lEDL-NIDe-WjII-fPW0-r8N5Cm
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-03-20 12:44:45 +0100
  LV Status              available
  # open                 1
  LV Size                <26.41 GiB
  Current LE             6760
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/ol/data_lv
  LV Name                data_lv
  VG Name                ol
  LV UUID                k3R38o-DcYz-OMzq-tnwl-09xb-zWaX-0OUmQg
  LV Write Access        read/write
  LV Creation host, time Ora2.localdomain, 2023-03-30 13:46:44 +0200
  LV Status              available
  # open                 1
  LV Size                9.80 GiB
  Current LE             2509
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2
Also, have a look here:
Fri May 19 [michal@Ora2 ~]$ sudo lsblk -pf
NAME                       FSTYPE      LABEL UUID                                   MOUNTPOINT
/dev/sda
├─/dev/sda1                vfat              72D7-4159                              /boot/efi
├─/dev/sda2                xfs               45d85da6-d982-4603-a178-ef25e2e568b3   /boot
└─/dev/sda3                LVM2_member       SkBfps-Vnoa-Rfh9-4bHC-rzU0-Pq8q-vvL2Of
  ├─/dev/mapper/ol-root    xfs               5fb3c584-505d-4ebb-a829-e9faa398c5bf   /
  └─/dev/mapper/ol-swap    swap              48078e80-1a9f-47cc-9b7d-c0c56c269cbe   [SWAP]
/dev/sdb                   LVM2_member       YXL7yT-DV3f-rpOn-T7sk-71ar-MFht-2RmEHD
└─/dev/mapper/ol-root      xfs               5fb3c584-505d-4ebb-a829-e9faa398c5bf   /
/dev/sdc                   LVM2_member       c3jhKi-3r06-sSoj-wOnv-DHAO-BJEk-CsCQKO
└─/dev/mapper/ol-data_lv   xfs               5c357c77-57ac-48b9-bf91-dd117645c17e   /DATA
/dev/sr0
And here:
Fri May 19 [michal@Ora2 ~]$ sudo df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  820M     0  820M   0% /dev
tmpfs                   tmpfs     840M     0  840M   0% /dev/shm
tmpfs                   tmpfs     840M  8.7M  832M   2% /run
tmpfs                   tmpfs     840M     0  840M   0% /sys/fs/cgroup
/dev/mapper/ol-root     xfs        17G  3.9G   13G  24% /
/dev/sda2               xfs      1014M  514M  501M  51% /boot
/dev/sda1               vfat      599M  5.1M  594M   1% /boot/efi
/dev/mapper/ol-data_lv  xfs       9.8G  102M  9.7G   2% /DATA
tmpfs                   tmpfs     168M     0  168M   0% /run/user/1000
I want to delete the file system located on the Logical Volume called /dev/ol/data_lv, which shows up as /dev/mapper/ol-data_lv and is mounted on /DATA. What does mapper mean?
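My understanding, though I'm not sure about it, is that /dev/ol/data_lv and /dev/mapper/ol-data_lv are just two names (symlinks) for the same device-mapper node. Is that right? I could probably check it like this:

# If both names are aliases, they should resolve to the same /dev/dm-N node:
ls -l /dev/ol/data_lv /dev/mapper/ol-data_lv
readlink -f /dev/ol/data_lv /dev/mapper/ol-data_lv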
There are a few - 3 or 4 - xfs file systems on my server. What is the way to point rmfs at the right file system to delete?
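Before erasing anything I'd like to double-check which device each xfs actually sits on. I assume something like this would show the mapping between mountpoints and devices:

# List every mounted xfs file system together with its source device:
findmnt -t xfs
# Or resolve just the device backing the /DATA mountpoint:
findmnt -no SOURCE /DATA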
In the sources I've been using, like this one, there's usually only info on how to create file systems on a Logical Volume, or, like here, it's written to execute rmfs xfs, but it's not explained how to point this rmfs command at the specific xfs to erase when there are 3 of them, like in my case. One of my xfs file systems is root, so it's important for me to point rmfs at the right file system - /dev/ol/data_lv.
Other sources I've found, like this one, point the erasing command at a partition: wipefs -a /dev/sda1. Well, the xfs I need to delete sits on a Logical Volume, not on a partition, so this also isn't enough to run my command safely.
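My guess - and it is only a guess - is that the same command can simply be pointed at the Logical Volume's device node instead of a partition:

# Target the LV path rather than /dev/sda1 (my assumption, not tested):
sudo wipefs -a /dev/ol/data_lv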
What are the steps to delete my xfs from the /dev/ol/data_lv Logical Volume, and how do I point the erasing command at the specific xfs when there's more than one on my server?
I don't want to delete the Logical Volume. I want to erase the current file system in order to create 2 smaller file systems on that same Logical Volume.
Securely and permanently removing the existing data is not important; that's not the reason I want to erase the existing file system.
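To sum up, the rough sequence I have in mind, if my understanding is correct, looks like this:

# 1. Unmount the file system that sits on the LV:
sudo umount /DATA
# 2. Erase the existing xfs signature from the LV itself, keeping the LV:
sudo wipefs -a /dev/ol/data_lv
# 3. Create the two new, smaller file systems - this is the part I'm unsure
#    about, since a plain mkfs.xfs on /dev/ol/data_lv would again take up
#    the whole Logical Volume.

Is that the correct and safe way to do it?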