Score:1

How do I delete an XFS file system from a logical volume (LVM2), so that the logical volume is left without any file system?


I have a Red Hat 8 install with LVM, and there is one Volume Group on it.

Fri May 19 [michal@Ora2 ~]$ sudo vgdisplay
  --- Volume group ---
  VG Name               ol
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               38.40 GiB
  PE Size               4.00 MiB
  Total PE              9831
  Alloc PE / Size       9781 / <38.21 GiB
  Free  PE / Size       50 / 200.00 MiB
  VG UUID               mzZcM4-Vrb5-nUE7-PB53-Bj3P-HCvq-GkWL31

There are 3 Logical Volumes within this Volume Group:

Fri May 19 [michal@Ora2 ~]$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/ol/swap
  LV Name                swap
  VG Name                ol
  LV UUID                48Urw2-aX0n-OOub-oi88-otti-Mm8w-NTp6Wg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-03-20 12:44:44 +0100
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/ol/root
  LV Name                root
  VG Name                ol
  LV UUID                cmQRKE-r65P-lEDL-NIDe-WjII-fPW0-r8N5Cm
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-03-20 12:44:45 +0100
  LV Status              available
  # open                 1
  LV Size                <26.41 GiB
  Current LE             6760
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/ol/data_lv
  LV Name                data_lv
  VG Name                ol
  LV UUID                k3R38o-DcYz-OMzq-tnwl-09xb-zWaX-0OUmQg
  LV Write Access        read/write
  LV Creation host, time Ora2.localdomain, 2023-03-30 13:46:44 +0200
  LV Status              available
  # open                 1
  LV Size                9.80 GiB
  Current LE             2509
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2

Also have a look here:

Fri May 19 [michal@Ora2 ~]$ sudo lsblk -pf
NAME                     FSTYPE      LABEL UUID                                   MOUNTPOINT
/dev/sda
├─/dev/sda1              vfat              72D7-4159                              /boot/efi
├─/dev/sda2              xfs               45d85da6-d982-4603-a178-ef25e2e568b3   /boot
└─/dev/sda3              LVM2_member       SkBfps-Vnoa-Rfh9-4bHC-rzU0-Pq8q-vvL2Of
  ├─/dev/mapper/ol-root  xfs               5fb3c584-505d-4ebb-a829-e9faa398c5bf   /
  └─/dev/mapper/ol-swap  swap              48078e80-1a9f-47cc-9b7d-c0c56c269cbe   [SWAP]
/dev/sdb                 LVM2_member       YXL7yT-DV3f-rpOn-T7sk-71ar-MFht-2RmEHD
└─/dev/mapper/ol-root    xfs               5fb3c584-505d-4ebb-a829-e9faa398c5bf   /
/dev/sdc                 LVM2_member       c3jhKi-3r06-sSoj-wOnv-DHAO-BJEk-CsCQKO
└─/dev/mapper/ol-data_lv xfs               5c357c77-57ac-48b9-bf91-dd117645c17e   /DATA
/dev/sr0

And here:

Fri May 19 [michal@Ora2 ~]$ sudo df -Th
Filesystem             Type      Size  Used Avail Use% Mounted on
devtmpfs               devtmpfs  820M     0  820M   0% /dev
tmpfs                  tmpfs     840M     0  840M   0% /dev/shm
tmpfs                  tmpfs     840M  8.7M  832M   2% /run
tmpfs                  tmpfs     840M     0  840M   0% /sys/fs/cgroup
/dev/mapper/ol-root    xfs        17G  3.9G   13G  24% /
/dev/sda2              xfs      1014M  514M  501M  51% /boot
/dev/sda1              vfat      599M  5.1M  594M   1% /boot/efi
/dev/mapper/ol-data_lv xfs       9.8G  102M  9.7G   2% /DATA
tmpfs                  tmpfs     168M     0  168M   0% /run/user/1000

I want to delete the file system located on the Logical Volume called /dev/ol/data_lv, mounted on /dev/mapper/ol-data_lv.

What does "mapper" mean?

There are a few (3 or 4) XFS file systems on my server.
What is the way to point rmfs at the right file system to delete?

In the sources I've been using, like this one, there's usually only information on how to create file systems on the Logical Volume, or, like here, it says to execute rmfs xfs, but it isn't explained how to point this rmfs command at the specific xfs to erase if there are 3 of them, as in my case. One of my xfs file systems is root, so it's important for me to point rmfs at the right file system, /dev/ol/data_lv.

Other sources I've found, like this one, point the erasing command at a partition, e.g. wipefs -a /dev/sda1. But the xfs I need to delete sits on a Logical Volume, not on a partition, so this is also not enough to execute my command safely.

What are the steps to delete my xfs from the /dev/ol/data_lv Logical Volume, and how do I point the erasing command at the specific xfs if there's more than one on my server?

I don't want to delete the logical volume. I want to erase the current file system in order to create 2 smaller file systems on that same Logical Volume.
Safely and permanently removing the existing data is not important; that is not the reason I want to erase the existing file system.

Romeo Ninov:
Do you want to create a new filesystem on the LV after the erasure?
Nikita Kipriyanov:
`dd if=/dev/zero of=/dev/mapper/vgname-lvname` will fill the LV with zeros, erasing everything on it. More intelligent is `wipefs -af /dev/mapper/vgname-lvname`. But in most cases even this is a redundant step: just unmount the file system and use the volume for whatever you want to use it for; any such use will overwrite, i.e. remove, the file system.
michal roesler:
I don't want to delete the logical volume. I want to erase the current file system in order to create 2 smaller file systems on that same Logical Volume. Safely and permanently removing the existing data is not important; that is not the reason I want to erase the existing file system.
Nikita Kipriyanov:
You can't create two file systems on a single logical volume (technically you can, but it's very unusual). You need to remove the volume and create two volumes in its place. You only need to unmount the volume and then remove it; you don't need to do anything specific about the file system that was on the removed volume, because it will be destroyed when you create new volumes and new file systems on them.
Score:2

As became clear from the comments, it's not a "file system" you need to remove; you want to reclaim space in the volume group to use for something else.

For that, you need to remove not the file system, but the logical volume it resides on.

And no, it is not "mounted on /dev/mapper/ol-data_lv"; that is not a mount point but the actual device node of the logical volume. Normally there are two device nodes pointing to the volume (the other one being /dev/ol/data_lv), one a symlink to the other, but that shouldn't bother you: you can use them interchangeably. The volume is mounted on /DATA, as is evident from your df output.
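
For example, a quick way to see how the two names relate (just an illustration, assuming the device names shown in your lsblk output):

ls -l /dev/mapper/ol-data_lv /dev/ol/data_lv         # list both device nodes
readlink -f /dev/mapper/ol-data_lv /dev/ol/data_lv   # both resolve to the same dm device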

First of all, umount it:

umount /dev/mapper/ol-data_lv

It will refuse if the file system is currently in use by anything, in which case the command will display an error. To get past that, you need to identify which programs/processes are using it and terminate them; you won't be able to proceed until you have. One way to identify who is using it is to run lsof /DATA.
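
A rough sketch of that check (fuser is an alternative to lsof; the -k variant is only for when you are sure you want to kill the offending processes):

lsof /DATA         # list processes with files open under /DATA
fuser -vm /DATA    # alternative: show processes using the mounted file system
# fuser -km /DATA  # optionally kill them, use with care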

Here is the point of no return: you can't go back after you perform the following operations. Once you have unmounted the volume, remove it:

lvremove ol/data_lv

Then create new volumes in the group; if they are large enough, they will take up portions of the space previously occupied by the removed volume:

lvcreate -L5G -n new_lv_1 ol
lvcreate -l1024 -n another_lv ol

-L specifies the size of the new volume in bytes, with binary suffixes, so "5G" here means 5 GiB (5120 MiB). -n specifies the new volume's name. You can also use -l to specify the size in extents (4 MiB each in your case), so the second volume is going to be exactly 4 GiB; that way it is possible to fill the group exactly, but you need to know exactly how many extents you want to use.
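
If the goal is to fill the group exactly, a sketch of one common approach (my addition, not part of the answer above) is to check the free extent count first, or let LVM compute the size from a percentage:

vgs -o +vg_free_count ol                # how many free extents remain in the VG
lvcreate -l 100%FREE -n another_lv ol   # create the second LV from all remaining free space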

The commands above will create device nodes (again, in pairs): /dev/ol/new_lv_1 (with its twin /dev/mapper/ol-new_lv_1) and /dev/ol/another_lv (together with /dev/mapper/ol-another_lv). These are your two new volumes; create new file systems on them:

mkfs.xfs /dev/mapper/ol-new_lv_1
mkfs.xfs /dev/ol/another_lv

(As you can see, you can use either of the aliases to refer to a volume; it will work exactly the same way.)

And then create mount points and update /etc/fstab so that they mount automatically.
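
A minimal sketch of that last step, assuming hypothetical mount points /data1 and /data2 (pick your own names and mount options):

mkdir /data1 /data2
# example /etc/fstab entries:
# /dev/mapper/ol-new_lv_1    /data1   xfs   defaults   0 0
# /dev/mapper/ol-another_lv  /data2   xfs   defaults   0 0
mount -a    # mount everything from fstab, then verify with df -Th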

Score:2

As I understand it, you just want to delete the logical volume /dev/mapper/ol-data_lv.

If that's what you want, so that the space becomes available, then there is no need to wipe or erase the data on it first. If there is a reason it needs to be securely deleted, that's a different question.

In your case, just to make sure that there are no processes using that LV (which can interfere in some cases), comment out the line in /etc/fstab that mounts /dev/mapper/ol-data_lv on /DATA (or remove anything else that might mount it at boot), reboot the system, and when it comes back up, that LV and its filesystem won't be in use. You can then simply delete it with:

lvremove /dev/mapper/ol-data_lv
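
As an illustration of the fstab edit mentioned above (a sketch only; the actual line in your /etc/fstab may reference the device by UUID instead of by path):

# before:  /dev/mapper/ol-data_lv   /DATA   xfs   defaults   0 0
# after:   #/dev/mapper/ol-data_lv  /DATA   xfs   defaults   0 0
sed -i 's|^/dev/mapper/ol-data_lv|#&|' /etc/fstab   # or simply edit the file by hand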

The space that it occupied will then be free, with no filesystem on it. You can verify afterwards with:

vgdisplay

And you'll see the space that used to be occupied in the following row:

`Free  PE / Size`
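
For a more compact check, something like this also works (the vgs field names are my addition, not from the answer):

vgs -o vg_name,vg_size,vg_free ol   # show total and free space in the VG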