Questions tagged as ['filesystems']

I want to prevent a user from seeing the list of home directories (of other users). By default, a user cannot access other users' home directories, but can still see that they exist, like below:
[opc@instance-20210712-0826 home]$ cd /home
[opc@instance-20210712-0826 home]$ ls -lh
total 8.0K
drwx------. 10 opc opc 4.0K Nov 14 22:52 opc
drwx------. 2 otheruser otheruser 62 Nov 28 18:19 otheruser
...
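One approach (a sketch, assuming nothing else on the system relies on /home being world-readable) is to drop the read bit on /home for group and others, so names can no longer be listed while traversal of a known path still works:
# allow traversal (x) but not listing (r) for group and others
chmod 711 /home
# non-root users can still cd into their own home, but "ls /home" is denied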

Linux systems sometimes remount the root file system as read-only, e.g. if there's an I/O error.
I have a machine that becomes useless when this happens, and I end up rebooting it manually.
Is there a way to make Linux just automatically reboot when this happens? A read-only mount is useless to me.
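One possible approach (a sketch, only covering ext4 errors on the root filesystem) is to mount root with errors=panic so a filesystem error triggers a kernel panic, and have the kernel reboot after a panic:
# /etc/fstab (ext4 root): change errors=remount-ro to errors=panic, e.g.
# UUID=xxxx  /  ext4  defaults,errors=panic  0 1
# reboot automatically 10 seconds after a panic
echo 'kernel.panic = 10' >> /etc/sysctl.d/99-panic.conf
sysctl --system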
I have an application that uses a lot of space, essentially as cache data. The more cache available, the better the application performs. We're talking hundreds to thousands of TB. The application can regenerate the data on the fly if blocks go bad, so my primary goal is to maximize the space available on my filesystem for cache data and aggressively minimize the filesystem overhead.
I'm willing to sa ...
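For what it's worth, one quick piece of ext4 overhead that can be inspected and reclaimed on an existing filesystem is the reserved-blocks setting (a sketch; /dev/sdX is a placeholder):
# show reserved block and inode counts
tune2fs -l /dev/sdX | egrep -i 'reserved block count|inode count'
# drop the default 5% root reservation entirely (reasonable for a pure cache volume)
tune2fs -m 0 /dev/sdX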
Currently, I'm managing a backup service for multiple remote servers. Backups are written through rsync; every backup has its own file container mounted as a loop device. The main backup partition is an 8T xfs-formatted volume, and the loop devices are between 100G and 600G, formatted as either ext2 or ext4. So, this is the Matryoshka-like solution, simplified:
df -Th
> /dev/vdb1 xfs 8,0T /mnt/back ...
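For reference, a minimal sketch of how one of these per-server containers might be created and mounted (paths and sizes are placeholders):
# create a sparse 200G container file, format it and mount it via a loop device
truncate -s 200G /mnt/backup/server1.img
mkfs.ext4 /mnt/backup/server1.img    # answer y to the "not a block device" prompt, or pass -F
mount -o loop /mnt/backup/server1.img /mnt/backup/server1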
I'm trying to fit as many 108 GB files as possible on a 16 TB drive. If I format the drive for ext4 with default options I can fit 138 files on it.
But if I do
mkfs.ext4 -m 0 -T largefile4 /dev/xxx
146 files will fit.
What file system and options can I use to maximize the utilizable space? It is for read only access and all the files have the same size of 108 GB.
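A sketch of slightly more aggressive options on top of what you already have, assuming read-only, fixed-size files (worth verifying on a scratch device first):
# no reserved blocks, 4 MiB-per-inode ratio, and no journal (data is read-only anyway)
mkfs.ext4 -m 0 -T largefile4 -O ^has_journal /dev/xxx
# compare the usable space afterwards
df -B1 /mountpoint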

As far as I understand, WAL in PostgreSQL is designed to protect the integrity of the database. On the filesystem level, the same purpose is served by the CoW (copy-on-write) mechanism.
So WAL looks like pure overhead. Can it be safely turned off? After all, the filesystem itself can provide data integrity.
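For context, PostgreSQL has no switch to disable WAL entirely; the closest related tunables look roughly like this (a sketch, not a recommendation, assuming a CoW filesystem such as ZFS):
# reduce WAL volume; wal_level=minimal also requires max_wal_senders = 0
psql -c "ALTER SYSTEM SET wal_level = minimal;"
# full-page writes are commonly considered safe to disable on CoW filesystems
psql -c "ALTER SYSTEM SET full_page_writes = off;"
# both need a reload/restart to take effect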

How do I umount this cgroup?
Also, I have no idea whether it is important or not. It seems the cgroup comes from Docker, but I'm still not sure.
I was trying to install gns3-remote,
and it gave me a cgroup like this.
Then I deleted gns3-remote because it's meant for a fresh VPS, not my private VPS. I don't need it anymore, and I'm still not sure whether I removed it completely.
Is it okay to keep this cgroup? It's just annoying to see these files ...
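For what it's worth, a sketch of how to see which cgroup mounts exist and unmount a leftover one (the path below is a placeholder; stray Docker cgroup mounts are usually harmless):
# list cgroup-type mounts
findmnt -t cgroup,cgroup2
# unmount a specific leftover entry (example path, adjust to what findmnt shows)
umount /sys/fs/cgroup/some/leftover/mount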

My ext4 filesystem loses performance as it grows.
I have a system storing a lot of image files. This Debian-based image server stores image files divided into year folders on 1-2TB disk sets with hardware RAID-1. The files are stored in a structure of year folders with two levels of 256 folders below that.
Like
images/2021/2b/0f/193528211006081503835.tif
The files are written continuously during t ...
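One thing worth checking in this situation (a sketch; the device name is a placeholder) is whether dir_index is enabled and whether the directory indexes need re-optimizing:
tune2fs -l /dev/sdX | grep -i features    # look for dir_index in the feature list
# re-optimize directory indexes (the filesystem must be unmounted)
e2fsck -fD /dev/sdX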

Our system admin said they had mounted a 100GB disk to the EC2 instance. But when I run df -Ph, only 2.8GB is available to /:
root@myhost: df -Ph /
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 2.8G 2.5G 366M 88% /
If I run fdisk -l or lsblk, the disk shows as 100G:
root@myhost: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 100G 0 disk
|
|- nvme0n1p1 259:1 0 ...
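Assuming the root partition simply hasn't been grown to use the larger disk (which is what lsblk suggests), a typical sequence looks like this (a sketch; growpart comes from the cloud-utils-growpart package, adjust names to your layout):
# grow partition 2 of nvme0n1 to fill the device
growpart /dev/nvme0n1 2
# then grow the filesystem: ext4
resize2fs /dev/nvme0n1p2
# or, if / is xfs
xfs_growfs /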

I have a Linux software RAID5 array (md1) containing 4 x 16TB + 2 x 8TB hard drives. The 2 x 8TB hard drives were merged together (a RAID0 array; md0), working as a (fifth) 16TB device. This is just for data storage. Since the 2 x 8TB needed to be removed, I decided to shrink the number of devices to 4. Therefore I performed the following steps:
mdadm --grow /dev/md1 --array-size 46883175936
mdadm --grow ...
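For reference, the usual complete sequence for shrinking a RAID5 from 5 to 4 devices looks roughly like this (a sketch using the sizes and names from your setup; the filesystem must be shrunk first and the units double-checked):
# 1. shrink the filesystem on /dev/md1 to at most the new array size first
#    (e.g. resize2fs /dev/md1 <size>, checking units carefully)
# 2. shrink the array capacity (KiB)
mdadm --grow /dev/md1 --array-size 46883175936
# 3. reduce the device count; a backup file is needed during the reshape
mdadm --grow /dev/md1 --raid-devices=4 --backup-file=/root/md1-reshape.backup
# 4. once the reshape finishes, md0 becomes a spare and can be removed
mdadm /dev/md1 --remove /dev/md0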
I'm having the following issue.
I have a shared directory "/home/shared/users" with multiple users (let's say, for example, user1 and user2).
In /home/shared/users a script generates files (let's say file1 and file2).
/home/shared/users has these permissions: drwx-xr-x, and is owned by user admin and group administrator.
user1 and user2 aren't included in the administrator group.
In cront ...
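A common pattern for this kind of shared directory (a sketch, assuming user1 and user2 only need read access and can be put into a common group; the group name "sharedgrp" is a placeholder) is the setgid bit plus a default ACL:
# make new files inherit the directory's group
chgrp sharedgrp /home/shared/users
chmod g+s /home/shared/users
# default ACL so files the cron script creates later are group-readable
setfacl -d -m g:sharedgrp:rX /home/shared/users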

Is there a way I can increase the blocksize of my filesystem (ext4) beyond the 4KB limit?
I want to increase the random read/write speed of my filesystem by increasing the block size but I can't go any further than the page size. Is there a way to work around this?
I would appreciate any instructions or help to make this change. Thank you.
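For context, the ext4 block size is capped by the page size of the mounting system, so a direct increase generally isn't possible; the closest ext4 feature is bigalloc clusters, which changes the allocation unit rather than the block size (a sketch; it requires reformatting and has its own caveats):
# allocate in 64 KiB clusters while keeping 4 KiB blocks
mkfs.ext4 -O bigalloc -C 64k /dev/sdX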

I have a single SAN with two virtual drives. (i.e., they are separate mounts, but they are mapped to the same IP address) For example, if I do ls /dev/disk/by-path/
, I see this:
ip-172.16.100.5:3260-iscsi-iqn.[all same]-lun-0@
ip-172.16.100.5:3260-iscsi-iqn.[all same]-lun-1@
ip-172.16.100.6:3260-iscsi-iqn.[all same]-lun-0@
ip-172.16.100.6:3260-iscsi-iqn.[all same]-lun-1@
(There are two entries fo ...
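Assuming those by-path entries are two network paths to the same two LUNs, the usual way to collapse them into single devices is dm-multipath (a sketch; package and setup commands vary by distribution):
mpathconf --enable              # RHEL/CentOS helper; on Debian/Ubuntu edit /etc/multipath.conf
systemctl enable --now multipathd
multipath -ll                   # inspect the resulting multipath maps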
Yesterday our server (Ubuntu 18.04) reached 100% storage capacity,
and one of our filesystems was set to read-only mode, see:
/dev/md3 / ext4 ro,relatime,errors=remount-ro,data=ordered 0 0
. I've tried several solutions from other answers on serverfault, but none seem to fit my case.
For example, I've tried to execute the following command: sudo mount -o remount,rw /dev/md3 /
, but that results in the ...
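Since the filesystem was remounted read-only because of errors (errors=remount-ro), a plain remount usually won't be accepted until the filesystem has been checked; a typical sequence (a sketch, ideally run from a rescue environment since this is /) is:
# check and repair, then remount read-write
e2fsck -f /dev/md3
mount -o remount,rw /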

I have a problem with an Oracle Cluster Filesystem (ocfs2) attached to a cluster of Ubuntu 20.04 servers. The file system keeps getting mounted as read-only. Unfortunately, I know the cause since the system was rebooted in the middle of a file copy. I'm not surprised there are issues with some of the files but I would even be happy to delete all of these files and re-copy, but the file system keeps g ...
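If every node can unmount the volume, a filesystem check is usually the first step here (a sketch; the device path is a placeholder and the volume must be unmounted cluster-wide before running it):
# full check/repair of the ocfs2 volume
fsck.ocfs2 -fy /dev/mapper/ocfs2vol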

I want to share data among multiple AWS instances in a high-performance, low-latency manner. Giving all instances read-only access (except one instance that would handle writes) is fine. Two points about this use-case:
- Nodes attached to the volume might come and go at any time (start, stop, be terminated, etc).
- Shared data includes 1000s of potentially small files that need to be listed and have ...

This is a "general" question. Hear me out.
Let's say I have a standalone MySQL server, or even a 3- or 5-node cluster. Would it be good practice to have one filesystem per schema?
For example, schema{1..5} would go in /var/lib/mysql/data/schema{1..5}
And I am not talking about the RAID level under these filesystems here... Just plain filesystems. Let's assume I use XFS here.
What would I potentially gain from it?
...
I have recently purchased around 125 WordPress sites. They're all based around the Avada theme but have a customised child theme. FYI (from what I've learned), child themes are basically a directory into which override files are placed to change the behaviour of a parent theme. For example, if I wanted to change an aspect of the header, I'd copy the header.php
from the parent theme into the child theme ...
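To illustrate the override mechanism described above (paths are placeholders for an Avada child theme):
# copy a template from the parent theme into the child theme, then edit the copy;
# WordPress uses the child-theme version whenever it exists
cp wp-content/themes/Avada/header.php wp-content/themes/Avada-Child/header.php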

Today I needed to remove a file, but I couldn't:
[capv@TKG-VC-ANTREA-M]: C:\Users\capv> rm 'C:\Program Files\containerd\containerd-shim-runhcs-v1.exe'
rm : Cannot remove item C:\Program Files\containerd\containerd-shim-runhcs-v1.exe: Access to the path 'C:\Program Files\containerd\containerd-shim-runhcs-v1.exe' is denied.
Oddly, I found a workaround: Just mv
it instead, and it worked.
[capv@TKG-VC-AN ...

We received a ton of files from our sponsor and the files are all formatted like this
[ABCD] Title - Id - Description [RS][x264][CHKSUM].txt
I could manually rename one at a time but there are more than 500 files that are sent on a weekly basis.
RS - Reviewer Signature (usually the same person) CHKSUM - for the file or something.
What I need is the following
Title - Id - Description.txt
I need t ...
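One way to strip the bracketed parts in bulk (a sketch; the pattern assumes the exact format shown above, so test with the echo in place first):
# remove the leading "[ABCD] " and the trailing " [RS][x264][CHKSUM]" before .txt
for f in *.txt; do
    new=$(echo "$f" | sed -E 's/^\[[^]]*\] //; s/ (\[[^]]*\])+\.txt$/.txt/')
    echo mv -- "$f" "$new"     # drop the echo once the output looks right
done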

There is an ext4 filesystem that was created years ago (and resized many times since then). After a power failure it stopped mounting. When I try to mount it manually I receive an error:
# mount /dev/space/vservershosting-vs /mnt/
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
In dmesg there is more information:
[32618.800854] EXT4-fs error (device dm-44): __ext4_iget:5080: inode #2 ...
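"Structure needs cleaning" generally means the filesystem is flagged as having errors; the usual next step (a sketch; if the data matters, take an image or snapshot of the LV first) is a forced check:
e2fsck -f /dev/space/vservershosting-vs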
A really strange confluence of dependencies led to an odd database crash. Context:
- CentOS 7.9
- LVM2-2.02
- Postgresql 12, with data volume on an LVM volume, XFS formatted
- systemd
- dbus
While the system was up and stable, and the database running as normal, I performed a yum update
. During the update, several volumes were unmounted, including the one the database was mounted on. This resulted in the panic ...

I ran the following command on a BTRFS volume in order to convert the data profile from single to DUP (I'm looking to provide duplicate data on this volume to be able to repair possible bit rot corruption):
sudo btrfs balance start -dconvert=dup Volume
However, now when I run btrfs filesystem df Volume
I get two Data entries, one with the original single profile and one with the new DUP profile:
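If the first balance simply didn't finish converting every chunk, re-running it only against chunks still in the old profile usually cleans this up (a sketch):
# "soft" only touches chunks that are not yet in the target profile
sudo btrfs balance start -dconvert=dup,soft Volume
sudo btrfs filesystem df Volume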
I'm mounting a test server to a shared filesystem at work. It's a cifs mount, so I'm looking at this reference page: https://linux.die.net/man/8/mount.cifs
I want to try to mount in a "know as little as possible" manner to keep people from fudging with the shared filesystem from a test server. So in the docs I see:
uid=arg sets the uid that will own all files or directories on the mounted filesystem wh ...
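A restrictive mount along those lines might look like this (a sketch; server, share, mount point, and credentials file are placeholders):
# read-only, everything owned by root, no write bits exposed locally
mount -t cifs //fileserver/share /mnt/shared \
    -o ro,uid=0,gid=0,file_mode=0444,dir_mode=0555,credentials=/root/.smbcred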
I've been working on my server all day, doing various things. I know for certain (I found the evidence by scrolling up in one of my terminal sessions) that I had about 900GB of free space 4 hours ago. It's been about that for the last few days.
Now, I've noticed it's 1200GB.
I'm as certain as I can be that I've not accidentally (or intentionally) deleted 300GB of files. But I'm scared.
Is there a rational ...
I'm running Ubuntu 20.04. I have a directory with millions of files named like this:
master-stdout.log.20210801.024908
master-stdout.log.20210801.025524
master-stdout.log.20210801.064355
How can I delete all of master-stdout.log files?
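With millions of entries, letting the shell expand a glob for rm can fail or be very slow; find with -delete is the usual approach (a sketch; the directory path is a placeholder, and running with -print first instead of -delete is a safe dry run):
find /path/to/dir -maxdepth 1 -type f -name 'master-stdout.log.*' -delete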
I'm looking for a way to monitor when a file/folder is moved, as well as where it was moved to.
So far in my research I've come across tools such as auditd
, watch
and inotify
. While these tools are great at monitoring when a file moves, they don't keep track of where the file moved to.
I have also looked at the syslogs generated when a file is moved but they are painful to read/parse.
Are there any tool ...
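For the destination path specifically, one hedged sketch uses auditd, which records both the old and the new name for each rename under a watched directory (the watched path and key are placeholders):
# watch the directory; renames show up as rename/renameat events
auditctl -w /data/watched -p wa -k filemoves
# later, show matching events with the source and destination paths resolved
ausearch -k filemoves -i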
I have created two directories on a newly built Linux server; each directory is 200GB in size and will be used for a DB. The security team did a scan and found the vulnerability "No nodev option set to directory". I tried mount -o remount,nodev /mountpoint, but nodev is not being added to /etc/fstab.
Is it good to add the nodev option to a DB folder, and how can I do this?
Will unmount the director ...
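Remounting only changes the live mount; to make nodev persistent it also has to be added to the options column in /etc/fstab (a sketch; the device and mount point are placeholders — nodev is generally safe for a data-only filesystem, since no device nodes should live there):
# /etc/fstab line for the DB filesystem
/dev/mapper/vg-dbdata  /mountpoint  xfs  defaults,nodev  0 0
# apply without a reboot
mount -o remount,nodev /mountpoint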

On our CentOS 7.3.1611 system with MariaDB, httpd and Postfix installed, the partition /dev/mapper/centos_srv01-root
gets fuller and fuller over time.
We also recursively searched the root directory for all files over 100MB. However, there were no differences between the two days. Although /dev/mapper/centos
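Two checks that often explain this kind of creeping usage when no large files show up (a sketch):
# space still held by deleted-but-open files (e.g. rotated logs kept open by a daemon)
lsof +L1 | grep -i deleted
# per-directory usage on this filesystem only, largest last
du -x -d1 / | sort -h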
I am on CentOS 7
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VER ...