Score:0

Give user permission to read all directories and files


I'm a newbie, trying to create a user on Ubuntu Server 22.04 with read permission to all existing directories and files, so it can back everything up by copying it via SFTP to the backup server (which is a Windows Server 2019). I tried to apply capabilities(7), but I guess I'm doing it wrong, because the backup-user can't read directories and files that have no "others" permissions (e.g. rwxrwx---). What am I doing wrong? Is there any other way to create a user with "read only" permission to all files and directories on the system?

I created the user backup-user with:

sudo useradd backup-user -c "User to execute backups" -d /

And defined a password with:

sudo passwd backup-user

Then edited the file /etc/security/capability.conf with:

sudo nano /etc/security/capability.conf

Adding at the end of file the line:

cap_dac_read_search backup-user

Then logged in as backup-user and tried:

cd /var/log/apache2

Receiving:

-sh: 1: cd: can't cd to /var/log/apache2

I also tried adding, at the end of /etc/security/capability.conf, the line instead:

cap_dac_override backup-user

But got the same results.

The permissions on /var/log/apache2 directory are:

drwxr-x---  root      adm  

When logged in as backup-user, the output of capsh --print is:

Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
Ambient set =
Current IAB:
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=1004(backup-apesp) euid=1004(backup-apesp)
gid=1004(backup-apesp)
groups=1004(backup-apesp)
Guessed mode: UNCERTAIN (0)

When logged in as a sudo user, the output of sudo capsh --print is:

Current: =ep
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read,cap_perfmon,cap_bpf,cap_checkpoint_restore
Ambient set =
Current IAB:
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
 secure-no-ambient-raise: no (unlocked)
uid=0(root) euid=0(root)
gid=0(root)
groups=0(root)
Guessed mode: UNCERTAIN (0)
A full system backup is bad behaviour. Only back up personal files. If you need to restore a full system, a reinstall is better. If you want to continue anyway, see for instance https://askubuntu.com/questions/7809/how-to-back-up-my-entire-system#7811 If you want to use tools that do this, see https://askubuntu.com/questions/2596/comparison-of-backup-tools I would suggest using something from the 2nd link.
Cintya:
Yes, I researched a little more, and I think, since it's a webserver, backing up `/var` (where the `www` and `log` directories are) and `/etc` (where some configurations are) would be enough. Nevertheless, I still have permission issues, since many of these files are `-rw------- root root`, and I would not like to change the owners or permissions, so I'll try what @Spaceship Operations suggested.
Files in /var/www should all be owned by the user and group set in the Apache config, not by root. And no, root is not a good user for Apache, if you used that ;)
user535733:
That the other machine storing the backup is NOT a Linux machine is moderately important. That information should be in your question, so you get more useful answers. Honestly though, it's just a backup. It's not hard if you use the proper tools. There is no need to reinvent this wheel with clever users, permissions, etc.
Cintya:
The owner of `/var/www` is `www-data:www-data`, but for `/var/log` it's `root:syslog`, and the files inside have the owner `root` or `syslog`, some with permission for the owner only (`-rw-------`). Since I was told that, for security reasons, the user should have only the permissions he needs, I thought it would be inadequate to put `backup-user` in `sudo`, so I tried to give him permission to read all files so he could copy them to the backup location.
Score:1

You can achieve this using Access Control Lists (ACLs), which allow you to grant extra file permissions to select users or groups without changing the owner or group of the file. (Credit goes to this answer.)

First of all, to get the dependency out of the way, ensure you have the setfacl command (e.g. just type it in a terminal), and if not, install the acl package, which contains it.
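
On Ubuntu Server 22.04 that would be:

sudo apt-get install acl

which provides both setfacl and getfacl.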

Then, you can use setfacl to give backup-user the permission to access any files and directories that you want to backup, without having to change their owner or group:

# Paths the backup user should be able to read (run this as root):
backup_paths=(
   /home/jack
   /home/mary
   /var/log
)

for path in "${backup_paths[@]}"; do
    setfacl -Rm backup-user:rwX "$path"
    setfacl -Rdm backup-user:rwX "$path"
done

A few notes about the setfacl invocations:

  • The -R switch applies the permission recursively.

  • The -m switch means "modify": it is what tells setfacl to add or change the given ACL entries (see setfacl --help).

  • Observe the capital X in :rwX, which grants execute/search permission only on directories and on files that are already executable by someone, so regular data files don't become executable.

  • We have to invoke the command twice: once without -d to change the permissions of existing files and directories, and a second time with -d to change the default ACL on directories, which causes the permissions to be applied to any files/directories created under them in the future.

See also setfacl --help and man setfacl.
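
As a quick sanity check after running the loop above (a sketch using the /var/log/apache2 directory from the question), you can inspect the resulting ACL with getfacl:

getfacl /var/log/apache2
# Should now list an entry along the lines of
#   user:backup-user:rwx
# next to the usual owner/group/other lines, plus matching
# default:... entries coming from the -d invocation.

If the user:backup-user entry is missing, the loop didn't touch that path.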

If you really want to allow backup-user to access everything, you can invoke the same two commands on / instead.
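
Since the question only asks for read access, a read-only variant of the same loop (just a sketch; run as root, with rX so directories stay traversable but files don't become writable or executable) would be:

for path in "${backup_paths[@]}"; do
    setfacl -Rm  u:backup-user:rX "$path"
    setfacl -Rdm u:backup-user:rX "$path"
done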

Copying files to another machine will mess up owners and groups. The only correct answer is to use an existing tool.
Spaceship Operations:
The note at the end to invoke them on `/` exists because OP asked how to make the backup user "read all files", but unless I missed something while skimming his post, he didn't say he's backing up the entire system. Is your comment related to that? Or is there an inherent problem when using ACLs? I also want to point out that OP did not specify the backup method. e.g. `rsync` has options for copying the owner, group and permissions, but it does not copy the extended permissions assigned with `setfacl`. Unless ACL permissions are copied to the new machine somehow, your comment is irrelevant.
Spaceship Operations:
And even if they *are* copied to the new machine, your comment is still irrelevant. Even the regular owner and group of a file do not map from one Linux installation to another. So if he's moving files from one machine to another at all, he'll be dealing with this problem no matter what he does, whether ACL permissions are used or not.
Cintya:
I think this will work! For the `/var/log` directory (which has a lot of `-rw------- root root` files) it worked just fine! I just have to be less lazy and select which directories/files to apply the ACL to - for example, I noticed that I can't apply it to the whole `/etc` directory, because if it's applied to `/etc/ssh/ssh_host_rsa_key`, the SSH (and SFTP) connections stop working (I ran `sudo sshd -t` and received `"Permissions 0640 for '/etc/ssh/ssh_host_rsa_key' are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored."`).
Cintya:
Actually, I'm not sure if I am doing this backup right... It's a webserver (apache), and this would be my "plan B" backup ("plan A" is the virtual machine snapshots), and since it has a lot of content (near 1.5 TB) that doesn't change much over time, I thought of mirroring the content and config files to a network drive, comparing with FreeFileSync through SFTP (the webserver is Ubuntu 22.04, but my backup server, and all my network, are Windows). I know the file/directory permissions will be lost but, as I said, it's a "plan B" - I hope I never need these files, but, just in case...
Spaceship Operations:
@Cintya Since it seems like you're having trouble doing the backup, I added a new top-level answer containing a more or less full and easy solution using only rsync, SSH, and a suggested systemd service for automation.
Score:0

Since your comment to the other top-level answer said that you're actually trying to mirror some server directories to another machine (with some difficulties), I'll write another solution that I think will get the job done fairly easily.

First of all, rsync might be the perfect tool for this. It can mirror entire directory trees while preserving their properties (owner, group, permissions, timestamps, etc. - and timestamps are important for some servers), contains options to selectively whitelist/blacklist files within the mirrored tree, and performs the task as efficiently as possible: if a backup has already been performed, it compares the source and destination trees and only transmits new changes, so e.g. if the source and destination files are identical in timestamp and size, the transfer is skipped (and you can customize this behavior).

The most basic usage of rsync for backup could be as little as this:

rsync -ai SRC DEST

Where each of SRC and DEST can be a local directory or a remote location. We'll get to these in a bit, but let's explain the switches first:

  • -a (which stands for --archive) is actually a shorthand for -rogptlD. The meaning of those switches is:

    • r: recursive
    • o: preserve owner
    • g: preserve group
    • p: preserve permissions
    • t: preserve timestamp
    • l: preserve symlinks as symlinks
    • D: preserve special files (devices/fifos/etc.) as special files
  • -i stands for "itemize changes", which prints a line for each transmitted/updated file or directory, beginning with a multi-column prefix that explains what is being updated. (The format of its output is out of the scope of this answer, but you can open man rsync and look for the --itemize-changes, -i section, which contains a full description of what these columns mean.)

  • The -m switch can also be used to prune (i.e. not back up) empty directories, but you might want to be careful using it if there are any empty directories that your server requires to exist.

As you see, with just two switches, rsync is already performing probably 90% of the task. If you want to selectively mirror certain files under SRC, you can use one or more of these options:

  • --files-from=FILE: whitelist child paths (relative to SRC) listed in FILE. Nothing under SRC except the paths listed in FILE would be backed up. Each path is on a separate line, empty lines are ignored, and lines beginning with # are regarded as comments and ignored.

  • --exclude=PATTERN: glob pattern for excluding files, e.g. *.txt causes rsync to exclude all .txt files from the backup.

  • --exclude-from=FILE: read exclusion patterns from FILE, each on a separate line.

  • --include=PATTERN: overrides for exclusion patterns. Filter rules are checked in the order they are given and the first match wins, so an include has to come before the exclude it overrides: with --include=items.txt --exclude=*.txt, all .txt files are excluded except those named items.txt, which are included. (Note that unlike --files-from, using --include does not automatically imply that everything else is excluded, other than what is explicitly excluded with --exclude and --exclude-from.)

  • --include-from=FILE: like --include but read patterns from FILE

rsync has a myriad more options that you can check out with man rsync, but I mentioned these because they are the most likely to be needed in the most common backup tasks.
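
As an illustration (the list-file name and paths here are just placeholders, and DEST stands for any destination, as above), a selective mirror of /var could look like:

# /root/backup-list.txt -- paths relative to /var, e.g.:
#   www
#   log/apache2

# Note: --files-from disables the recursion normally implied by -a,
# so -r is added explicitly to descend into the listed directories.
rsync -air --files-from=/root/backup-list.txt --exclude='*.tmp' /var/ DEST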

With that out of the way, now onto the "How to use rsync with a remote host" part.

If you are not fixated on using SFTP, to my understanding the preferred (and easiest) way for using rsync with remote machines is using SSH. Since SFTP is just a file transfer protocol over SSH, I'll assume that you are already able to SSH from the server to the backup machine (or vice versa). So using rsync over SSH would be as simple as:

# On the server
rsync -ai /path/to/src user@host:/path/to/dest

# Or on the backup machine
rsync -ai user@host:/path/to/src /path/to/dest

With that out of the way, the only thing left is ensuring that rsync can access the files and directories you want to back up on the server. The simplest way is to run rsync as root, and you can make that easier and safer (i.e. avoid ad-hoc shell scripts) by creating a systemd service for it on the server.

# /etc/systemd/system/rsync-backup.service

[Unit]
Description=rsync backup service

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a -e "ssh -i /root/.ssh/backup_key" SRC... user@host:DEST

Note that rsync's --password-file option only works when connecting to an rsync daemon, not over an SSH transport, so for SSH the usual approach is key-based authentication: the -e option above points ssh at a dedicated private key (here /root/.ssh/backup_key, just an example path), which you should obviously keep readable by root only (chmod 600 /root/.ssh/backup_key).
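
A minimal sketch of setting up such a key (the file name is just an example; the Windows machine needs to run an OpenSSH server, where authorized_keys handling can differ from Linux):

# On the Ubuntu server, as root: generate a dedicated key pair with no passphrase
ssh-keygen -t ed25519 -f /root/.ssh/backup_key -N ''

# Install the public key for the account on the backup machine
# (on Windows OpenSSH you may have to append it to the user's
# authorized_keys / administrators_authorized_keys file manually)
ssh-copy-id -i /root/.ssh/backup_key.pub user@host

# Verify that a non-interactive login works
ssh -i /root/.ssh/backup_key user@host exit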

Then you can create a timer to run it automatically everyday:

# /etc/systemd/system/rsync-backup.timer

[Unit]
Description=Run backup everyday

[Timer]
OnCalendar=daily
Accuracy=1min
Persistent=true

[Install]
WantedBy=timers.target

Then run systemctl daemon-reload to have systemd read the new unit files. Then you can enable the timer with:

systemctl enable rsync-backup.timer

This would perform the backup every day, with rsync running as root and having permission to access everything on the local machine, which to my understanding should be safe as long as you're not relying on wacky shell scripts invoking external commands left and right as root, which would be a nightmare to secure. And rsync is only communicating over SSH to access the remote host, on which it doesn't need any special permissions, so it can SSH as a normal user. But if that fails for some reason, you can SSH as root too, though I'd recommend using a strong SSH key rather than a password in that case.
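
If you want to sanity-check the units before relying on the timer (unit names as defined above), something like this should do:

sudo systemctl start rsync-backup.service      # trigger one backup run immediately
sudo journalctl -u rsync-backup.service -e     # inspect rsync's output and any errors
systemctl list-timers rsync-backup.timer       # confirm the next scheduled run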

Unless there is something I'm missing here, this sounds like a complete and fairly easy solution for your server backup needs.

Edit: Also, while I don't think it's going to be a solution to your server-backup-with-too-much-permissions problem, you might want to check out rclone, which is an even better solution than rsync for many use cases. It advertises itself as "The Swiss army knife of cloud storage". It supports many providers including Google Drive, Dropbox, Amazon, Azure, and general protocols like SFTP, SMB, WebDAV, etc., and has the ability to mount remotes from any of those providers/protocols as a normal directory, which allows you to access them just as if they were ordinary files on your filesystem, using any programs you want. The mounting does not require root (it's performed using FUSE), so any user can mount remote directories without sudo or any other special permissions.

A quick walk through its usage:

# List available backends
rclone help backends

# Start CLI wizard which walks you through
# the creation of a new remote:
rclone config

# There are many commands that you can use,
# such as `rclone {copy|move|sync} SRC DEST`,
# but probably the most intuitive way to use it
# is by mounting the remote as a local directory:
mkdir -p ~/Remotes/GDrive
rclone mount gdrive:/ ~/Remotes/GDrive
# This will run in the foreground so you should
# switch to another terminal window.

# Then you can access files on the remote just as if
# they were local files, using any program you want:
cd ~/Remotes/GDrive
ls  # print list of files on the remote

# Copy/download files from the remote to your machine
cp -v -- *.png ~/Pictures

# Copy/upload files from your machine to the remote
cp -v -- ~/Music/*.ogg .

touch NEW-FILE   # Create new file
vim script.py    # Edit new/existing file

# Browse remote with graphical file manager
dolphin . &

# etc.

To unmount the remote, simply return to the terminal where rclone mount is running, and kill it with Ctrl-C. Or if you spawned it in the background, you can kill it with killall rclone.
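
Alternatively, since it's an ordinary FUSE mount, unmounting it explicitly should also work:

fusermount -u ~/Remotes/GDrive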

Cintya:
Thanks, this is very useful! I didn't know about `rclone`, but I've read about `rsync`; I was just avoiding it so as not to "overload" the Ubuntu server with this task (comparing 1.5 TB of files), using instead FreeFileSync on the backup server (a Windows Server dedicated only to backups, as we have more than 20 servers, some Windows, some Ubuntu, some CentOS, and need to back them all up), but I think I'll end up using `rsync` or `rclone` with the `backup-user` as `sudo`, even though people tell me it's not the safest thing to do.