Score:10

How can I perform a secure automatic file transfer between two servers?


I would like to transfer some files from a server A to a server B daily (for backup purposes). However, I cannot find a way that does not create security breaches. My goal is that someone with sudo rights on server A cannot exploit this transfer to connect to server B.

My base idea was to run a cron job with an scp (or similar) command in it. Obviously, using a password-based SSH connection between A and B does not work, and using a key-based SSH connection would, as far as I know, allow a user of server A to connect directly to B.

I'm no security expert, I may be missing the obvious here. Is there a way to achieve what I want?

I do not want users of server B to be able to connect to server A either.

pa4080:
I would fetch the file via SSH from Server B, instead of pushing it from Server A, by running something like `rsync -avz serverA:/path/to/file /local/path/to/store/` on Server B. Another way is to set up a third instance that is able to log in via SSH to both servers and use `scp` to copy directly from A to B (this is not supported by `rsync`).
I should have added that I do not want the users of server B to be able to connect to server A either. However, I like the third-server approach.
Ubuntu User:
What you could do is encrypt your data with the gpg command before sending it over, then decrypt it on the other end. That works if you enter the passphrases manually and don't store them anywhere a sudo user could read them. However, if you want this done automatically, you will need something else.
user535733:
The problem is not "*secure automatic file transfer*". That's easy. The problem is "*someone with sudo rights on server A cannot exploit this*". In other words, you have a problem with humans, not with technology, software, or workflow. Sudo should be granted only to trustworthy people, and a supervisor should be, well, supervising their work. Like most human problems, software-based solutions are unlikely to remain effective for long against a determined insider threat.
David Foerster:
Whenever you need to hide something from, or forbid an action to, the `root` user account, you need to rethink your user access control rules. In most cases you should set up a separate user account whose access is restricted to exactly those privileged actions it needs.
spuck:
What "security breach" are you concerned with? Users being able to send unintended files? If you don't trust users of server A on server B (and vice-versa), how does that improve by adding a server C that trusts (and is trusted by) both A and B?
spuck:
Do not give unrestricted sudo rights to untrusted users.
Score:19

You can't hide anything on a system from someone who has root access. Even if you use an encrypted home directory, it is decrypted while you are logged in, and the root user can access the data.

Probably the simplest way to accomplish this task is to set up a third instance that is able to log in via SSH to both Server A and Server B. Then you can use the scp command (on that third instance) to copy the file from A to B in the following way:

scp -3 serverA:/path/to/the/file serverB:/path/to/store/
  • The hosts serverA and serverB are configured in ~/.ssh/config on the third instance.

Note the option -3: it causes the third instance to operate as an intermediate server. If this option is not present, Server A will be instructed to connect directly to Server B, but it will need credentials for that. This option also disables the progress meter.
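The ~/.ssh/config entries on the third instance could look roughly like this (the hostnames, user name, and key paths are placeholders, not values from the question):

```
# ~/.ssh/config on the third instance (example values)
Host serverA
    HostName serverA.example.com
    User backup
    IdentityFile ~/.ssh/id_ed25519_serverA

Host serverB
    HostName serverB.example.com
    User backup
    IdentityFile ~/.ssh/id_ed25519_serverB
```

A daily cron entry on the third instance can then run the scp -3 command above; neither A nor B holds credentials for the other.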

The long version of this answer is available in the revision history.

Score:5

Use rsync in a cronjob, as usual.

On the rsync client, users with "sudo" privilege will be able to see the username and password needed to access the specific folders shared by the rsync server.

But these credentials are separate from user credentials and will not grant SSH access to the server. So, if things are configured correctly, all they can reach on the server is the backups from their own machine.

On the rsync server, there is nothing which would give access to the rsync client.
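For concreteness, the daemon-mode setup described here might look like this (the module name, paths, and user are illustrative, not prescribed by the answer):

```
# /etc/rsyncd.conf on the rsync server (example)
[backups]
    path = /srv/backups/serverA
    auth users = backupuser
    secrets file = /etc/rsyncd.secrets
    read only = false
    list = false
```

On the client, a cron job could then run something like `rsync -az --password-file=/etc/rsyncd.pass /data/ backupuser@serverB::backups/`. The secrets file holds only the rsync module credential (`backupuser:password`, mode 600), not an SSH login.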

Score:4

Just send the file to serverA instead of having a user on serverA copy it from serverB. If you want to copy foo.txt from serverB to serverA, you can do it in two basic ways:

  1. Run a command on serverA to bring the file from serverB.
  2. Run a command on serverB to send the file to serverA.

In the second case, nobody on serverA needs access to serverB. So, set up a key-based ssh from serverB to serverA, and then add your cron line to the relevant user on serverB.

For example, to do this manually, you would do:

user1@serverB $ scp /path/to/foo.txt user1@serverA:/path/to/foo.txt

This way, nobody on serverA gets any access to serverB and you only need to have a user on serverB who can log in to serverA.
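Scheduled daily, the manual command above becomes a crontab entry for user1 on serverB (the time chosen here is arbitrary):

```
# run `crontab -e` as user1 on serverB and add:
30 2 * * * scp -q /path/to/foo.txt user1@serverA:/path/to/foo.txt
```

The -q flag suppresses the progress meter, which is pointless in a cron job.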

Score:2

A common way to do this is to use sftp and chroot.

On server B you would configure sshd to chroot the user coming from server A and restrict them to SFTP, e.g. by adding to /etc/ssh/sshd_config:

Subsystem sftp /usr/lib/openssh/sftp-server

Match Group backups
    ChrootDirectory /backups/%u
    AuthorizedKeysFile %h/.ssh/authorized_keys
    ForceCommand internal-sftp
    AllowTcpForwarding no

You then create a user for server A on server B, e.g. userA, with a directory /backups/userA (owned by root). You also need a backups group, to which userA is added. The Match rule in the ssh config applies to users in this group; it restricts the user from A to only the data they put in this sftp 'dropbox' and does not allow them to open a shell.
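As a sketch, the account and directory layout described above could be created like this (run as root on server B; names mirror the example):

```
groupadd backups
useradd -m -g backups -s /usr/sbin/nologin userA
# the chroot target itself must be root-owned and not writable by others
mkdir -p /backups/userA/incoming
chown root:root /backups/userA
chmod 755 /backups/userA
# give userA a writable drop directory inside the chroot
chown userA:backups /backups/userA/incoming
```

The root-ownership and permission requirements on the ChrootDirectory are enforced by sshd; it will refuse the login if they are wrong.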

For backups I'd recommend this approach, combined with a tool like duply for actually managing the backups, so that the backups can be encrypted, incremental, easily restored, etc.

Score:1

ssh and rsync (and SFTP) will always give you a login on the server you're connecting to that you may or may not be able to control.

Why not use HTTPS?

You may either

  1. run a server on A and have B GET the data
  2. or run it on B and have A POST it.

To protect the data from unrelated third parties, you'll have to encrypt the connection (use HTTPS, not plain HTTP) and use at least basic auth.

This will not protect the data from the root users on either of the two systems: both may read the data and modify it on their respective side and there's literally nothing you can do about that.

A simple HTTP server can be started in a number of ways, e.g.:

python3 -m http.server 8080

Encrypting the connection requires creating an SSL certificate and a few lines of Python code:

#!/usr/bin/python3
import http.server
import ssl

httpd = http.server.HTTPServer(('0.0.0.0', 8443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('./certs_and_key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()

Basic auth requires a little more code, but that can also be done.
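A sketch of what that extra code might look like, checking an Authorization header before serving anything (the credentials here are placeholders):

```python
import base64
import http.server

USERNAME, PASSWORD = "backup", "secret"  # placeholder credentials
EXPECTED = "Basic " + base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()

class AuthHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files only to clients presenting the right basic-auth header."""

    def do_GET(self):
        if self.headers.get("Authorization") == EXPECTED:
            super().do_GET()  # fall through to normal file serving
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="backup"')
            self.end_headers()
```

Pass AuthHandler instead of SimpleHTTPRequestHandler when constructing the HTTPServer above; basic auth only makes sense over the HTTPS socket.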

Score:1

There is a problem with the assumptions behind your question. SSH is designed to give access to a system, yet you want to use SSH and not give access to the system. Maybe SSH isn't the right tool for these requirements?

Option 1

File transfers happen to websites all the time without the uploader gaining access to the underlying system. Create or find a simple web app that can receive a file upload and write it to a desired directory. Require an app secret or password if needed. Any root user on System A will be able to see a file that stores an app secret, so they would be able to upload files in the same way. But as long as the web app that receives the upload doesn't give read access of any sort, the requirement to keep the root user off of System B is met.
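As an illustration of option 1, a minimal receiver might look like this (the directory, header name, and secret are invented for the sketch; a real deployment would add TLS and proper hardening):

```python
import http.server
import os

UPLOAD_DIR = "."              # hypothetical target directory
APP_SECRET = "change-me"      # hypothetical shared secret

class UploadHandler(http.server.BaseHTTPRequestHandler):
    """Accept PUT uploads carrying the app secret; offer no read access."""

    def do_PUT(self):
        if self.headers.get("X-App-Secret") != APP_SECRET:
            self.send_response(403)
            self.end_headers()
            return
        # keep only the basename so clients can't escape UPLOAD_DIR
        name = os.path.basename(self.path)
        length = int(self.headers.get("Content-Length", 0))
        with open(os.path.join(UPLOAD_DIR, name), "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)
        self.end_headers()

# http.server.HTTPServer(("0.0.0.0", 8443), UploadHandler).serve_forever()
```

Because the handler implements no GET, a leaked secret lets an attacker upload files but never read them back, which matches the requirement above.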

Option 2

You could still use SSH to send the files; just limit permissions. For the System B user that is used for the SCP transfer, grant write access only to a specific directory and no read permission anywhere on System B. System A can then send files into that directory but can never read them back. Again, a root user on System A will be able to access the SSH keys and upload their own files to the same place, but with no read permissions on the filesystem, what they can do is limited. This technique may be easier said than done: the System A root user could still get a shell on System B, so unless you limit permissions across the whole system, including the system executables, they may still be able to run plenty of commands without reading your data files. So it depends on what you mean by somebody "having access" to the system. Again, not using SSH is probably the better solution here.
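If you do stay with SSH, the key on System B can additionally be pinned to a single command in ~/.ssh/authorized_keys (the key and target directory below are placeholders; the `restrict` option requires OpenSSH 7.2 or newer):

```
# authorized_keys entry for the transfer user on System B:
# the key may only run scp in server-side "sink" mode into /incoming
restrict,command="scp -t /incoming" ssh-ed25519 AAAA... systemA-backup
```

`restrict` disables port forwarding, agent forwarding, X11, and PTY allocation in one keyword, and the forced command removes the general shell access this option discusses.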

Score:0
cEz

While not SSH-based, you might find that syncthing is of interest. It is available via apt, along with related packages:

$ apt search -q syncthing
Sorting...
Full Text Search...
golang-github-syncthing-notify-dev/focal,focal 0.0~git20180806.b76b458-1 all
  File system event notification library on steroids

golang-github-syncthing-syncthing-dev/focal,focal 1.1.4~ds1-4ubuntu1 all
  decentralized file synchronization - dev package

syncthing/focal 1.1.4~ds1-4ubuntu1 amd64
  decentralized file synchronization

syncthing-discosrv/focal 1.1.4~ds1-4ubuntu1 amd64
  decentralized file synchronization - discovery server

syncthing-gtk/focal,focal 0.9.4.4-1 all
  GTK3-based GUI and notification area icon for syncthing

syncthing-relaysrv/focal 1.1.4~ds1-4ubuntu1 amd64
  decentralized file synchronization - relay server

For SSH, depending on the transfer frequency (and thus the annoyance factor), you could implement Duo authentication via PAM, which would require each connection to be explicitly approved.
