Score:0

Fully transparent reverse proxy

I'm trying to set up the following:

┌──────────────────┐            ┌────────────────────┐           ┌─────────┐    
│                  │            │                    │           │         │    
│      Router      │            │                    │           │Server 1 │    
│       NAT        │Port forward│                    │           │         │    
│                  │ ────────►  │     Server 0       │           │HTTP >   │    
│                  │            │                    │           │HTTPS    │    
│                  │            │    1.example.com  ───────────► │redirect │    
│                  │            │    2.example.com  ────┐        └─────────┘    
└──────────────────┘            └────────────────────┘  │         192.168.178.8 
                                     192.168.178.4      │                       
                                                        │   ┌─────────┐         
                                                        │   │         │         
                                                        │   │         │         
                                                        │   │Server 2 │         
                                                        └─► │         │         
                                                            │HTTP only│         
                                                            │         │         
                                                            └─────────┘         
                                                            192.168.178.7       

I want server 0 to act as a fully transparent proxy that only forwards traffic, so that clients establish their TLS connection directly with server 1/2 rather than with server 0, and so that the HTTP-01-challenge-based automated certificate generation and renewal on server 1/2 still works.

Bravo:
not sure, but would [this](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#transports) help at all?
That doesn't work. Server 0 needs at least to read the SNI field to decide which server should handle the request. You need to terminate SSL at server 0 (or otherwise be able to decrypt the packets to forward them properly).
djdomi:
You could use Squid and define two routes for those hosts.
You can use nginx on Server 0 with the stream (TCP proxy) module. nginx can parse the SNI field from the TLS ClientHello of a stream and pass the stream to a server based on it. https://serverfault.com/questions/1023756/nginx-stream-map-with-wildcard contains a configuration you can start with.
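A minimal sketch of that approach (hostnames and backend IPs taken from your diagram; server 2 is HTTP-only, so only 1.example.com gets an HTTPS route here):

```nginx
stream {
    # Map the SNI name read from the TLS ClientHello to a backend.
    map $ssl_preread_server_name $backend {
        1.example.com 192.168.178.8:443;
    }

    server {
        listen 443;
        ssl_preread on;       # parse the ClientHello without terminating TLS
        proxy_pass $backend;
    }
}
```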
Orphans:
Why not run a TCP load balancer?
Trigus:
@Orphans At the time I went with nginx, but I eventually switched to HAProxy.
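For reference, the equivalent SNI passthrough in HAProxy looks roughly like this (a sketch only; hostnames and IPs are the ones from the question):

```haproxy
# TCP-mode SNI passthrough; TLS terminates at the backend.
frontend https_in
    bind :443
    mode tcp
    # Wait for the TLS ClientHello so the SNI field is available.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend server1_https if { req_ssl_sni -i 1.example.com }

backend server1_https
    mode tcp
    server s1 192.168.178.8:443

# Plain HTTP routed by the Host header (server 2 is HTTP-only).
frontend http_in
    bind :80
    mode http
    use_backend server1_http if { hdr(host) -i 1.example.com }
    use_backend server2_http if { hdr(host) -i 2.example.com }

backend server1_http
    mode http
    server s1 192.168.178.8:80

backend server2_http
    mode http
    server s2 192.168.178.7:80
```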
Score:1

Edit: Although this setup works and is secure in itself, if you are concerned about the connection between a reverse proxy that terminates the TLS tunnel and the content server being unencrypted, you might be better off configuring upstream SSL, or a secure tunnel such as SSH or IPsec, between the reverse proxy and the content server.
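A sketch of what upstream SSL would look like if you did terminate TLS at the proxy instead (http context rather than stream; the certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name 1.example.com;
    ssl_certificate     /etc/nginx/certs/1.example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/1.example.com.key;

    location / {
        # Re-encrypt towards the backend and verify its certificate.
        proxy_pass https://192.168.178.8;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/certs/backend-ca.crt;
        proxy_ssl_name 1.example.com;
    }
}
```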


I got it working:

File structure:

nginx/
    config/
        nginx.conf
        http_server_name.js
    docker-compose.yml

nginx.conf

load_module modules/ngx_stream_js_module.so;

events {}

stream {
    js_import main from http_server_name.js;
    js_set $preread_server_name main.get_server_name;

    # Route plain HTTP by the Host header parsed in http_server_name.js.
    map $preread_server_name $http {
        1.example.com server1_backend_http;
        2.example.com server2_backend_http;
    }

    # Route HTTPS by the SNI field read from the TLS ClientHello.
    # Server 2 is HTTP-only, so it has no HTTPS mapping.
    map $ssl_preread_server_name $https {
        1.example.com server1_backend_https;
    }

    upstream server1_backend_http {
        server 192.168.178.8:80;
    }

    upstream server1_backend_https {
        server 192.168.178.8:443;
    }

    upstream server2_backend_http {
        server 192.168.178.7:80;
    }

    # Pass HTTPS through untouched; TLS terminates at the backend.
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $https;
    }

    # Sniff the Host header from plain HTTP, then forward the stream.
    server {
        listen 80;
        js_preread main.read_server_name;
        proxy_pass $http;
    }
}

docker-compose.yml

version: '3'

services:
  nginx:
    image: nginx
    container_name: nginx
    restart: unless-stopped
    volumes:
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./config/http_server_name.js:/etc/nginx/http_server_name.js:ro
    ports:
      - "192.168.178.4:80:80"
      - "192.168.178.4:443:443"

http_server_name.js

// Holds the most recently parsed Host header; read back via js_set.
var server_name = '-';

/**
 * Read the server name from the HTTP stream.
 *
 * @param s
 *   Stream.
 */
function read_server_name(s) {
  s.on('upload', function (data, flags) {
    if (data.length || flags.last) {
      s.done();
    }

    // Check whether the Host header is present.
    var n = data.indexOf('\r\nHost: ');
    if (n != -1) {
      // Determine the start of the Host header value and of the next header.
      var start_host = n + 8;
      var next_header = data.indexOf('\r\n', start_host);

      // Extract the Host header value.
      server_name = data.substr(start_host, next_header - start_host);

      // Remove the port if given.
      var port_start = server_name.indexOf(':');
      if (port_start != -1) {
        server_name = server_name.substr(0, port_start);
      }
    }
  });
}

function get_server_name(s) {
  return server_name;
}

export default {read_server_name, get_server_name}
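The Host-header extraction above can be sanity-checked outside nginx; the same logic in plain JavaScript (Node-compatible, using slice instead of the deprecated substr):

```javascript
// Extract the Host header value (without the port) from a raw HTTP request,
// mirroring the parsing done in read_server_name() above.
function parse_host(data) {
  var n = data.indexOf('\r\nHost: ');
  if (n === -1) return '-';
  var start = n + 8;                       // skip past '\r\nHost: '
  var end = data.indexOf('\r\n', start);   // end of the Host header line
  var host = data.slice(start, end);
  var colon = host.indexOf(':');           // strip an optional port
  return colon === -1 ? host : host.slice(0, colon);
}

console.log(parse_host('GET / HTTP/1.1\r\nHost: 1.example.com:8080\r\nAccept: */*\r\n\r\n'));
// → 1.example.com
```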

Documentation:
ngx_stream_js_module
ngx_stream_map_module
ngx_stream_upstream_module
ngx_stream_proxy_module

Edit:
Read this blog post for more info
