
NGINX open source: ensure communication is encrypted between the load balancer and backend nodes


I'm setting up a load balancer that has to communicate with its back-end nodes over TLS. This is important because the back-end nodes are not in a private network. My configuration is shown below.

The result is that Nginx returns 502 Bad Gateway and appears unable to proxy to my domains. Furthermore, since I'm using the open source version, I cannot use the `resolve` parameter inside the upstream configuration. How can I change this configuration so that Nginx encrypts the traffic between example.com -> backendX.example.com?

NOTICE: if I use IPs instead of hostnames in the upstream block, load balancing works, but I'm not sure the traffic is encrypted.
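To illustrate: with placeholder IPs like the ones below in place of the hostnames, the 502 goes away and load balancing works. What I can't tell is whether proxy_pass https:// alone is enough to make those upstream connections encrypted (the IPs here are documentation placeholders, not my real addresses):

upstream example.com {
   least_conn;
   server 203.0.113.10;   # placeholder standing in for backend1.example.com
   server 203.0.113.11;   # placeholder standing in for backend2.example.com
}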

ERROR:

*3 upstream SSL certificate verify error: (2:unable to get issuer certificate) while SSL handshaking to upstream, client: 0.0.0.0, server: lb.example.com

RESULT of openssl s_client -connect backend1.example.com:

Certificate chain
 0 s:CN = backend1.example.com
   i:C = US, O = Let's Encrypt, CN = R3
 1 s:C = US, O = Let's Encrypt, CN = R3
   i:C = US, O = Internet Security Research Group, CN = ISRG Root X1
 2 s:C = US, O = Internet Security Research Group, CN = ISRG Root X1
   i:O = Digital Signature Trust Co., CN = DST Root CA X3
upstream example.com {
   least_conn;
   server backend1.example.com;
   server backend2.example.com;
}

server {

        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        server_name lb.example.com;

        location / {
                proxy_pass https://example.com;

                proxy_ssl_trusted_certificate /etc/letsencrypt/.../chain.pem;
                proxy_ssl_session_reuse on;
                proxy_ssl_verify       on;
                proxy_ssl_verify_depth 2;
                proxy_set_header Host $host;
        }
    ssl_certificate /etc/letsencrypt/.../fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/.../privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

#### NGINX -T
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    resolver 8.8.8.8 8.8.4.4 valid=30s;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

upstream example.com {
   least_conn;
   server backend1.example.com;
   server backend2.example.com;
}

server {

        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        server_name lb.example.com;

        location / {
                proxy_pass https://example.com;

                proxy_ssl_trusted_certificate /etc/letsencrypt/.../chain.pem;
                proxy_ssl_session_reuse on;
                proxy_ssl_verify       on;
                proxy_ssl_verify_depth 2;
                proxy_set_header Host $host;
        }
    ssl_certificate /etc/letsencrypt/.../fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/.../privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = lb.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80 default_server;
    listen [::]:80 default_server;

    server_name lb.example.com;
    return 404; # managed by Certbot


}

# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.

ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;

ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";

nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

djdomi: Why do you use https and a port for the upstreams? A port is only needed if you use non-standard ports.
AndreaCostanzo1: @djdomi I also tried without specifying ports; that was just a trial to make sure they weren't the problem.
djdomi: Can both backends be resolved?
AndreaCostanzo1: @djdomi Yes, they are both reachable via their URLs in a normal browser.
AndreaCostanzo1: @djdomi I fixed the example. In any case, the domain used by my DNS and the domains used by the other nodes are all different.
djdomi: Please share the output of nginx -t and then nginx -T for each server, because IMHO the SSL certificate part looks strange. [Look here for an example](https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-load-balancing-with-ssl-termination).
AndreaCostanzo1: @djdomi Added below. The back-end nodes are pre-existing Apache servers that we have used for a long time; I was just trying to put Nginx in front of them as a load balancer. If I use IPs instead of URLs in the upstream block everything works fine, but then I don't know whether the communication is encrypted.
Michael Hampton: You need to have specified a `resolver`, but I don't see one anywhere. According to the [docs](https://nginx.org/r/resolver), it must go in an `http`, `server`, or `location` block.
AndreaCostanzo1: There is one (in my http settings)! But it still doesn't work.
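For reference, this is the relevant fragment of my http block, copied from the nginx -T output above:

http {
    ...
    resolver 8.8.8.8 8.8.4.4 valid=30s;
    ...
}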
Michael Hampton: OK, I see it now. I looked for your error_log entries, but I can't find them in your post. Please make another request and then post the new error_log entries.
AndreaCostanzo1: @MichaelHampton I found the issue but not how to solve it: during the SSL handshake I'm not sending the trusted CA certificate. How can I fix it? ERROR: *3 upstream SSL certificate verify error: (2:unable to get issuer certificate) while SSL handshaking to upstream, client: 0.0.0.0, server: lb.example.com
AndreaCostanzo1: @MichaelHampton I changed the trusted CA certificate to /etc/ssl/certs/ca-certificates.crt and now the error is *1 upstream SSL certificate does not match "upstream-name" while SSL handshaking to upstream
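If I understand the docs correctly, the mismatch happens because nginx verifies the backend certificate against the name taken from proxy_pass, which here is the upstream group name example.com rather than the real backend hostname. The untested sketch below is the direction I'm considering; proxy_ssl_name and proxy_ssl_server_name are standard ngx_http_proxy_module directives, but the exact values are an assumption on my part (and a single proxy_ssl_name obviously can't match two different per-backend certificates):

location / {
    proxy_pass https://example.com;

    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    # CA bundle that contains the backends' Let's Encrypt chain
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    # assumption: override the name verified against the backend certificate
    # (by default nginx uses the host from proxy_pass, i.e. "example.com")
    proxy_ssl_name backend1.example.com;
    # assumption: also send that name as SNI to the backend
    proxy_ssl_server_name on;
    proxy_set_header Host $host;
}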