
NGINX TCP load balancing is IP-sticky when it should be random, per request

Asked by A X:
I have an NGINX server being used as a TCP load balancer. It defaults to round-robin load balancing, so my expectation is that a given client IP will get a different backend upstream server for each request. Instead, each client gets the same upstream server every time, and each distinct client IP is pinned to a distinct upstream server. This is bad because my clients generate a lot of traffic, and it causes hotspots since any given client can only utilize one upstream server. NGINX does seem to slowly rotate a given client IP across the upstream servers, but what I want is for each request to be randomly assigned to an upstream, per request.

How can I make NGINX randomly assign the upstream server for every request? I tried the random keyword and it had no effect. Any help would be greatly appreciated.

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {

    upstream api_backend_http {
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        server node6.mydomain.com:80;
        server node14.mydomain.com:80;
        server node18.mydomain.com:80;
        server node19.mydomain.com:80;
        server node21.mydomain.com:80;
        server node22.mydomain.com:80;
        server node24.mydomain.com:80;
    }

    upstream api_backend_https {
        server node1.mydomain.com:443;
        server node2.mydomain.com:443;
        server node6.mydomain.com:443;
        server node14.mydomain.com:443;
        server node18.mydomain.com:443;
        server node19.mydomain.com:443;
        server node21.mydomain.com:443;
        server node22.mydomain.com:443;
        server node24.mydomain.com:443;
    }

    server {
        listen            80;
        proxy_pass        api_backend_http;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

    server {
        listen            443;
        proxy_pass        api_backend_https;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

    
}
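For context, the random directive the question says was tried would sit inside the stream upstream block as sketched below (list shortened for brevity). Note that in the stream module every balancing method, random included, selects a server per TCP connection, not per HTTP request, so clients that hold open keepalive connections will still appear sticky:

```nginx
stream {
    upstream api_backend_http {
        # 'random' (available since nginx 1.15.1) picks an upstream
        # at random for each NEW TCP connection -- all requests sent
        # over one keepalive connection still go to the same server.
        random;
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        # ... remaining nodes ...
    }
}
```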
Answer from drookie:

Because you should stop using nginx as a TCP load balancer in front of other web servers and switch to the full-fledged HTTP reverse proxy that it is. That way you will get the per-request round-robin you want (with persistent upstream connections disabled by default), instead of per-TCP-session distribution.
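A minimal sketch of the HTTP-mode equivalent the answer suggests (server names copied from the question, list shortened; timeouts and buffers omitted, so treat this as an illustration rather than a drop-in config):

```nginx
http {
    upstream api_backend {
        # plain round-robin: in the http module each HTTP request is
        # balanced individually, even when the client reuses one
        # keepalive connection to nginx
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        # ... remaining nodes ...
    }

    server {
        listen 80;
        location / {
            proxy_pass http://api_backend;
        }
    }
}
```

Note that moving the 443 listener to HTTP mode means terminating TLS on nginx (listen 443 ssl plus certificate directives), which is a behavioral change from the TCP passthrough in the question's config.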

A X:
But isn't it supposed to distribute the requests in TCP mode? @drookie
drookie:
It should and it does. Just not the way you want, because you're using it wrong.
A X:
This answer is totally unhelpful and vague because it does not actually answer the question. The question is how to fix the issue.
drookie:
You won’t get any other, because the original question was a perverted example of setting up things. And that’s the educational part.
A X:
You win the award for "worst answer on Server Fault"
drookie:
Too bad you have to earn an infinite amount of reputation to even propose something like this. With your knowledge and attitude it will take... let me think... FOREVER.