nginx Windows load balancing very slow for a single request

I am using nginx with load balancing for request forwarding: each incoming request is forwarded to an IP/domain defined in the upstream block. Below is my configuration:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format post_logs escape=none '[$time_local] $realip_remote_addr "$request" $status '
                                     '$body_bytes_sent "$http_referer" '
                                     ' \n[$request_body]';
    access_log  logs/post.log  post_logs;

    upstream backend {
        server 1.1.1.1:8080;
        server 2.2.2.2:8080;
    }

    sendfile           on;
    keepalive_timeout  65;

    server {
        listen       8005 ssl;
        listen       localhost:8005 ssl;
        server_name  localhost;
        more_set_headers 'Server: ABC';

        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM,EDH+AESGCM";
        ssl_certificate     /logs/ssl/ssl.crt;
        ssl_certificate_key /logs/ssl/key.key;

        client_body_buffer_size     1k;
        client_header_buffer_size   1k;
        client_max_body_size        1m;  # handles request size
        large_client_header_buffers 2 1k;

        location /wapp/ {
            autoindex off;
            limit_except POST {
                deny all;
            }

            proxy_pass         https://backend/abc/xyz;
            proxy_read_timeout 300;
            proxy_pass_header  Server;
            proxy_set_header   Host            $host;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            add_header         X-Frame-Options "SAMEORIGIN" always;
            more_set_headers 'Server: ABC';
        }
    }
}

There is currently no other traffic on this configuration; I am only sending a single request. When I send traffic directly to 1.1.1.1:8080 or 2.2.2.2:8080 with

proxy_pass      https://1.1.1.1:8080/abc/xyz;

or

proxy_pass      https://2.2.2.2:8080/abc/xyz;

I get a response within a second, but with the upstream configuration the response takes around 2 minutes. See the Postman screenshot below.

[Postman screenshot showing the ~2 minute response time]
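
To be explicit, the only difference between the fast and the slow test is the proxy_pass target inside the same location block:

    # slow (~2 minutes): via the upstream group
    proxy_pass      https://backend/abc/xyz;

    # fast (< 1 second): direct to a single backend
    proxy_pass      https://1.1.1.1:8080/abc/xyz;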

Why does the upstream configuration take so much longer for a single request? Can anyone help?

Adding the upstream logs here:

client=::1 
method=POST 
request="POST /wapp/ HTTP/1.1" 
request_length=1410 status=200 
bytes_sent=5289 
body_bytes_sent=5128 
referer= user_agent="PostmanRuntime/7.29.2" 
upstream_addr=1.1.1.1:8080
upstream_status=200 
request_time=107.998 
upstream_response_time=107.998 
upstream_connect_time=0.036 
upstream_header_time=107.997
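
For reference, these fields come from a log_format roughly like the following (the format name and log file path here are just placeholders; the variables are standard nginx ones):

    # placeholder name and path; the variables match the fields logged above
    log_format upstream_time 'client=$remote_addr '
                             'method=$request_method '
                             'request="$request" '
                             'request_length=$request_length status=$status '
                             'bytes_sent=$bytes_sent '
                             'body_bytes_sent=$body_bytes_sent '
                             'referer=$http_referer user_agent="$http_user_agent" '
                             'upstream_addr=$upstream_addr '
                             'upstream_status=$upstream_status '
                             'request_time=$request_time '
                             'upstream_response_time=$upstream_response_time '
                             'upstream_connect_time=$upstream_connect_time '
                             'upstream_header_time=$upstream_header_time';
    access_log  logs/upstream_time.log  upstream_time;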