
Nginx as proxy and very slow client weird issue

lv flag

We have a web API with multiple endpoints behind an nginx (1.18.0 on Ubuntu 20) proxy. Everything works fine except one scenario. When a user of our company's Android app tries to upload an attachment through one specific endpoint to the server behind nginx, and their network connectivity is quite poor, their POST requests simply don't reach the server. All other endpoints are reachable with no problems. As an admin I can see a long TCP stream client->nginx, and I see a 400 error in the nginx log with zero bytes sent to the proxied API server. Wireshark shows no POST request nginx->server. With good network speed everything works fine. We tested this scenario with a traffic shaper and yes, nginx really stops passing the upload POST request to the API server. Our config:

upstream my_backend {
    server app.ours.local:8080;   
    keepalive 60;   
}
server {
    
    listen 443 ssl;     
    server_name our.server;     
    ssl_certificate /etc/nginx/ssl/chain.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;     
    keepalive_timeout 40;
    access_log  /var/log/nginx/our.access.log upstreamlog;
    error_log /var/log/nginx/our.error.crit.log crit;
    error_log /var/log/nginx/our.error.alert.log alert;
    error_log /var/log/nginx/our.error.emerg.log emerg;
    error_log /var/log/nginx/our.error.log error;
    error_log /var/log/nginx/our.error.warn.log warn;
    
    default_type application/json;
    client_max_body_size 100M;          
    
    proxy_http_version 1.1; 
    proxy_redirect off;
    proxy_buffering on;
    proxy_read_timeout 120s;    
    proxy_pass_header Authorization;
    proxy_set_header Connection "";             
    
    proxy_connect_timeout 30s;
    proxy_send_timeout 30s;         
    client_body_timeout 60s;
    location / {
        return 404 "Not found.";
    }

    location /MobApp/ODataV4/AddServiceFile {
        proxy_set_header Host my_backend;
        rewrite ^/MobApp/ODataV4/ /api/ODataV4/APIManagement_AddMMRequestFile?company=Ours break;
        proxy_pass https://my_backend;
    }
}

We need our nginx proxy to pass long requests from slow clients through to the app server behind it, just as it does for normal-speed clients. Right now it simply doesn't pass any packets at all. I see only one message in the access log and nothing in the error logs:

[02/Jun/2023:11:32:30 +0300] status - 400 x.x.x.x - our.server to: y.y.y.y:8080: POST /MobApp/ODataV4/AddServiceFile HTTP/1.1 /api/ODataV4/APIManagement_AddMMRequestFile - bytes_sent - 0
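Since bytes sent to the upstream is 0, the first thing worth establishing is whether nginx ever received the complete request body from the client. The `upstreamlog` format referenced in the config is not shown in the question; assuming it can be edited, a sketch of extra variables that would answer that (variable names are standard nginx, the layout is a guess at the existing format):

```nginx
# Sketch only: the real "upstreamlog" definition is not shown in the question.
# $request_length       - bytes nginx received from the client (headers + body)
# $request_time         - total time on the request, including reading the body
# $upstream_bytes_sent  - bytes nginx actually sent to the upstream (nginx 1.15.8+)
log_format upstreamlog '$time_local status - $status $remote_addr - $host '
                       'to: $upstream_addr: $request $uri '
                       'req_len - $request_length '
                       'req_time - $request_time '
                       'upstream_sent - $upstream_bytes_sent';
```

A small `$request_length` on the failing requests would mean the body never fully arrived at nginx, pointing at the client side of the proxy rather than the upstream side.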

We've tried playing with proxy_* buffer and client_* buffer settings, as well as timeouts and caching on/off. It doesn't help. At this point I have no idea how to make nginx handle such requests, so any hints, ideas, or guesses are welcome - please help :-)
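For reference, these are the directives most commonly involved when slow uploads die at the proxy. This is a sketch of things to test one at a time under the traffic shaper, not a known fix, and all values are guesses:

```nginx
# Inside the server{} or the affected location{} block.

# Time allowed between two successive reads of the client body
# (default 60s; a throttled uplink sending a large body can exceed it).
client_body_timeout 300s;

# Read the whole body before contacting the upstream, so upstream
# timeouts don't start ticking while the client is still sending.
proxy_request_buffering on;   # "on" is the default; "off" streams the body

# Where and how the buffered body is kept before spilling to disk.
client_body_buffer_size 1m;

# With request buffering on, these only apply after the body is complete.
proxy_send_timeout 120s;
proxy_read_timeout 120s;
```

Comparing the behaviour with `proxy_request_buffering` on versus off under the shaper would also show whether the failure happens while nginx is collecting the body or while it is forwarding it.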

djdomi avatar
za flag
Questions seeking installation, configuration or diagnostic help must include the desired end state, the specific problem or error, sufficient information about the configuration and environment to reproduce it, and attempted solutions. Questions without a clear problem statement are not useful to other readers and are unlikely to get good answers.
ng flag
Your question reads like, "why does a slow or poor network connection end up being a slow upload" to be honest. Is that what you are asking here?
Dadudki avatar
lv flag
Thanks for the reactions! >"why does a slow or poor network connection end up being a slow upload"... No. I'm asking why a slow or poor network connection ends up being no upload at all; I see not a single packet of the "heavy" POST request proxied by nginx to the app server. It looks like nginx simply returns 400 Bad Request and doesn't pass it to the server. Really sorry if my post wasn't clear enough.
Dadudki avatar
lv flag
@djdomi, hi! The desired state is that my nginx proxy passes long requests from slow clients through to the app server behind it, just as it does for normal-speed clients. Right now it simply doesn't pass any packets at all. As for attempted solutions, I've tried playing with proxy_* buffer and client_* buffer settings, as well as timeouts and caching on/off. It doesn't help.
Dadudki avatar
lv flag
The environment includes the nginx (1.18.0) proxy service on Ubuntu 20, an Android app client behind a Wi-Fi router with traffic shaping that we use as an emulator of the slow network "in the field", and the web API service behind nginx, which registers no trace of the desired POST request at all. Honestly, I had no hope that anybody would reproduce my config and environment; the hope was for some not-so-obvious or little-known configuration parameters that can affect nginx's processing of slow requests. But thanks anyway.
anx avatar
fr flag
anx
Your configuration sample does not show the `proxy_pass` upstream definition. Other than that, your next steps should be to isolate the problem. Can you manage to swap out the application for something simpler, possibly another nginx server block that accepts POST and does nothing with it, and still reproduce your application's behaviour? Can you make a request directly to your upstream application, using the same TLS version, IP version, connection properties & headers?
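The isolation step suggested above could look like this: a throwaway server block that accepts the POST and answers 200, taking the application out of the equation (port, address, and log path are arbitrary choices, not from the question):

```nginx
# Minimal stub standing in for the real upstream: point proxy_pass at
# 127.0.0.1:8081 temporarily. If the slow uploads show up in
# stub.access.log, nginx is forwarding fine and the real application
# becomes the suspect; if not, the problem is on the nginx/client side.
server {
    listen 127.0.0.1:8081;
    access_log /var/log/nginx/stub.access.log;
    client_max_body_size 100M;

    location / {
        return 200 "ok\n";
    }
}
```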
djdomi avatar
za flag
Moreover, please edit the question instead of using the comment section.
ws flag
Really? You set up Wireshark with TLS sniffing to find out if the request was going to the backend?
Dadudki avatar
lv flag
@symcbean no TLS sniffing, just a simple tcpdump.