
Getting timeout errors with an nginx + gunicorn application on Azure App Services


I need some help with my NGINX config. My Django application is hosted on Azure App Services. Requests that go straight to Gunicorn work fine, but as soon as they go through NGINX I start getting errors. I've tried increasing the timeout, but the errors keep occurring sporadically on different endpoints, and those same endpoints work fine when I bypass NGINX and talk to Gunicorn directly, so I suspect it has something to do with the NGINX setup. Here are the logs:


    2022-01-26T10:22:03.479463450Z nginx      | 2022/01/26 10:22:03 [info] 29#29: *2245 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 169.254.130.1, server: xxxxx.com, request: "GET /api/v1/office-hours/ HTTP/1.1", upstream: "http://127.0.0.1:8000/api/v1/office-hours/", host: "xxxxx.com", referrer: "https://xxxx.vercel.app/"
    2022-01-26T10:22:03.514362267Z nginx      | 169.254.130.1 - - [26/Jan/2022:10:22:03 +0000] "GET /api/v1/office-hours/ HTTP/1.1" 499 0 "https://xxxx.vercel.app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"
     
    2022-01-26T10:23:03.513182059Z nginx      | 2022/01/26 10:23:02 [info] 29#29: *2247 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 169.254.130.1, server: xxxxx.com, request: "GET /api/v1/office-hours/ HTTP/1.1", upstream: "http://127.0.0.1:8000/api/v1/office-hours/", host: "xxxxx.com", referrer: "https://xxxx.vercel.app/"
    2022-01-26T10:23:03.513238060Z nginx      | 169.254.130.1 - - [26/Jan/2022:10:23:02 +0000] "GET /api/v1/office-hours/ HTTP/1.1" 499 0 "https://xxxx.vercel.app/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" 

    web        | [2022-01-25 17:21:07 +0000] [15] [CRITICAL] WORKER TIMEOUT (pid:32)
    2022-01-25T17:21:07.490138324Z web        | 2022-01-25 18:21:07,487 INFO glogging Worker exiting (pid: 32) gunicorn.error.info:264
    2022-01-25T17:21:07.490220225Z nginx      | 2022/01/25 17:21:07 [error] 25#25: *930 upstream prematurely closed connection while reading response header from upstream, client: 169.254.130.1, server: xxxx.com, request: "GET /api/v1/cms/content/ HTTP/1.1", upstream: "http://127.0.0.1:8000/api/v1/cms/content/", host: "xxxx.com"
    2022-01-25T17:21:07.490875232Z nginx      | 169.254.130.1 - - [25/Jan/2022:17:21:07 +0000] "GET /api/v1/cms/content/ HTTP/1.1" 502 158 "-" "axios/0.18.1"
    2022-01-25T17:21:07.490892833Z nginx      | 2022/01/25 17:21:07 [info] 25#25: *930 client 169.254.130.1 closed keepalive connection
    2022-01-25T17:21:08.673367528Z web        | [2022-01-25 17:21:08 +0000] [15] [WARNING] Worker with pid 32 was terminated due to signal 9
    2022-01-25T17:21:08.679932505Z web        | [2022-01-25 17:21:08 +0000] [43] [INFO] Booting worker with pid: 43

This is my nginx.conf:

    #user  nginx;
    worker_processes  2;  # set to the number of CPU cores; 2 cores under the Azure P1v3 plan

    error_log  /var/log/nginx/error.log debug;
    pid        /var/run/nginx.pid;


    events {
        worker_connections  1024;
    }


    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;

        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

        access_log  /var/log/nginx/access.log  main;

        sendfile        on;
        #tcp_nopush     on;

        keepalive_timeout  65;

        #gzip  on;

        include /etc/nginx/conf.d/*.conf;
    }

This is my default.conf:

    server {
        listen 80 default_server;

        error_log /dev/stdout info;
        access_log /dev/stdout;

        client_max_body_size 100M;


        location /static {
            root /var/app/ui/build;
        }

        location /site-static {
            root /var;
        }

        location /media {
            root /var;
        }

        location / {
            # Try the React build directory first; if the file doesn't exist,
            # route the request to the Django app.
            root /var/app/ui/build;
            try_files $uri $uri/index.html $uri.html @app;
        }

        location @app {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Assumes HTTPS is already terminated by the load balancer in front of us.
            proxy_set_header X-Forwarded-Proto "https";

            proxy_pass          http://127.0.0.1:8000;
            proxy_read_timeout  300;
            proxy_buffering     off;
        }
    }


You need to debug your application to find out why it is taking so long to respond.
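To narrow down which endpoints are slow, one option is a small WSGI timing middleware that logs any request exceeding a threshold. This is a minimal sketch, not part of your posted setup: `TimingMiddleware` and the threshold value are illustrative names, and in a Django project you would wrap the application in your `wsgi.py`.

```python
import logging
import time

logger = logging.getLogger("slow_requests")


class TimingMiddleware:
    """WSGI middleware that logs any request slower than `threshold` seconds."""

    def __init__(self, app, threshold=5.0):
        self.app = app
        self.threshold = threshold

    def __call__(self, environ, start_response):
        start = time.monotonic()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed = time.monotonic() - start
            if elapsed >= self.threshold:
                logger.warning(
                    "slow request: %s %s took %.2fs",
                    environ.get("REQUEST_METHOD"),
                    environ.get("PATH_INFO"),
                    elapsed,
                )


# Hypothetical usage in a Django project's wsgi.py:
# application = TimingMiddleware(get_wsgi_application(), threshold=5.0)
```

Once the slow endpoints are logged, you can profile those views directly instead of guessing from the nginx side.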

There is a one-minute delay between nginx sending the upstream request and nginx giving up waiting for a response.

nginx can connect to the upstream application via TCP and send the HTTP request, but the application doesn't send a response before nginx times out.
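As a stopgap while you investigate, note that the `WORKER TIMEOUT` and "terminated due to signal 9" lines in your log are Gunicorn itself killing a stuck worker (its default worker timeout is 30 seconds, well below your 300-second `proxy_read_timeout`). You can raise it with the `--timeout` flag; the module path below is a placeholder for your own project:

```shell
# Raise Gunicorn's worker timeout (default 30s) so slow views aren't
# killed mid-request; this buys time but doesn't fix the slow endpoint.
gunicorn myproject.wsgi:application \
    --bind 127.0.0.1:8000 \
    --workers 2 \
    --timeout 120 \
    --log-level info
```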
