nginx slows down my REST service by 5x

I have a simple REST service set up that natively (as a standalone executable) handles 135k rps. The service is running on localhost:8181.

autocannon benchmark running from separate machine yields:

┌───────────┬─────────┬─────────┬─────────┬─────────┬──────────┬──────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg      │ Stdev    │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼──────────┼─────────┤
│ Req/Sec   │ 58335   │ 58335   │ 109247  │ 135039  │ 106779.2 │ 18509.53 │ 58312   │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼──────────┼─────────┤
│ Bytes/Sec │ 9.74 MB │ 9.74 MB │ 18.3 MB │ 22.5 MB │ 17.8 MB  │ 3.09 MB  │ 9.74 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴──────────┴──────────┴─────────┘

When proxied with basic upstream setup through nginx the performance drops dramatically:

┌───────────┬─────────┬─────────┬─────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 13359   │ 13359   │ 14991   │ 19103   │ 15767.12 │ 1878.98 │ 13352   │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Bytes/Sec │ 2.53 MB │ 2.53 MB │ 2.83 MB │ 3.61 MB │ 2.98 MB  │ 355 kB  │ 2.52 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴──────────┴─────────┴─────────┘

Here are my nginx configurations (I have been experimenting with them slightly, which resulted in only very small improvements):

nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    multi_accept on;
}

http {
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    access_log off;
    error_log off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    keepalive_timeout 35;
    types_hash_max_size 2048;

    client_max_body_size 100M;


    include /etc/nginx/mime.types;
    default_type application/octet-stream;


    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;


    gzip off;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

sites-enabled/reverse-proxy.conf

upstream pindap_api {
    least_conn; 
    server localhost:8181;
}

server {
    listen 80;
    server_name api.pindap;

    access_log off;
    error_log off;

    location / {
        proxy_buffering off;
        proxy_pass http://pindap_api;
    }
}

server {
    listen 80;

    server_name pindap;

    access_log off;
    error_log off;

    location / {
        proxy_buffering off;
        proxy_pass http://localhost:8181;
    }
}

What could be the cause of this? What else can I try?

djdomi: https://stackoverflow.com/questions/64862439/nginx-reverse-proxy-low-performance - maybe this can help?
djdomi: I am also missing `proxy_http_version 1.1;` and, to reuse connections, `proxy_set_header Connection "";` so that keepalive to the upstream is used - maybe this is the bottleneck.
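For reference, the usual upstream-keepalive pattern combines those two directives with a `keepalive` pool in the upstream block. A sketch, reusing the upstream name from the question (the pool size of 64 is an illustrative value, not something from this thread):

```nginx
upstream pindap_api {
    least_conn;
    server localhost:8181;
    # Number of idle keepalive connections each worker keeps open
    # to the upstream; without this, nginx opens a new TCP
    # connection per proxied request.
    keepalive 64;
}

server {
    listen 80;
    server_name api.pindap;

    location / {
        proxy_buffering off;
        # Both of these are required for nginx to actually reuse
        # upstream connections: keepalive needs HTTP/1.1, and the
        # Connection header must be cleared (nginx passes
        # "Connection: close" to the upstream by default).
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://pindap_api;
    }
}
```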
enko: @djdomi thank you for the link and these suggestions. I saw a marginal improvement (~ +5k rps) when adding `proxy_http_version 1.1;`; otherwise it's the same. I've tried some of the system settings from the post you mentioned - the ephemeral port range was 32768 to 60999, but setting it to 1024 to 65000 had no impact. It occurred to me that this might or might not be because this dev server is running Ubuntu 20.04 Desktop, and there might be some less-than-obvious kernel/configuration issue, so I will test it on the server release to see if this still occurs.
enko: Quick update: I decided to give another proxy a try, just to see whether it's an overall system problem. HAProxy handles 114k rps. That's still slower than the native 135k rps, but since the proxy runs on the same machine as the REST service, that is expected. I also find HAProxy's configuration a bit easier to work with, so I think I will use it instead, since this project doesn't need a static file server and will rely entirely on microservices.
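A minimal HAProxy setup for this kind of test might look like the following. This is an illustrative sketch, not the exact config used; section names and limits are assumptions:

```
# haproxy.cfg - minimal HTTP reverse proxy (illustrative)
global
    maxconn 50000

defaults
    mode http
    timeout connect 5s
    timeout client  35s
    timeout server  35s

frontend fe_api
    bind *:80
    default_backend be_api

backend be_api
    # In http mode HAProxy reuses server-side connections by
    # default (http-reuse safe), so no extra keepalive tuning
    # is needed for a basic benchmark.
    server rest1 localhost:8181
```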