Tune performance of Ubuntu 20.04 for running Nginx inside Docker

I have a server on a 1 Gbps link, and I want to squeeze maximum performance (RPS) out of it. The setup is an Ubuntu bare-metal server running a Docker container with Nginx inside it.

Although the link is supposed to support 1 Gbps, the actual performance is really poor (~900 RPS).

First, I checked whether this is a CPU issue. This is my top output:

top - 08:26:56 up 5 days, 22:33,  2 users,  load average: 0.40, 0.27, 0.44
Tasks: 271 total,   1 running, 270 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.6 us,  0.4 sy,  0.0 ni, 98.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :  31986.2 total,  12924.5 free,   3720.9 used,  15340.8 buff/cache
MiB Swap:   1024.0 total,   1004.5 free,     19.5 used.  27610.2 avail Mem

As you can see, we are fine there: the CPU is mostly idle, so we are under-utilizing the available CPU power rather than running out of it.
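One caveat I'm aware of: an overall idle figure can hide a single saturated core (for example one core handling all NIC interrupts). Per-core load during the test can be checked with something like this (mpstat and pidstat come from the sysstat package):

mpstat -P ALL 1
pidstat -t -p "$(pgrep -d, nginx)" 1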

Next, I checked iftop (two samples):

TX:             cum:    816MB   peak:   36.8Mb   rates:   17.3Mb  24.2Mb  26.6Mb
RX:                     272MB           13.2Mb            7.76Mb  7.92Mb  8.90Mb
TOTAL:                 1.06GB           47.3Mb            25.1Mb  32.2Mb  35.5Mb

TX:             cum:    288MB   peak:   44.9Mb   rates:   17.0Mb  18.2Mb  25.0Mb
RX:                     118MB           32.2Mb            7.27Mb  7.30Mb  11.8Mb
TOTAL:                  406MB           76.0Mb            24.3Mb  25.5Mb  36.8Mb

This looks like very poor throughput.
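A quick sanity check on those numbers: at ~900 RPS and a TX peak of ~36.8 Mb/s, each response averages roughly 36.8 Mb/s / 8 / 900 ≈ 5 KB, and the link is below 4% of its 1 Gbps capacity, so raw bandwidth is clearly not what caps the RPS.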

This is my server HW configuration:

Intel Xeon E-2386G (6c/12t), 3.5 GHz / 4.7 GHz
32 GB ECC 3200 MHz
1 TB NVMe SSD

I run Ubuntu Server 20.04 LTS "Focal Fossa".

This is my /etc/sysctl.conf file:

fs.aio-max-nr = 524288  
fs.file-max = 611160  
kernel.msgmax = 131072  
kernel.msgmnb = 131072  
kernel.panic = 15
kernel.pid_max = 65536  
kernel.printk = 4 4 1 7  
net.core.default_qdisc = fq  
net.core.netdev_max_backlog = 262144  
net.core.optmem_max = 16777216  
net.core.rmem_max = 16777216  
net.core.somaxconn = 65535  
net.core.wmem_max = 16777216  
net.ipv4.conf.all.accept_redirects = 0  
net.ipv4.conf.all.log_martians = 1  
net.ipv4.conf.all.rp_filter = 1  
net.ipv4.conf.all.secure_redirects = 0  
net.ipv4.conf.all.send_redirects = 0  
net.ipv4.conf.default.accept_redirects = 0  
net.ipv4.conf.default.accept_source_route = 0  
net.ipv4.conf.default.rp_filter = 1  
net.ipv4.conf.default.secure_redirects = 0  
net.ipv4.conf.default.send_redirects = 0  
net.ipv4.ip_forward = 0  
net.ipv4.ip_local_port_range = 1024 65535  
net.ipv4.tcp_congestion_control = bbr  
net.ipv4.tcp_fin_timeout = 10  
net.ipv4.tcp_keepalive_intvl = 10  
net.ipv4.tcp_keepalive_probes = 5  
net.ipv4.tcp_keepalive_time = 60  
net.ipv4.tcp_low_latency = 1  
net.ipv4.tcp_max_orphans = 10000  
net.ipv4.tcp_max_syn_backlog = 65000  
net.ipv4.tcp_max_tw_buckets = 1440000  
net.ipv4.tcp_moderate_rcvbuf = 1  
net.ipv4.tcp_no_metrics_save = 1  
net.ipv4.tcp_notsent_lowat = 16384  
net.ipv4.tcp_rfc1337 = 1  
net.ipv4.tcp_rmem = 4096 87380 16777216  
net.ipv4.tcp_sack = 0  
net.ipv4.tcp_slow_start_after_idle = 0  
net.ipv4.tcp_synack_retries = 2  
net.ipv4.tcp_syncookies = 1  
net.ipv4.tcp_syn_retries = 2  
net.ipv4.tcp_timestamps = 0  
net.ipv4.tcp_tw_reuse = 1  
net.ipv4.tcp_window_scaling = 0  
net.ipv4.tcp_wmem = 4096 65536 16777216  
net.ipv6.conf.all.disable_ipv6 = 1  
net.ipv6.conf.default.disable_ipv6 = 1  
net.ipv6.conf.lo.disable_ipv6 = 1  
vm.dirty_background_ratio = 2  
vm.dirty_ratio = 60  
vm.max_map_count = 262144  
vm.overcommit_memory = 1  
vm.swappiness = 1
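One thing I'm not sure about in this context: as far as I understand, many of these keys (net.core.somaxconn and most of the net.ipv4.* TCP settings) are per network namespace, so the container gets the kernel defaults rather than the values from the host's /etc/sysctl.conf. If that is right, they have to be set on the container itself, e.g. in docker-compose (a sketch; the two keys are just examples taken from the list above):

services:
  externalnginx:
    sysctls:
      net.core.somaxconn: 65535
      net.ipv4.tcp_syncookies: 1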

This is my Nginx conf file:

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4000;
    use epoll;
    multi_accept on;
}

http {

  error_log /var/errors/externalNginx.http.error_1 error;
  lua_shared_dict auto_ssl 1m;
  lua_shared_dict auto_ssl_settings 64k;
  resolver 127.0.0.11;

  # Initial setup tasks.
  init_by_lua_block {
    auto_ssl = (require "resty.auto-ssl").new()

    -- Define a function to determine which SNI domains to automatically handle
    -- and register new certificates for. Defaults to not allowing any domains,
    -- so this must be configured.
    auto_ssl:set("allow_domain", function(domain)
      return true
    end)

    auto_ssl:init()
  }

  init_worker_by_lua_block {
    auto_ssl:init_worker()
  }
  limit_req_log_level warn;
  limit_req_zone $binary_remote_addr zone=video:10m rate=30r/s;  # was rate=10r/m; raised to 30r/s per address, which didn't make a difference, so the issue is probably not here?
  open_file_cache max=200000 inactive=20s;
  open_file_cache_valid 30s;
  open_file_cache_min_uses 2;
  open_file_cache_errors on;
  sendfile on;
  tcp_nodelay on;
  tcp_nopush on;
  reset_timedout_connection on;
  client_body_timeout 5s;
  client_header_timeout 5s;
  send_timeout 2;
  keepalive_timeout 30;
  keepalive_requests 100000;

  # internal server
  # HTTPS cdn api server
  server {
        listen 443 ssl http2;
        server_name my.server.com;
        error_log /var/errors/my.server.com error;

        # Dynamic handler for issuing or returning certs for SNI domains.
        ssl_certificate_by_lua_block {
          auto_ssl:ssl_certificate()
        }

        ssl_certificate /etc/resty-default-ssl/resty-auto-ssl-fallback-secondery.crt;
        ssl_certificate_key /etc/resty-default-ssl/resty-auto-ssl-fallback-secondery.key;


        location /HealthCheck {
                return 200 "Hello world!";
        }

  }


  # HTTP server
  server {
    listen 80;

    location /HealthCheck {
        return 200;
    }

    # Endpoint used for performing domain verification with Let's Encrypt.
    location /.well-known/acme-challenge/ {
      content_by_lua_block {
        auto_ssl:challenge_server()
      }
    }
  }

  # Internal server running on port 8999 for handling certificate tasks.
  server {
    listen 127.0.0.1:8999;

    # Increase the body buffer size, to ensure the internal POSTs can always
    # parse the full POST contents into memory.
    client_body_buffer_size 128k;
    client_max_body_size 128k;

    location / {
      content_by_lua_block {
        auto_ssl:hook_server()
      }
    }
  }
}
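One mismatch I noticed while re-reading this: as far as I know, nginx's listen backlog on Linux defaults to 511 regardless of net.core.somaxconn, unless it is raised explicitly on the listen directive, e.g. (untested sketch):

listen 443 ssl http2 backlog=65535;

I don't know whether the accept queue can even fill up at ~900 RPS, but it is one place where my sysctl tuning and the nginx config don't line up.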

This is my docker-compose file:

version: '3.4'

services:
  externalnginx:
    depends_on:
      - cdnnginx
    container_name: externalnginx
    hostname: externalnginx
    image: externalnginx:2.0
    ports:
      - 80:80
      - 443:443
    volumes:
      - type: bind
        source: ./externalNginx.conf
        target: /etc/nginx/nginx.conf  # target was missing in the original; the actual in-container path depends on the image
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '2'  # 2 of my 12 CPUs (the container actually uses much less than this for some reason); I also tested with 12, it didn't help much
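One experiment I'm considering: published ports go through docker-proxy / iptables NAT, which adds per-connection overhead. Host networking bypasses that path entirely; a sketch of the change (the ports mapping must be removed, since it has no effect with host networking):

services:
  externalnginx:
    network_mode: host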

All of this made me think that maybe this isn't working well because I run Nginx inside Docker, while the tuning I did is for Nginx running as a normal process on the host. I'm not sure what the configuration differences should be to extract maximum RPS. Can anyone here help with this?
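For reproducibility, a load test along these lines can be used to measure the RPS (wrk shown as an example; thread/connection counts and duration are illustrative):

wrk -t12 -c400 -d30s https://my.server.com/HealthCheck

Running the same test against nginx directly on the host and against the container should show how much of the gap comes from the Docker path.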
