Score:0

How to determine if nginx/HTTPS encryption is my bottleneck?


I'm using a DigitalOcean droplet (similar to an EC2 instance) running Ubuntu 22.10 and a Node.js application. This may be a bad idea, but I'm also running nginx on the same host as the application, and it terminates the HTTPS connections. It all works mostly fine, with one caveat.

My most expensive call takes about 622 ms to compute the results but then about 6 seconds to send the data. Since the slowdown is on sending the response, I suspect the HTTPS encryption on this anemic droplet is the bottleneck.
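One quick way to separate compute time from transfer time is curl's -w timing variables: time_starttransfer covers everything up to the first response byte (TLS handshake plus the ~622 ms of computation), so the rest of time_total is the transfer itself. A sketch — the URL below is a placeholder for your slow endpoint:

```shell
# Hypothetical URL: substitute your slowest endpoint.
curl -so /dev/null https://example.com/expensive-call \
  -w 'connect: %{time_connect}s  tls done: %{time_appconnect}s  first byte: %{time_starttransfer}s  total: %{time_total}s  size: %{size_download} bytes\n'
```

If total minus first byte really is ~6 s, the cost is in streaming the body, not in computing it.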

The question I have is, how do I gather data on nginx performance to determine if that is the bottleneck? Thank you in advance!
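Since the question is specifically about gathering nginx-side data: nginx can log per-request timings. $request_time is the full time nginx spent on the request, including sending the encrypted response to the client, while $upstream_response_time is just how long the Node.js backend took. A sketch, assuming a standard proxy_pass setup (log format name and path are illustrative):

```nginx
# In the http {} block of nginx.conf:
log_format timing '$remote_addr "$request" status=$status bytes=$body_bytes_sent '
                  'request_time=$request_time upstream_time=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;
```

If upstream_time is ~0.6 s but request_time is ~6 s, the extra time is spent between nginx and the client (TLS plus network transfer); a slow or distant client shows up here too.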

user1686
How much data are you sending? Which cipher are you using for TLS? What CPU load does the system report? The only CPU-intensive part (the key exchange and RSA/EC signatures) has already been done when you established the connection – when the response is being sent, all that's left is AES-GCM or similar, and you can measure AES-GCM throughput using various tools. For a reasonable amount of data I'd be _highly_ surprised if it was the issue.
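On the comment's point about CPU load: if TLS encryption were the bottleneck, an nginx worker would be pinned near 100% CPU for the whole 6 seconds. A quick check while the slow request is in flight (ps is standard on Ubuntu; process names assumed to be nginx and node):

```shell
# Snapshot CPU usage of the nginx and node processes
ps -C nginx -C node -o pid,%cpu,%mem,comm
# Or sample once per second during the transfer:
# watch -n1 'ps -C nginx -o pid,%cpu,comm'
```

Low %cpu on the nginx workers during the transfer points at the network (or the client), not encryption.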
Score:1

The speed of the encryption can be tested with the openssl speed tool (see man openssl-speed):

SYNOPSIS

openssl speed [-help] [-engine id] [-elapsed] [-evp algo] [-decrypt] [-rand file...] [-writerand file] [-primes num] [-seconds num] [-bytes num] [algorithm...]

DESCRIPTION

This command is used to test the performance of cryptographic algorithms. To see the list of supported algorithms, use the list --digest-commands or list --cipher-commands command. The global CSPRNG is denoted by the rand algorithm name.

For example, this was run on a Debian 11 desktop with an i5-4570 CPU (released 10 years ago):

$ openssl speed aes
Doing aes-128 cbc for 3s on 16 size blocks: 42126692 aes-128 cbc's in 3.00s
...
Doing aes-256 cbc for 3s on 16384 size blocks: 31677 aes-256 cbc's in 3.00s
OpenSSL 1.1.1n  15 Mar 2022
built on: Fri May 26 21:30:44 2023 UTC
options:bn(64,64) rc4(16x,int) des(int) aes(partial) blowfish(ptr) 
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -ffile-prefix-map=/build/openssl-FSeIwm/openssl-1.1.1n=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
aes-128 cbc     224675.69k   234114.05k   234762.07k   236847.45k   236888.06k   237076.48k
aes-192 cbc     194343.19k   199041.24k   199621.38k   200813.57k   200278.02k   196515.16k
aes-256 cbc     167304.09k   171438.21k   172737.84k   172836.18k   172886.70k   172998.66k

Each core is able to encrypt about 170 MB/s with AES-256-CBC without hardware acceleration. As it is a 4-core CPU, I also ran a 4-thread test (openssl speed -multi 4 aes) and got 642 MB/s overall.

It's possible to test AES-128-GCM via the "EVP" interface, which uses the hardware-accelerated code paths:

$ openssl speed -multi 4 -evp aes-128-gcm
...
evp            1467308.76k  3688563.14k  7856414.29k 10757336.06k 12390427.31k 12442206.21k

This shows 12 GB/s using AES-NI hardware acceleration.

So even if your server were running on this computer, encryption wouldn't be the bottleneck.
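A rough sanity check along the same lines: plug an assumed response size into the single-core AES-256-CBC figure above. The 50 MB below is made up — substitute your real payload size:

```shell
# Hypothetical payload size in MB; 170 MB/s is the single-core
# AES-256-CBC rate measured above.
payload_mb=50
aes_mb_per_s=170
awk -v p="$payload_mb" -v r="$aes_mb_per_s" \
    'BEGIN { printf "encryption alone: %.0f ms\n", p / r * 1000 }'
# prints "encryption alone: 294 ms"
```

Even on this decade-old CPU, encrypting a 50 MB response costs well under a second, so a 6-second send almost certainly points at the network path or the client, not TLS.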
