Score:0

Nextcloud Web App hosted on Nginx has incredible slow TLS handshake


I'm self-hosting a Nextcloud instance and have kept it updated for years, always very happy with it. I don't use Docker; it runs on bare metal on a Debian 11 Bullseye system. TLS certificates come from Let's Encrypt, and the web server is NGINX. The hardware is decent: 16 GB RAM, a dual-core Xeon, SSD.

I noticed that the first connection attempt is always very slow. Subsequent requests are fast, but after a few minutes of inactivity it is slow again.

I could reproduce this behavior with curl:

$ curl -v https://cloud.example.org
*   Trying 2001:....:443...
*   Trying 192.168.170.11:443...
* Connected to cloud.example.org (192.168.170.11) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=cloud.example.org
*  start date: Nov 26 16:41:45 2022 GMT
*  expire date: Feb 24 16:41:44 2023 GMT
*  subjectAltName: host "cloud.example.org" matched cert's "cloud.example.org"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0x55f65608ee80)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/2
> Host: cloud.example.org
> user-agent: curl/7.81.0
> accept: */*
> 
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):

LONG WAITING TIME HERE (40+ seconds)

* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 302 
< server: nginx
< date: Mon, 09 Jan 2023 15:02:09 GMT
< content-type: text/html; charset=UTF-8
< location: https://cloud.example.org/login
< expires: Thu, 19 Nov 1981 08:52:00 GMT
< cache-control: no-store, no-cache, must-revalidate
< pragma: no-cache
< set-cookie: oc_sessionPassphrase=xxxx; path=/; secure; HttpOnly; SameSite=Lax
< set-cookie: oca19nuvojkz=0a58ikqc0mvt2cchvemee08vd5; path=/; secure; HttpOnly; SameSite=Lax
< content-security-policy: default-src 'self'; script-src 'self' 'nonce-d0dqRGNyd3ZaL1BycTh3SE1tRmw2VnArdnQvU3ZtRTlaeDlrQ0VpYnk4cz06dVYrNkt2bE9Fb0dlenI5Q0F3NVd1ejRjL1l5LytBeGtCRmNKZXcvUHNvZz0='; style-src 'self' 'unsafe-inline'; frame-src *; img-src * data: blob:; font-src 'self' data:; media-src *; connect-src *; object-src 'none'; base-uri 'self';
< set-cookie: __Host-nc_sameSiteCookielax=true; path=/; httponly;secure; expires=Fri, 31-Dec-2100 23:59:59 GMT; SameSite=lax
< set-cookie: __Host-nc_sameSiteCookiestrict=true; path=/; httponly;secure; expires=Fri, 31-Dec-2100 23:59:59 GMT; SameSite=strict
< strict-transport-security: max-age=15768000; includeSubDomains; preload;
< referrer-policy: no-referrer
< x-content-type-options: nosniff
< x-download-options: noopen
< x-frame-options: SAMEORIGIN
< x-permitted-cross-domain-policies: none
< x-robots-tag: none
< x-xss-protection: 1; mode=block
< 
* Connection #0 to host cloud.example.org left intact

So, as you can see, it waits for many seconds on the TLS handshake. What could cause this?

I'm running the latest version of Nextcloud, 24, but this problem has existed for four to six months now. I got no response on the Nextcloud forums, so I now consider this an NGINX / TLS problem.
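A quick way to break down where the time goes is curl's --write-out timers, which report the TCP connect, the TLS handshake, and the time to first response byte separately (a sketch using the same placeholder hostname as above):

$ curl -sS -o /dev/null \
    -w 'tcp connect:    %{time_connect}s\nTLS handshake:  %{time_appconnect}s\nfirst byte:     %{time_starttransfer}s\ntotal:          %{time_total}s\n' \
    https://cloud.example.org

Comparing time_appconnect (handshake complete) with time_starttransfer (first byte of the response) shows which phase the 40-plus seconds belong to.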

Nikita Kipriyanov:
Does it have enough entropy?

Comment:
Try `tcpdump` and check the packet timestamps.

Powerriegel:
I hoped there would be an easier answer than dumping the traffic. I have never worked with packet dumps before.

Powerriegel:
Entropy should be sufficient: `cat /proc/sys/kernel/random/entropy_avail` returns 256.
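A minimal capture along the lines of that suggestion, assuming the TLS traffic can be captured on any interface and filtering only on the HTTPS port:

$ sudo tcpdump -i any -nn -ttt 'tcp port 443'

The -ttt option prints the time delta between consecutive packets, which makes a 40-second gap easy to spot.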
Score:0

I finally found the reason in the NGINX error log: there were many errors regarding SSL and ciphers. It seems I had restricted the ciphers too much, so clients needed a lot of time for the handshake because most of my server's ciphers were not accepted by clients.

It's possible that this was caused by an NGINX update, too.

I used Mozilla's SSL Configuration Generator to update my config. Now it's quite fast.
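For reference, the TLS part of an nginx server block in the style of the generator's "intermediate" profile looks roughly like the sketch below; the exact cipher list depends on the generator version and on the installed nginx/OpenSSL, and the certificate paths are placeholders.

server {
    listen 443 ssl http2;
    server_name cloud.example.org;

    # Placeholder certificate paths (Let's Encrypt layout assumed)
    ssl_certificate     /etc/letsencrypt/live/cloud.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.org/privkey.pem;

    # Intermediate profile: wide client compatibility, no legacy protocols
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # Session resumption settings as suggested by the generator
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
}

Letting the client pick from a broad, modern cipher list (ssl_prefer_server_ciphers off) avoids the kind of mismatch the error log complained about.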
