We have our own rack at Leaseweb in Amsterdam.
We do HTTP load balancing (via Cloudflare) across 3 Windows Server 2019 IIS servers:
- Server 1: Bare-metal Supermicro server. Runs IIS, MySQL 8, and Redis.
- Server 2: VM on a Dell server. Runs IIS.
- Server 3: VM on a Dell server (exact copy of Server 2). Runs IIS.
The files are served locally in all cases (via replication).
The problem is that the TTFB, as measured locally on each server, is higher on Server 2 and Server 3 (the VMs).
Running (multiple) tests LOCALLY with Chrome:
Server 1:
- Waiting (TTFB): 269ms
- Waiting (TTFB): 255ms
- Waiting (TTFB): 253ms
Server 2:
- Waiting (TTFB): 379ms
- Waiting (TTFB): 376ms
- Waiting (TTFB): 369ms
Server 3:
- Waiting (TTFB): 374ms
- Waiting (TTFB): 381ms
- Waiting (TTFB): 378ms
As you can see, Server 1 has a significantly lower TTFB (roughly 110-120 ms less).
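To take Chrome out of the equation, I could also measure TTFB with a small script run locally on each server. A sketch in Python (the URL is a placeholder for whatever page is being tested):

```python
# Measure TTFB repeatedly against a local page, independent of the browser.
import time
import urllib.request

def measure_ttfb(url: str, runs: int = 3) -> list[float]:
    """Time from sending the request until the first response byte arrives, in ms."""
    results = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read(1)  # first body byte received -> time to first byte
            results.append((time.perf_counter() - start) * 1000)
    return results

# Example (placeholder URL -- run the same command on each of the 3 servers):
#   for ms in measure_ttfb("http://localhost/"):
#       print(f"TTFB: {ms:.0f} ms")
```

Running the identical script on all three servers should confirm whether the gap Chrome reports is real and repeatable.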
In terms of CPU, servers 2 and 3 are actually faster:
PHP BENCHMARK SCRIPT
- Server 1: Total time: 4.022 sec.
- Server 2: Total time: 2.866 sec.
- Server 3: Total time: 2.936 sec.
The I/O is about the same on all servers; they all have new SSDs behind hardware RAID controllers.
I did test moving Redis to one of the VMs to figure out whether the extra latency comes from Redis, but it made no difference.
My assumption is that the extra TTFB latency comes from MySQL, which runs on Server 1: running MySQL on the same server as IIS produces a significantly lower TTFB even though that server's CPU is slower.
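My rough math on that theory: every MySQL query from a VM pays at least one network round trip to Server 1, so (queries per page) x (round trip) adds directly to TTFB. Both numbers below are assumptions, not measurements:

```python
# Back-of-the-envelope check: can per-query network latency explain the gap?
# Both inputs are hypothetical -- substitute real values once measured.
queries_per_page = 100   # hypothetical: count via the app or MySQL's general log
rtt_ms = 1.2             # hypothetical: VM -> Server 1 network round trip

extra_ttfb_ms = queries_per_page * rtt_ms
print(f"Estimated extra TTFB: {extra_ttfb_ms:.0f} ms")  # prints 120 ms
```

With numbers in that ballpark, network round trips alone would account for the observed ~120 ms difference.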
Is there a workaround for this?
Actually, the better question is: how can I identify the cause of the extra latency?
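One measurement I could start with is the raw TCP round trip from each VM to the MySQL box, since every query pays at least that. A sketch (the address is a placeholder for Server 1):

```python
# Measure TCP connect latency (a proxy for one network round trip)
# from this server to the MySQL host.
import socket
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

# Example (placeholder address for Server 1's MySQL):
#   print(f"RTT to MySQL: {tcp_rtt_ms('192.0.2.10', 3306):.2f} ms")
```

Comparing this number on Server 1 (loopback) versus the VMs, multiplied by the per-page query count, would show how much of the TTFB gap is plain network latency versus something else (e.g., the virtual NIC or driver stack).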