Apache2 response times abysmal after enabling php-fpm module

I'm running Apache2 on a Debian 11 VPS. I've written an API, and I'm trying to stress test it via an external server using ApacheBench.
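For context, the benchmark invocation is roughly the following (hostname and path are placeholders; -n sets the request count, -c the concurrency, and -k asks for keep-alive):

ab -n 1000 -c 100 -k https://XXX.XXX.com/v1/module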

With mpm_prefork enabled and php8.0-fpm disabled (PHP served via mod_php), the 95th-percentile response time is around 30 ms. The output is as follows:

Server Software:        Apache/2.4.52
Server Hostname:        XXX.XXX.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        XXX.XXX.com

Document Path:          /v1/module
Document Length:        370 bytes

Concurrency Level:      100
Time taken for tests:   2.783 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    1000
Total transferred:      845001 bytes
HTML transferred:       370000 bytes
Requests per second:    359.28 [#/sec] (mean)
Time per request:       278.336 [ms] (mean)
Time per request:       2.783 [ms] (mean, across all concurrent requests)
Transfer rate:          296.48 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   14 157.2      0    2295
Processing:    17   26   3.3     25      44
Waiting:       17   26   3.3     25      44
Total:         17   40 157.8     25    2330

Percentage of the requests served within a certain time (ms)
  50%     25
  66%     26
  75%     27
  80%     28
  90%     30
  95%     33
  98%     41
  99%    282
 100%   2330 (longest request)

With mpm_prefork still enabled but PHP switched over to php8.0-fpm, the response times become ridiculous. The output is as follows:

Server Software:        Apache/2.4.52
Server Hostname:        XXX.XXX.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        XXX.XXX.com

Document Path:          /v1/module
Document Length:        370 bytes

Concurrency Level:      100
Time taken for tests:   12.595 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      788000 bytes
HTML transferred:       370000 bytes
Requests per second:    79.39 [#/sec] (mean)
Time per request:       1259.549 [ms] (mean)
Time per request:       12.595 [ms] (mean, across all concurrent requests)
Transfer rate:          61.10 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       91 1152 619.3   1108    4744
Processing:    22   46  39.0     32     850
Waiting:       21   43  27.4     31     491
Total:        121 1198 623.4   1146    4784

Percentage of the requests served within a certain time (ms)
  50%   1146
  66%   1298
  75%   1668
  80%   1796
  90%   1992
  95%   2053
  98%   2190
  99%   3017
 100%   4784 (longest request)

I'm running Apache/2.4.52 (Debian) and PHP 8.0.14 (cli). The goal is to move from mpm_prefork to mpm_event so the API can handle a large number of concurrent connections, but I can't do that with response times like these.
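For reference, the switch I have in mind would look roughly like this on Debian (module and conf names assumed from the stock apache2 and php8.0-fpm packages):

a2dismod php8.0 mpm_prefork
a2enmod mpm_event proxy_fcgi setenvif
a2enconf php8.0-fpm
systemctl restart apache2 php8.0-fpm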

How can I switch to php-fpm without ruining my response times? My php-fpm pool settings are as follows:

pm.max_children = 844
pm.start_servers = 16
pm.min_spare_servers = 8
pm.max_spare_servers = 16
pm.max_requests = 1000
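These live in the default pool file (assuming Debian's standard layout, /etc/php/8.0/fpm/pool.d/www.conf) and are applied with something like:

systemctl reload php8.0-fpm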
Comment: PHP run through `mod_php` and through `php-fpm` uses different configuration files; make sure the settings are the same in both cases.
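On Debian the two SAPIs read separate php.ini files, so a quick way to compare them (paths assume the stock PHP 8.0 packages) is:

diff /etc/php/8.0/apache2/php.ini /etc/php/8.0/fpm/php.ini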
