We have deployed OTRS 6.0 Community Edition on a VMware cloud.
The configuration is as follows: two servers, an application server and a database server. Both run Ubuntu 20.04 with Apache 2.4 (mpm_prefork); the database server runs PostgreSQL 12.9. The application server has 24 GB RAM, the database server 4 GB.
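With mpm_prefork and mod_perl, each Apache child process carries a full Perl interpreter, so an uncapped worker count can saturate the CPU long before memory runs out. A minimal sketch of the relevant prefork limits is below; the specific numbers are assumptions to illustrate the idea, not values tested against this installation:

```apache
# /etc/apache2/mods-available/mpm_prefork.conf (sketch; tune to your load)
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    # Cap concurrent mod_perl children so ~50 agents cannot spawn
    # more interpreters than the 16 cores can usefully run.
    MaxRequestWorkers       60
    # Recycle children periodically to limit Perl memory growth.
    MaxConnectionsPerChild 1000
</IfModule>
```

A rough sizing rule is MaxRequestWorkers ≈ (RAM available to Apache) / (resident size of one mod_perl child), then checked against CPU saturation under load.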
As recommended, we moved article storage from the database to the filesystem and placed the cache and session data on a RAM disk.
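For reference, the RAM-disk part can be done with tmpfs mounts over the OTRS cache and session directories. This is a sketch assuming a default install under /opt/otrs and filesystem-based sessions; the paths, size, and ownership are assumptions that must match the actual setup:

```
# /etc/fstab (sketch; assumes OTRS lives in /opt/otrs and runs as otrs:www-data)
tmpfs  /opt/otrs/var/tmp       tmpfs  size=2g,mode=0770,uid=otrs,gid=www-data  0  0
tmpfs  /opt/otrs/var/sessions  tmpfs  size=512m,mode=0770,uid=otrs,gid=www-data  0  0
```

Note that tmpfs contents are lost on reboot, which is fine for cache data but means active sessions will be dropped.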
When we ran the load test (~50 real agents), we found that CPU load on the server climbs sharply (there is free memory and the system does not swap). As a result, we had to increase the application server to 16 CPU cores and the database server to 4. Even then, a siege simulation of 100 users drove the CPUs of both servers to nearly 100%, although without request errors.
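For reproducibility, a siege run along these lines matches the 100-user scenario described above. The URL list file, delay, and duration here are illustrative assumptions, not the parameters of the original test; the script only executes siege if it is actually installed:

```shell
#!/bin/sh
# Hypothetical 100-user siege run; otrs_urls.txt (list of agent URLs),
# the 5 s think-time delay, and the 10-minute duration are assumptions.
CMD="siege -c 100 -d 5 -t 10M -f otrs_urls.txt"
echo "$CMD"
# Only run if siege is present on this host.
command -v siege >/dev/null 2>&1 && $CMD || true
```

Using a realistic URL list (ticket views, searches, dashboard) matters here: hammering a single cheap URL understates the CPU cost of real agent traffic.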
With 50 real users working, response time grows by 1-2 s compared to an idle system, which is still more or less acceptable (the system currently holds about 70,000 tickets). CPU load is around 50-60% on both servers.
Question: is OTRS really this resource-hungry, or can it be optimized somehow?
The real cost of renting this much cloud capacity is turning out to be too high...