10000 clients/second?
What order of magnitude of requests do you actually expect to get? Stack Exchange, the entire network, peaks at maybe 5,000 requests per second for ~1.3 billion page views per month. A direct comparison is difficult, but I assume what you are doing is smaller than that.
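If you are not sure, measure rather than guess. As a rough sketch, assuming an nginx or Apache access log in the common/combined format at /var/log/nginx/access.log (adjust the path and field position to your setup), something like this shows the busiest seconds already in the log:

    # Count requests per second and list the busiest seconds.
    # Field 4 of the combined log format is "[day/month/year:HH:MM:SS",
    # so cutting on ":" leaves just the HH:MM:SS timestamp.
    awk '{print $4}' /var/log/nginx/access.log \
        | cut -d: -f2-4 \
        | uniq -c \
        | sort -rn \
        | head

That gives you an observed peak to plan around, instead of a round number.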
the available capacity of the RAM is not really used.
You are correct that much of your memory is not used at this point in time. 50683 MB free is a lot, both as an absolute number and as 78% of the 64306 MB total. Whether to treat this as wasteful, as generous allocation for growth, or as a one-size-fits-most 64 GB server build is your judgement call, as part of capacity planning.
Sixty (?) web server processes plus some other odds and ends are no big deal for 64 GB of memory. Notice the RES of about 150 MB each. While in theory that could sum to roughly 9,000 MB, memory accounting is always more complicated than the simple assumptions people make. Linux allocates physical pages lazily and only for unique data; shared libraries and copy-on-write pages are counted once, so dozens of copies of the same task "compress" well. That is especially true of computational benchmark workloads, which can drive CPU usage high with only a small working set.
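To see what those workers really cost, sum their proportional set size (PSS) rather than RES; PSS divides shared pages among the processes sharing them. A minimal sketch, assuming the workers are php-fpm processes and a kernel new enough (4.14+) to expose /proc/&lt;pid&gt;/smaps_rollup; run as root:

    # Total PSS across all php-fpm workers, in MB.
    for pid in $(pgrep php-fpm); do
        awk '/^Pss:/ {print $2}' /proc/$pid/smaps_rollup
    done | awk '{kb += $1} END {printf "Total PSS: %.0f MB\n", kb/1024}'

The total is usually well below a naive "RES times process count", for exactly the sharing reasons above.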
With plenty of free memory but poor response time from the application, there is definitely some other factor limiting performance. Finding it requires a methodical approach to examining every aspect of the system: anything from PHP tuning parameters, to other resources (network?), to application concurrency issues.
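One common ceiling worth ruling out early is PHP-FPM worker exhaustion: requests queue when every worker is busy, even while CPU and memory look idle. As a sketch, with paths that assume a Debian/Ubuntu style layout and the default "www" pool (adjust for your distribution and pool names):

    # Has the pool ever hit its worker limit?
    grep -i "max_children" /var/log/php*-fpm.log
    # Current process manager settings for the pool.
    grep -E "^pm" /etc/php/*/fpm/pool.d/www.conf

If the pool's pm.status_path status page is enabled, it also shows the live listen queue and active worker counts, which makes this kind of saturation obvious.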
As a practical matter, on Linux, install debug symbols and run perf top.
Knowing which functions are on CPU can help analyze what is going on deep in user or kernel code. Ideally you also have something like an APM tool that can profile code.
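For example, on a Debian or Ubuntu style box the steps look roughly like this; package names, and especially the debug-symbol packages for PHP itself, differ per distribution:

    # perf itself (the linux-tools naming is an Ubuntu/Debian convention).
    apt-get install linux-tools-common linux-tools-$(uname -r)
    # Live view of the hottest functions, with call graphs.
    perf top -g
    # Or record 30 seconds from one busy worker for later analysis.
    perf record -g -p <pid-of-a-busy-worker> -- sleep 30
    perf report

Even without full symbols, seeing whether the time goes to the kernel, to the PHP runtime, or to waiting on something else narrows the search considerably.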