Score:0

Load test (single client vs thousands of clients)

dj flag

Hello Serverfault community,

I am currently working on a .NET Web API that uses MS SQL as its backend to process data and return straightforward results. To verify the API's performance and reliability, I've been running load tests with SendGrid's Loader.io.

During these load tests, I aim to handle around 3500 requests in parallel, and according to Loader.io, this seems achievable. The load tests, however, are performed with 3 or 4 clients (servers) simulating the concurrent requests.

My question is whether this load-testing methodology will produce the same result on my server as 3500 different clients each making their own request?

Score:1
in flag

As a first approximation: yes, such a successful simulation gives a good indication that your API won't immediately crumble under the designed load.

All other things being equal, the theory is that a single load-generator node making X independent concurrent connections to your server (where X is less than the roughly 64k ephemeral-port limit) and running one request over each connection is no different from X concurrent nodes each making a single connection to run their request. As long as the nodes simulating the client load have sufficient resources (bandwidth, CPU, memory, etc.), using more nodes to generate the load does not necessarily make your results more representative of real-world load.
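To make that concrete, here is a minimal sketch of a single generator node opening many independent concurrent connections, one request per connection. This is purely an illustration in Python with aiohttp, not how Loader.io works internally, and the URL and numbers are placeholders:

```python
# Sketch: one load-generator node simulating many concurrent clients.
# Hypothetical example; endpoint URL and request count are placeholders.
import asyncio
import aiohttp

URL = "https://api.example.com/endpoint"   # placeholder
CONCURRENCY = 3500                         # one connection per simulated client

async def one_client(session: aiohttp.ClientSession) -> int:
    async with session.get(URL) as resp:
        await resp.read()
        return resp.status

async def main() -> None:
    # force_close + limit=None: each simulated client gets its own connection
    connector = aiohttp.TCPConnector(limit=None, force_close=True)
    async with aiohttp.ClientSession(connector=connector) as session:
        results = await asyncio.gather(
            *(one_client(session) for _ in range(CONCURRENCY)),
            return_exceptions=True,
        )
    ok = sum(1 for r in results if r == 200)
    print(f"{ok}/{CONCURRENCY} requests returned 200")

if __name__ == "__main__":
    asyncio.run(main())
```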

But by definition a simulation is only an approximation of reality, and passing your simulated test scenario(s) does not guarantee that your API will never have issues in the real world under actual load.

As far as I can see, loader.io is fairly straightforward in what tests it supports: for example, it does not simulate "bad" clients or sub-optimal connectivity scenarios where (a certain percentage of) the clients have high-latency and/or low-bandwidth connections. Many such clients typically force connections between client and server to remain open for much longer, which can for example lead to resource starvation on the server.
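If you wanted to cover that gap yourself, a rough sketch of a "slow reader" client could look like the following. It is only an illustration with a placeholder URL; each client drains the response in small chunks with artificial delays, so the server has to keep the connection open much longer than for a well-behaved client:

```python
# Sketch: "slow reader" clients that mimic high-latency / low-bandwidth
# connections by reading the response body in tiny chunks with pauses.
import asyncio
import aiohttp

URL = "https://api.example.com/endpoint"   # placeholder

async def slow_client(session: aiohttp.ClientSession, delay: float = 0.5) -> None:
    async with session.get(URL) as resp:
        # Drain the body 1 KiB at a time, pausing between reads.
        while await resp.content.read(1024):
            await asyncio.sleep(delay)

async def main(n_slow: int = 200) -> None:
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(slow_client(session) for _ in range(n_slow)))

if __name__ == "__main__":
    asyncio.run(main())
```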

(A completely different consideration is the quality and quantity of your test dataset and how well your load test simulates real-world usage. When API requests can generate large result sets, are you using pagination, and does your load test also measure the expense/performance of paginating and returning a small random subset of a sufficiently large result set? There are many, many other considerations that make testing as much an art as a science.)
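As one example of that last point, a pagination-aware load script could pick a random page per request so the test also exercises deep-offset queries against a large dataset. The endpoint and the "page"/"pageSize" parameter names below are assumptions for illustration, not something from your API:

```python
# Sketch: exercise pagination under load by requesting random pages.
import asyncio
import random
import aiohttp

URL = "https://api.example.com/items"   # placeholder
MAX_PAGE = 10_000                       # assumes a sufficiently large dataset
PAGE_SIZE = 50

async def paged_request(session: aiohttp.ClientSession) -> int:
    params = {"page": str(random.randint(1, MAX_PAGE)), "pageSize": str(PAGE_SIZE)}
    async with session.get(URL, params=params) as resp:
        await resp.read()
        return resp.status

async def main(n_requests: int = 500) -> None:
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(*(paged_request(session) for _ in range(n_requests)))
    # Summary of status codes seen during the run
    print({s: statuses.count(s) for s in set(statuses)})

if __name__ == "__main__":
    asyncio.run(main())
```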
