As a first approximation: yes, a successful simulation like that will give you a good idea that your API won't immediately crumble under the designed load.
All other things being equal, the theory is that a single load generator node making X independent concurrent connections to your server (where X is less than 64k) and running 1 request over each connection is no different from X concurrent nodes each making a single connection to run their request. As long as there are sufficient resources (bandwidth, CPU, memory, etc.) on the nodes simulating the client load, using more nodes to simulate load does not necessarily mean that your results will better represent real-world load.
But by definition a simulation is only an approximation of reality, and passing your simulated test scenario(s) does not guarantee that your API will never have issues in the real world under actual load.
As far as I can see, loader.io is fairly straightforward in the tests it supports, and for example it does not simulate "bad" clients or sub-optimal connectivity scenarios where (a certain percentage of) the clients have high-latency and/or low-bandwidth connections. Many such clients will typically force connections between the client and server to remain open much longer, which may result in resource starvation on the server, for example.
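A quick sketch of what such a "bad" client looks like: it trickles its request one byte at a time, so the server-side connection (and whatever worker or slot is handling it) stays occupied for the whole slow exchange. The local `http.server` again stands in for your API; the 10 ms per-byte delay is an arbitrary stand-in for a high-latency link.

```python
# Sketch: a slow ("bad") client that drips its request byte by byte,
# holding the server connection open far longer than a healthy client.
# Many such clients at once can exhaust server connection/worker slots.
import http.server
import socket
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

request = b"GET / HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n"
start = time.monotonic()
with socket.create_connection(("127.0.0.1", server.server_port)) as sock:
    for byte in request:           # trickle the request one byte at a time
        sock.sendall(bytes([byte]))
        time.sleep(0.01)           # simulated high-latency / slow uplink
    chunks = []
    while True:                    # read until the server closes
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
reply = b"".join(chunks)
elapsed = time.monotonic() - start

server.shutdown()
# The single request succeeded, but the connection was pinned open for
# the entire slow send -- roughly len(request) * 0.01 seconds.
print(elapsed)
```

A load test made up only of fast, well-behaved clients never exercises this failure mode.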
(A completely different consideration is the quality and quantity of your test dataset and how well your load test simulates real-world usage. When API requests can generate large result sets, are you using pagination, and does your load test also exercise the expense/performance of pagination and of returning a small random subset out of sufficiently large results? There are many, many other considerations which make testing as much an art as a science.)
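On the pagination point, a trivial sketch: rather than hammering page 1 only, randomize the page each request so deep pages (which are often far more expensive server-side) are also paid for. The URL, parameter names (`page`, `per_page`), and dataset size below are all placeholder assumptions, not anything loader.io or your API defines.

```python
# Sketch: generate randomized paginated request URLs so a load test also
# hits deep pages, not just page 1. All names/values here are assumed
# placeholders for your API's own pagination scheme.
import random

TOTAL_ROWS = 100_000  # assumed size of the large result set
PER_PAGE = 50         # assumed page size

def random_page_url(base="https://api.example.com/items"):
    last_page = TOTAL_ROWS // PER_PAGE
    page = random.randint(1, last_page)  # deep pages included
    return f"{base}?page={page}&per_page={PER_PAGE}"

# Feed URLs like these to your load generator instead of a fixed one.
urls = [random_page_url() for _ in range(5)]
for u in urls:
    print(u)
```

If deep pagination is implemented with a large `OFFSET` in the database, this is exactly where a naive page-1-only test would report misleadingly good numbers.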