Score:0

Can Load balancers do balancing of multiple requests received on the same TCP connection?

br flag

(Below is a hypothetical scenario; this question came to my mind when I found out about the complexities of gRPC load balancing and how it's inefficient due to long-lived TCP connections.)

Imagine an HTTP client that maintains a single long-lived TCP connection for making HTTP/1.1 requests to an API service sitting behind HAProxy. The service consists of multiple redundant upstream servers, and I want to balance all the requests arriving on that one TCP connection across the different upstreams. Is something like this possible?

I understand that the client could open multiple connections to the LB, which would help the cause.

But I'm wondering if it's possible to do this directly with some HAProxy configuration. If not, why is such a feature missing? Is it due to some networking/HTTP protocol limitation, or was it never needed because there are other workarounds?

Let's assume TLS is terminated at HAProxy so it can do L7 routing. Also, HAProxy is just an example; are any other LBs/proxies such as Envoy or Nginx capable of this?
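To make the scenario concrete, here is a small sketch (the handler and ports are illustrative, and the local server is a stand-in for the service behind the LB) of an HTTP/1.1 client sending several requests over one long-lived TCP connection:

```python
# Sketch: an HTTP/1.1 client issuing multiple requests over ONE
# long-lived TCP connection -- the request stream the LB would
# need to fan out across back-ends.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the connection alive

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Stand-in for the API service; in the real setup this would be HAProxy.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first_sock = conn.sock            # socket created for the first request
conn.getresponse().read()

conn.request("GET", "/")          # second request on the same connection
reused = conn.sock is first_sock  # keep-alive reused the same socket
conn.getresponse().read()
conn.close()
server.shutdown()
```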

Score:1
la flag

Good question.

With a layer 7 HTTP(S) load balancer, you often get a "reverse proxy" style load balancer. In that case there are (at least) two independent connections: the TCP connection with the client is terminated on the reverse proxy, and the reverse proxy establishes and maintains its own connection (pools) to the back-end servers.

The client exchanges HTTP(S) protocol messages with the reverse proxy, and the reverse proxy subsequently exchanges its own HTTP(S) protocol messages with the back-end servers.

In such a setup there is a priori no reason why subsequent HTTP(S) requests from the same client over the same TCP connection can't be load-balanced over multiple back-ends.

Be warned that special considerations may apply when you have to deal with one of the many other protocols that get encapsulated in HTTP(S). And regardless, often the preferred configuration is some form of session persistence/stickiness ensuring that subsequent requests from the same client do go to the same back-end server.
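If stickiness is what you want instead, cookie-based persistence is one common form in HAProxy; a sketch (backend name, server names, and addresses are hypothetical):

```
backend be_api
    balance roundrobin
    cookie SRVID insert indirect nocache   # pin each client to one server
    server app1 10.0.0.11:8080 cookie app1
    server app2 10.0.0.12:8080 cookie app2
```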

Everything depends, of course, on how you configure and tune your load balancer ...

https://docs.haproxy.org/2.7/configuration.html#4

In layer 7 mode, HAProxy analyzes the
protocol, and can interact with it by allowing, blocking, switching, adding,
modifying, or removing arbitrary contents in requests or responses, based on
arbitrary criteria.

In HTTP mode, the processing applied to requests and responses flowing over
a connection depends on the combination of the frontend's HTTP options and
the backend's. HAProxy supports 3 connection modes :

  - KAL : keep alive ("option http-keep-alive") which is the default mode : all
    requests and responses are processed, and connections remain open but idle
    between responses and new requests.

  - SCL: server close ("option http-server-close") : the server-facing
    connection is closed after the end of the response is received, but the
    client-facing connection remains open.

  - CLO: close ("option httpclose"): the connection is closed after the end of
    the response and "Connection: close" appended in both directions.

The effective mode that will be applied to a connection passing through a
frontend and a backend can be determined by both proxy modes according to the
following matrix, 

...
etc. etc.
...

and many more complicated options.
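Pulling the relevant options together, a minimal haproxy.cfg sketch (server names, addresses, and certificate path are hypothetical) that keeps client connections alive while balancing per request across two back-ends might look like:

```
defaults
    mode http
    option http-keep-alive        # the default "KAL" mode quoted above
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_api
    bind :443 ssl crt /etc/haproxy/site.pem   # TLS terminated here -> L7 routing
    default_backend be_api

backend be_api
    balance roundrobin            # each request picked independently
    server app1 10.0.0.11:8080
    server app2 10.0.0.12:8080
```

With this, successive requests arriving on one client connection can be dispatched round-robin to app1 and app2.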

ns94
br flag
You are right. I was able to test this with HAProxy, and it seems to be the default behavior in http mode. (My previous tests with HAProxy long ago were probably not configured or done right, hence the confusion.)

