Score:0

After establishing a WebSocket tunnel, does NGINX continue to 'be in the loop'?


I have a server-side WebSocket application fronted by an Nginx reverse proxy, and all is working great. The WS app runs in a container, as does Nginx, and the two work together as a service.

Now I'm considering the scale-up rules for the WS app, which are more or less straightforward. But I'm curious about whether I'll also need to scale up the Nginx portion of the service. Connections are established at a relatively low rate, so the scale-up is really about maintaining many already-connected (i.e. long-lived) WS connections. I know I can test some of this myself with load tests, but I figured I'd also ask here: once Nginx reverse-proxies to the WS back-end (via the Upgrade & Connection headers) and the socket is connected between the client and my WS app, does Nginx play a role in the continued communication, or is Nginx now 'out of the loop'? That is, do future packets sent or received (in either direction) get read or handled in any way by the Nginx processes?

If not, then I can likely scale up the WS containers without needing to scale up the Nginx containers in lock-step.

Thanks for any insight!

Lex Li:
It heavily depends on the rules you write in the nginx config — redirect or simply rewrite. Unless redirected, HTTP clients never know the actual upstream server and must go through the nginx instance for all requests.
OP:
I know the initial request goes through Nginx, but I'm curious about the duplex communication _once the socket is established_, since there are no more HTTP requests being made.
Lex Li:
As I said, all requests go through the proxy as long as you configure it that way. Please study more from places like https://en.wikipedia.org/wiki/Reverse_proxy
OP:
Hey thanks for the friendly feedback :-) Once a WebSocket is established, there are no more HTTP requests. Here's a description of sockets for your future use: https://en.wikipedia.org/wiki/WebSocket Thanks again!
Lex Li:
A note for future readers, to clear all doubt, use a tool like Wireshark to capture HTTP/WS packets yourself and see how everything works under the hood.
Score:0

I think the answer, fundamentally, is: once Nginx completes the proxied handshake (via the protocol Upgrade header) and the socket is established, Nginx stays in the data path — it holds an open file descriptor to each end of the connection (client side and upstream side) and copies bytes between them — but it applies no further HTTP processing, acting essentially as a passthrough. This allows quite impressive scaling with only modest Nginx resources, as demonstrated here. That is particularly true for long-lived connections: CPU load should be negligible (since the rate of newly created sockets is low), and the memory cost of holding those connections open is small and scales predictably with the number of sockets.
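For reference, this is a minimal sketch of the kind of proxy config involved. The upstream name `ws_app`, the port `8080`, the `/ws/` path, and the timeout value are assumptions for illustration, not taken from the question; the `map`/`proxy_set_header` pattern is the standard Nginx way to forward the WebSocket Upgrade handshake:

```nginx
# Forward the client's Upgrade header; fall back to "close"
# for plain HTTP requests that don't ask for an upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream ws_app {
    server ws-app:8080;   # assumed container name/port
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://ws_app;
        proxy_http_version 1.1;                      # required for Upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 1h;                       # keep long-lived sockets from timing out
    }
}
```

After the handshake, Nginx simply relays frames in both directions over the two file descriptors it holds; `proxy_read_timeout` is the main knob to tune so idle long-lived connections aren't closed.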
