
UDP server high availability: allowing responses from a different source IP


My system consists of an NGINX instance configured as a load balancer, waiting for UDP datagrams.

The client opens a UDP socket, which is assigned a random source port, sends a request, and waits for a response.
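The client side described above can be sketched as follows. This is a minimal illustration, not code from the post: the echo responder is a hypothetical stand-in for NGINX on localhost, and in a real deployment the client would target NGINX_IP, NGINX_PORT.

```python
import socket
import threading

# Hypothetical stand-in for NGINX: a one-shot UDP echo on localhost.
echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))

def serve_once():
    data, addr = echo.recvfrom(1024)
    echo.sendto(data, addr)  # reply to the client's ephemeral port

threading.Thread(target=serve_once, daemon=True).start()

# Client: no explicit bind, so the OS assigns an ephemeral (random)
# source port on the first send -- the RANDOM_SOURCE_PORT in the question.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"request", echo.getsockname())
source_port = client.getsockname()[1]
reply, addr = client.recvfrom(1024)
```

The key point is that the client never chooses the source port; the response must come back to whatever ephemeral port the OS picked.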

Say the request datagram's source/destination is SOURCE_IP:RANDOM_SOURCE_PORT -> NGINX_IP:NGINX_PORT.

NGINX routes the datagram to one of the nodes but, by design, in order to implement HA, the other server may send the response. Keep in mind that the two nodes are synchronized through a cache that stores the source IP and port (NGINX's, since it acts as a proxy).
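The NGINX side of this setup can be sketched with the stream module's UDP proxying. This is a minimal sketch, not the poster's actual config; all addresses and the upstream name are placeholder assumptions:

```nginx
# Minimal sketch of a UDP load balancer (placeholder addresses).
stream {
    upstream app_nodes {
        server 10.0.0.11:5000;   # SERVER1
        server 10.0.0.12:5000;   # SERVER2
    }

    server {
        listen 5000 udp;         # NGINX_PORT
        proxy_pass app_nodes;
        proxy_responses 1;       # expect one datagram back per request
        proxy_timeout 5s;
    }
}
```

Note that `proxy_responses` only controls how many datagrams NGINX waits for; it still expects them from the upstream it sent the request to.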

Now the request datagram received by the first server is NGINX_IP:NGINX_RANDOM_SOURCE_PORT -> SERVER1_IP:SERVER_PORT.

The response datagram is SERVER2_IP:SERVER_PORT -> NGINX_IP:NGINX_RANDOM_SOURCE_PORT.

NGINX doesn't seem to route the message back to the client.
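The dropped response is consistent with how connected UDP sockets behave: a socket associated with one peer address silently discards datagrams arriving from any other source. The sketch below (my illustration, not from the post) demonstrates this with two local sockets standing in for SERVER1 and SERVER2:

```python
import socket

# Two "servers" on different ports stand in for SERVER1 and SERVER2
# (different source addresses from the proxy's point of view).
srv1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv1.bind(("127.0.0.1", 0))
srv2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv2.bind(("127.0.0.1", 0))

# The "proxy" socket connects to srv1, as NGINX does to the chosen upstream.
proxy = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proxy.settimeout(0.5)
proxy.connect(srv1.getsockname())
proxy.send(b"request")

req, proxy_addr = srv1.recvfrom(1024)

# srv2 replies to the proxy's address, like SERVER2 answering a request
# that SERVER1 received.
srv2.sendto(b"reply from srv2", proxy_addr)

try:
    proxy.recv(1024)
    got_reply = True
except socket.timeout:
    # The kernel dropped the datagram: its source does not match the peer.
    got_reply = False
```

Under this model, NGINX's upstream socket simply never sees SERVER2's reply, regardless of the destination port matching.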

Can I configure NGINX to route the response UDP datagram back to the client, ignoring the response's source IP and relying only on the destination port (NGINX_RANDOM_SOURCE_PORT)? Or should I put another component between NGINX and the application nodes? What architectural concept am I missing?

