I have some production code for a distributed system. Part of this code works by looking up (already-established TCP) connections in a hash table indexed by remote IP address. This code works correctly in a real cluster, with each server running on a unique host.
I am writing a test harness for this code; I initially set up the cluster by having each server run on INADDR_LOOPBACK + a unique port. This works with most of the codebase, except the code path I mention above that looks up connections by IP address -- what ends up happening is that the lookup just finds the first connection in the table (since all open connections are to the same IP address) and sends the message to the wrong "server".
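To make the lookup concrete, here is a heavily simplified sketch of the idea (this is not the actual code; the names and structure are illustrative only):

```c
/* Simplified illustration (not the real code): connections are kept in a
 * table keyed by the peer's IPv4 address, as reported by accept()/getpeername(). */
#include <netinet/in.h>
#include <stddef.h>

struct conn {
    struct in_addr peer;   /* remote IP address, the lookup key */
    int            fd;
    struct conn   *next;
};

/* Returns the first connection whose peer address matches.
 * In the loopback test, every entry has peer == 127.0.0.1, so this
 * always returns the head of the chain, regardless of which server
 * the message was actually meant for. */
static struct conn *lookup_by_ip(struct conn *table, struct in_addr addr)
{
    for (struct conn *c = table; c != NULL; c = c->next)
        if (c->peer.s_addr == addr.s_addr)
            return c;
    return NULL;
}
```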
I learned that localhost has a whole range of IPs, 127.0.0.0 to 127.255.255.255. I modified my test setup so that each server is assigned a unique IP address from this range. I expected this to behave like a real cluster, since, from each server's point of view, each peer would then be connecting from a different IP address.
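Roughly, the test harness now brings up each server like this (simplified sketch; the per-server address string, e.g. "127.0.0.2", "127.0.0.3", ..., is the only thing that differs between servers):

```c
/* Sketch of the test setup: server i listens on its own 127.0.0.x address. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_on(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(port);
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1 ||
        bind(fd, (struct sockaddr *)&sa, sizeof sa) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```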
Sadly, TCP connect/accept do not seem to offer a way to make this distinction -- i.e., used normally, the server-side call to accept always populates the peer sockaddr with INADDR_LOOPBACK (127.0.0.1), regardless of which 127.0.0.x address the client connected to.
Is there a way for connect to signal, and accept to recognize, that the connection came from a different IP, even though all of these addresses map to the same loopback interface?
Please say in the comments if I can clarify the question. I am unable to share the actual code for intellectual-property reasons.
P.S.: An alternative approach is to plumb the client-side port into the request and do the lookup by IP + port. However, this involves modifying the code heavily, which carries risks I'd prefer to avoid. Additionally, the client port is ephemeral, which I expect to cause test flakiness, though I haven't tested this approach. It is my fallback in case I cannot get the unique-IP approach to work.
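For completeness, the fallback would change the lookup key to something like this (sketch only, not implemented, names illustrative):

```c
/* Sketch of the fallback: key the connection table by (remote IP, remote port)
 * instead of IP alone. The port would have to be plumbed into every request,
 * and since the client port is ephemeral it changes on every reconnect. */
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>

struct conn_key {
    struct in_addr peer_ip;    /* still 127.0.0.1 for every test peer    */
    uint16_t       peer_port;  /* ephemeral port, disambiguates the peer */
};

static bool conn_key_eq(struct conn_key a, struct conn_key b)
{
    return a.peer_ip.s_addr == b.peer_ip.s_addr && a.peer_port == b.peer_port;
}
```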