Log the Network Usage of a Linux Process?


I'm looking for a way to log the network traffic of a single Linux process (on Ubuntu, but will look at other flavours too). I'm after something like tcpdump but for a process rather than the whole system or a network interface. Is there such a thing?

The back-story here is that I'm looking to validate that our software (which is built using a handful of libraries) is NOT "phoning home" - be that to us as the software developer, or to any other developers (perhaps the library developers). My intent here is to be able to run my process via this tool, and have it log all network requests while we exercise our application as much as we can. We can then review the network logs to make sure nothing in there is unexpected.

I've looked at iftop, nethogs and iptraf, and of course tcpdump. All are fine for what they do, but they don't operate on a single process. I'm aware that there are some options if I run my process in a container first, but it would be simpler if I could avoid needing containers to do this.

Answer:

If you run the application as a dedicated user, you can use netfilter's owner match.

iptables -A OUTPUT -m owner --uid-owner 500 -j ULOG --ulog-nlgroup 1

Use --ulog-cprange if you don't want the whole packet.

You'll need to configure ulogd to receive the logs. I won't give a complete description of ulogd configuration but note that it should use the same netlink multicast group (--ulog-nlgroup) as the iptables command (1 in the example above). You'll probably want to use the ulogd_output_PCAP output plugin.
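To make the answer above concrete, here is a hedged sketch of the whole setup. The plugin stack syntax and option names are assumptions based on ulogd 2.x as packaged on Debian/Ubuntu, and uid 500 is just the example user from the rule above; check your distribution's `/etc/ulogd.conf` for the exact syntax.

```shell
# Append a ULOG -> PCAP stack to ulogd's config (syntax assumed from
# ulogd 2.x; verify plugin and key names against your ulogd.conf):
sudo tee -a /etc/ulogd.conf <<'EOF'
stack=ulog1:ULOG,base1:BASE,pcap1:PCAP

[ulog1]
nlgroup=1

[pcap1]
file="/var/log/ulogd.pcap"
sync=1
EOF

sudo systemctl restart ulogd2

# The rule from the answer: log every packet sent by uid 500.
# nlgroup above must match --ulog-nlgroup here.
sudo iptables -A OUTPUT -m owner --uid-owner 500 -j ULOG --ulog-nlgroup 1

# Review the capture later with any pcap tool, e.g.:
# tcpdump -nr /var/log/ulogd.pcap
```

Run the application under test as that user, exercise it, and then inspect the resulting pcap file.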

Comment from the asker:
Thanks for this idea - it looks like `-j ULOG` has been deprecated in favour of `-j NFLOG` (but still uses `ulogd`), and is generally quite fiddly and not terribly well documented (presumably it's not a very commonly required task!).
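For what this comment describes, a rough sketch of the NFLOG equivalent follows. Same idea, newer target: the group option is spelled `--nflog-group`, and ulogd reads from its NFLOG input plugin instead of ULOG. The stack line is an assumption based on ulogd 2.x conventions.

```shell
# NFLOG replacement for the deprecated ULOG target; uid 500 is still
# the example user from the answer above.
sudo iptables -A OUTPUT -m owner --uid-owner 500 -j NFLOG --nflog-group 1

# ulogd then needs an NFLOG-based stack rather than a ULOG one, e.g.
# (syntax assumed, check your ulogd.conf):
#   stack=log1:NFLOG,base1:BASE,pcap1:PCAP
#   [log1]
#   group=1        # must match --nflog-group
```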
Answer (from the asker):

I originally stated I didn't want to use containers, but I found the iptables/ulogd solution quite fiddly to get working. Since it may not just be me doing this in future, I eventually elected to use containers after all.

In my case, I wanted to test a UI and an API, which use nginx and mysql instances running on the host itself. Getting traffic from nginx into the container was pretty easy (using port mapping). Getting traffic from the UI container into the API container was also easy (using standard Docker networking, where the other container's port appears on localhost).

Getting from the API to mysql proved to be trickier. Docker really doesn't want to let containers talk to localhost on their host. A container can quite easily talk to the host's bridge address, though (172.17.0.1 by default, on the same docker0 bridge that assigns the 172.17.0.x addresses to containers). Making mysql listen on that interface instead of localhost completes the "circuit".
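The mysql change described above can be sketched as follows. The config path is the Debian/Ubuntu default and the bridge address is Docker's default gateway; both are assumptions to adjust for your system.

```shell
# Make mysqld listen on the docker0 bridge address (172.17.0.1 is the
# Docker default) instead of 127.0.0.1 only, so containers can reach it.
# Config path is the Debian/Ubuntu default; adjust for your distro.
sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf <<'EOF'
[mysqld]
bind-address = 172.17.0.1
EOF

sudo systemctl restart mysql
```

Remember to grant the mysql user access from the container subnet, since connections will no longer arrive from localhost.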

Once it's all working, it's easy to see all the Docker traffic by watching the Docker bridge interface: tcpdump -ni docker0.
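Put together, the container-based capture looks roughly like this. Image names and ports here are hypothetical placeholders, not the actual stack described above.

```shell
# Run the application under test in a container; its traffic then
# crosses the docker0 bridge, where it can be captured. The image name
# and ports are placeholders.
docker run -d --name api -p 8080:8080 my-api-image

# From inside the container, the host's services are reachable via the
# bridge gateway (172.17.0.1 by default) once they listen on it.

# Capture everything crossing the bridge while exercising the app:
sudo tcpdump -ni docker0 -w app-traffic.pcap
```

Reviewing app-traffic.pcap afterwards gives the per-process view the question asked for, since only the containerised process's traffic crosses that interface.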

It seems then, there are no quick and simple solutions to this problem, and only a couple of potential solutions.
