We are currently working with an academic network protocol that modifies and partly encrypts IPv6 packets and establishes circuits to allow sourceless routing.
We got the prototype running, and it works with IPv6 messages if we put the message payload directly into the IP packet payload (e.g. sending a "hello world").
We cannot, however, use well-established tools such as ping or iperf3: the messages reach the destination, but no replies are sent.
We are wondering whether we can benchmark some features of the prototype.
As far as I can see, it does not make sense to benchmark packet loss, as the protocol itself does not introduce reasons for packet loss other than a node on the route going offline.
It also does not really seem to make sense to measure data throughput, as this is determined by the link between the two parties.
The protocol itself also does not introduce jitter, because all messages are handled the same way, so this, again, would be a network-related attribute.
Latency is likewise mostly due to network-related factors, but what we could measure is the time the prototype needs to modify a message.
Currently, we are running it on VMs. It uses iptables rules to intercept packets and pass them to NFQUEUE, where a Python program modifies the packets.
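To illustrate what we mean by measuring the modification time: a minimal sketch of how we could time just the per-packet transformation, isolated from the kernel/userspace handoff. `modify_packet` is a placeholder for our actual nfqueue callback logic, not the real implementation:

```python
import statistics
import time

def modify_packet(payload: bytes) -> bytes:
    """Placeholder for the prototype's per-packet transformation
    (header rewriting / partial encryption); dummy work only."""
    return payload[::-1]

def benchmark_modification(payloads, repeats=200):
    """Time only the modification step and report per-packet
    latency statistics in seconds."""
    samples = []
    for payload in payloads:
        for _ in range(repeats):
            start = time.perf_counter()
            modify_packet(payload)
            samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "p99": statistics.quantiles(samples, n=100)[98],
        "max": max(samples),
    }

# Representative payload sizes (e.g. up to the IPv6 minimum MTU):
stats = benchmark_modification([b"x" * n for n in (64, 512, 1280)])
```

Running this inside the real callback (instead of the dummy function) would give us the processing-time distribution without the network noise.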
I proposed doing a theoretical analysis instead, where we calculate the additional bytes added on top of regular IPv6 packets, try to estimate the additional performance costs (how?), and try to narrow down which attacks are feasible and which are not, compared to regular IPv6.
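The byte-overhead part of that analysis reduces to simple arithmetic once the protocol's added header size is known. A sketch, where the 24-byte circuit header is purely a made-up example value:

```python
IPV6_HEADER = 40  # fixed IPv6 base header size in bytes

def overhead_ratio(extra_bytes: int, payload_bytes: int) -> float:
    """Fraction of each on-wire packet consumed by the protocol's
    added bytes, relative to a plain IPv6 packet carrying the
    same payload."""
    total = IPV6_HEADER + extra_bytes + payload_bytes
    return extra_bytes / total

# Hypothetical 24-byte circuit header on a 1280-byte payload:
ratio = overhead_ratio(24, 1280)  # ~0.018, i.e. under 2% overhead
```

Plotting this ratio across payload sizes would show how the relative overhead shrinks for larger packets, which is itself a presentable result.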
- What features make sense to benchmark?
- Apart from packet size and performance costs, what else could be theoretically analyzed?
P.S.: I hope this fits into this category, since it does not seem to fit into network engineering.