I have a Mininet topology that consists of two hosts and two OVS switches: h1-eth0 is connected to s1-eth1, h2-eth0 is connected to s2-eth2, and s1-eth2 is connected to s2-eth1. h1 acts as the client and h2 as the server. In the flows, ARP packets are flooded. In s1, packets coming from h1 get an MPLS label pushed (label 55); in s2, the MPLS label is popped and the packet is forwarded to h2. Traffic leaving h2 has no MPLS handling; it is transported as plain IP packets. The flow tables look like this:
s1 Flow Table:
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=1, ip, in_port="s1-eth1" actions=push_mpls:0x8847,set_field:55->mpls_label,output:"s1-eth2"
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=1, ip, in_port="s1-eth2" actions=output:"s1-eth1"
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=0, arp actions=FLOOD
s2 Flow Table:
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=1, ip, in_port="s2-eth2" actions=output:"s2-eth1"
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=1, mpls, in_port="s2-eth1",mpls_label=55,mpls_bos=1 actions=pop_mpls:0x0800,output:"s2-eth2"
cookie=0x0, duration=0, table=0, n_packets=0, n_bytes=0, priority=0, arp actions=FLOOD
When I test this setup with iperf with h2 as the client and h1 as the server, I get normal bandwidth (around 20 Gbit/s). But when h2 is the server and h1 is the client, I get very limited bandwidth (around 400 Kbit/s). I really want to know what causes this, but I have no idea.
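For anyone who wants to reproduce this, the setup can be recreated roughly as follows. This is a sketch: I'm assuming the stock `mn` linear topology here, and its port numbering may not match mine exactly (check with `ovs-ofctl show s1`/`s2` and adjust the `in_port`/`output` names accordingly).

```shell
# Start a 2-switch linear topology with no controller: h1-s1-s2-h2
sudo mn --topo linear,2 --controller none

# In another terminal, install the flows shown above.
# -O OpenFlow13 because set_field/push_mpls need OpenFlow 1.2+.
sudo ovs-ofctl -O OpenFlow13 add-flow s1 \
  'priority=1,ip,in_port="s1-eth1",actions=push_mpls:0x8847,set_field:55->mpls_label,output:"s1-eth2"'
sudo ovs-ofctl -O OpenFlow13 add-flow s1 \
  'priority=1,ip,in_port="s1-eth2",actions=output:"s1-eth1"'
sudo ovs-ofctl -O OpenFlow13 add-flow s1 'priority=0,arp,actions=FLOOD'

sudo ovs-ofctl -O OpenFlow13 add-flow s2 \
  'priority=1,ip,in_port="s2-eth2",actions=output:"s2-eth1"'
sudo ovs-ofctl -O OpenFlow13 add-flow s2 \
  'priority=1,mpls,in_port="s2-eth1",mpls_label=55,mpls_bos=1,actions=pop_mpls:0x0800,output:"s2-eth2"'
sudo ovs-ofctl -O OpenFlow13 add-flow s2 'priority=0,arp,actions=FLOOD'

# From the Mininet CLI, the fast direction:
#   mininet> h1 iperf -s &
#   mininet> h2 iperf -c h1
# and the slow direction:
#   mininet> h2 iperf -s &
#   mininet> h1 iperf -c h2
```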
Any help is appreciated.