Is the outgoing connection from an OpenVPN client to a LAN behind an OpenVPN server forwarded by the server kernel?

I've observed somewhat strange behavior that I can't quite understand. I set up an OpenVPN connection as shown in the diagram below (a TUN, client-to-client setup):

[diagram: my OpenVPN connection]

What I'm trying to trace is the route a ping takes in this scenario:

 from client: 192.168.200.102 to LAN: 10.198.0.16

In general it's no surprise that this ping succeeds, but to test my understanding, I changed the iptables settings on the server to

-P FORWARD DROP

and then even

 net.ipv4.ip_forward = 0
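Applied roughly like this (standard iptables/sysctl invocations; a sketch):

    # default-drop all forwarded traffic
    sudo iptables -P FORWARD DROP

    # disable IPv4 forwarding in the kernel (takes effect immediately)
    sudo sysctl -w net.ipv4.ip_forward=0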

With these settings the traffic should never reach its destination. Yet the ping still succeeds, while apparently never reaching the LAN interface: I can't see the traffic (watching with the tcpdump packet analyzer) arriving at the LAN interface eth0 (10.198.0.16). Rather, it seems that the tun interface is answering the traffic itself, as if the LAN IP were bound to the tun interface; see below:

    sudo tcpdump -i tun0
    16:34:21.391381 IP 192.168.200.102 > 10.198.0.16: ICMP echo request, id 14, seq 1885, length 64
    16:34:21.391514 IP 10.198.0.16 > 192.168.200.102: ICMP echo reply, id 14, seq 1885, length 64

What is happening here? As far as I understand it, a request coming from the client arrives on the server's tun interface and is eventually FORWARDED by the kernel to eth0, am I right? Would that normally be visible by running sudo tcpdump -i tun0 or sudo tcpdump -i eth0?

The reason I'm so picky about this is that I consider it a security risk if there is no way to implement rules that prevent clients from accessing the LAN behind the server. What am I missing here? Is there an OpenVPN process that itself forwards packets to the eth0 interface (as intended for a client-to-client configuration)?

To help you help me with this problem, I've attached some diagnostics below.

For the Server

  1. ip addr

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether b8:27:eb:5c:a6:e6 brd ff:ff:ff:ff:ff:ff
        inet 10.198.0.16/24 brd 10.198.0.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::ba27:ebff:fe5c:a6e6/64 scope link
           valid_lft forever preferred_lft forever
    3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether b8:27:eb:09:f3:b3 brd ff:ff:ff:ff:ff:ff
    4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
        link/none
        inet 192.168.200.1/24 scope global tun0
           valid_lft forever preferred_lft forever
        inet6 fe80::87cd:fedd:92fc:cde/64 scope link stable-privacy
           valid_lft forever preferred_lft forever

  2. ip route

    default via 10.198.0.1 dev eth0 proto static 
    10.198.0.0/24 dev eth0 proto kernel scope link src 10.198.0.16 
    192.168.200.0/24 dev tun0 proto kernel scope link src 192.168.200.1 
    192.168.178.0/24 via 192.168.200.1 dev tun0 scope link 
    
  3. server openvpn.conf

    tls-server
    mode server
    dev tun
    local 10.198.0.16
    proto tcp-server
    port 1234
    user openvpn
    group openvpn
    ca /etc/openvpn/cacert.pem
    cert /etc/openvpn/servercert.pem
    key /etc/openvpn/serverkey
    dh /etc/openvpn/dh2048.pem
    ifconfig-pool 192.168.200.2 192.168.200.103 255.255.255.0
    client-config-dir /etc/openvpn/ccd
    ifconfig 192.168.200.1 255.255.255.0
    keepalive 10 120
    comp-lzo
    client-to-client
    push "topology subnet"
    topology "subnet"
    log /var/log/openvpn.log
    

For the Client

  1. ip addr

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 38:af:d7:a0:52:ec brd ff:ff:ff:ff:ff:ff
    3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:28:f8:8d:1c:6f brd ff:ff:ff:ff:ff:ff
        inet 192.168.178.79/24 brd 192.168.178.255 scope global dynamic noprefixroute wlp2s0
           valid_lft 859868sec preferred_lft 859868sec
        inet6 2a0a:a540:d54:0:bd79:eb10:5e26:548a/64 scope global temporary dynamic 
           valid_lft 7190sec preferred_lft 3590sec
        inet6 2a0a:a540:d54:0:6086:b044:dff:2694/64 scope global dynamic mngtmpaddr noprefixroute 
           valid_lft 7190sec preferred_lft 3590sec
        inet6 fe80::ad5c:6e18:87fa:dff4/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
        link/none 
        inet 192.168.200.102/24 brd 192.168.200.255 scope global tun0
           valid_lft forever preferred_lft forever
        inet6 fe80::5dfc:6b3a:3c4d:e9a4/64 scope link stable-privacy 
           valid_lft forever preferred_lft forever
    
  2. ip route

     default via 192.168.178.1 dev wlp2s0 proto dhcp metric 600 
     169.254.0.0/16 dev wlp2s0 scope link metric 1000 
     10.198.0.0/24 via 192.168.200.1 dev tun0 
     192.168.200.0/24 dev tun0 proto kernel scope link src 192.168.200.102
     192.168.178.0/24 dev wlp2s0 proto kernel scope link src 192.168.178.79 metric 600 
    
  3. client openvpn.conf

     dev tun
     client
     nobind
     remote 11.22.33.44
     proto tcp
     port 1234
     ca /etc/openvpn/cacert.pem
     cert /etc/openvpn/user_cert.pem
     key /etc/openvpn/user
     comp-lzo
     verb 3
     keepalive 10 120
     log /var/log/openvpn.log
    
  4. ccd for client

    iroute 192.168.178.0 255.255.255.0
    
Answer by Nikita Kipriyanov:

The traffic between the VPN and the rest of the network of course goes through tun0. For this traffic the FORWARD chain is consulted as usual, so you can control who may connect where. If ip_forward is not enabled, the traffic will not be forwarded.
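For example, a minimal sketch of FORWARD rules that confine VPN clients, using the interface and subnet names from the question (adapt as needed):

    # let the LAN initiate connections into the VPN and get replies back
    iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
    iptables -A FORWARD -i tun0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    # block VPN clients from initiating anything towards the LAN
    iptables -A FORWARD -i tun0 -o eth0 -j DROP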

When client-to-client is not used, the traffic between clients takes the same path: it appears in the server OS from the tun0 interface, is routed properly using the OS routing table, and traverses the firewall; the only difference is that the routing decision finds the destination behind tun0, so the packet is egressed through it again.
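This also means that without client-to-client you can filter traffic between clients in the server's firewall; a sketch:

    # drop packets that would hairpin from one VPN client back to another
    iptables -A FORWARD -i tun0 -o tun0 -j DROP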

This is not very efficient, because the OpenVPN process lives in user space while tun0 lives in kernel space, which costs at least two context switches for each packet.

When client-to-client is used, however, packets between clients don't appear on tun0, the server's firewall FORWARD chain is not consulted, and the ip_forward setting does not influence their forwarding. The OpenVPN process itself becomes a router, with its own routing table, independent of the hosting OS. You can see it with the status command of the management interface, or dump it into the status file. You can control routes within this "router" with the iroute directive (I believe it stands for "internal route"), which is only valid in the client's client-config-dir file or a script-generated dynamic configuration.
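One way to inspect this internal routing table is the status file; a sketch, assuming you add a `status` directive to the server config:

    # in the server config: rewrite the status file every 10 seconds
    status /var/log/openvpn-status.log 10

The `ROUTING TABLE` section of that file lists the virtual addresses and iroute networks that the OpenVPN process itself routes to each connected client:

    cat /var/log/openvpn-status.log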

The easiest approach is not to think of the VPN as something special. Once the tunnel is established, forget about it; it is now just an additional regular interface in each computer (server and clients), with all those interfaces connected to some regular, simple router. Then reason about the usual routing and firewalling.


I finally noticed that you ping the address of the VPN server itself, albeit one assigned to another interface. This packet is not going to be forwarded anyway, because its destination is the server itself, so ip_forward does not influence how it is processed; it traverses the INPUT firewall chain, and the reply traverses OUTPUT (i.e. not the FORWARD chain, as they would if they weren't destined to the system itself). The packet will enter the system from tun0 (and will be seen there), but you won't see it on eth0, because it is not going to be sent away; it is processed locally. The same is true for the replies.
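So if the goal is to block this particular access, an INPUT rule is needed rather than a FORWARD one; a sketch with the addresses from the question:

    # drop packets arriving from the VPN that target the server's LAN-side address
    iptables -A INPUT -i tun0 -d 10.198.0.16 -j DROP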

It doesn't matter (to the routing-related code) where on the system the address is assigned (i.e. to which interface), or which address of the system you use to access it. What matters is whether it belongs to the system or not.
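You can ask the routing code directly: on the server, the lookup for its own address resolves through the local table no matter which interface holds it (output abbreviated):

    $ ip route get 10.198.0.16
    local 10.198.0.16 dev lo src 10.198.0.16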


A related security issue is that some people think that if they bind a service to some IP address assigned to some interface, they cut off access to this service through other interfaces. This is wrong. If systems living behind other interfaces have a route to the IP the service is bound to, they will still be able to access it. Binding is not a correct way to secure a service; a proper firewall setup is.
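For example, to genuinely restrict a service (SSH here, as a hypothetical stand-in) to the LAN side, filter on the ingress interface instead of relying on the bind address; a sketch:

    # accept SSH only when it arrives via eth0, drop it from everywhere else
    iptables -A INPUT -p tcp --dport 22 -i eth0 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP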

Another related issue is that some people use ping -I <local-address> <address-to-ping> or even ping -I <interface> <address-to-ping> and think they directly select which interface the pings will go out of. Again, this is wrong. This way you only select which source address the pings will have, not the interface used to send them; the interface is selected by the routing code, strictly according to the routing table and based solely on the destination address of the packet (I presume no VRF or RPDB setup was done, but that is advanced stuff, and people who set it up know about this feature anyway).
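The routing code will tell you what it would do for any given source address; for instance, on the client from the question the output would look roughly like this:

    $ ip route get 10.198.0.16 from 192.168.178.79
    10.198.0.16 from 192.168.178.79 via 192.168.200.1 dev tun0

The destination alone selects tun0, even though the source address belongs to wlp2s0.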

Koala: Thanks for the answer. I've heard about that, and I understand that for efficiency the connection between VPN clients should go through the VPN process rather than the system kernel. But in my example the connection ends up going, on the server, from tun0 to eth0 (10.198.0.16)! So how would one connect to eth0 (10.198.0.16) from tun0 on the server without going through the system kernel?
Nikita Kipriyanov: As explained in the first paragraph, the only way is through the system kernel. You should see the traffic both on `tun0` and `eth0`. If you don't see the traffic or can't block it, you're probably doing something wrong in that part.
Koala: Okay, thanks for the help!
Koala: But you know what, I've now checked: on the tun0 interface I can of course see the traffic coming from the client, but I don't see any traffic on the interface that holds the LAN IP. There must be more to it; why would the traffic succeed but not be visible on that interface? And when I disable the client-to-client configuration, I suddenly do see the pings on the LAN IP interface. How should I understand that? Wouldn't that mean there might be no kernel forwarding going on? Is something like that possible at all?
Koala: Here is a snapshot: `sudo tcpdump dst 10.198.0.16 -i tun0` gives `tcpdump: verbose output suppressed, use -v[v]... for full protocol decode listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes 16:29:09.951767 IP 192.168.200.102 > ubuntu: ICMP echo request, id 14, seq 1574, length 64`
Koala: Here is another snapshot, without the "dst" option: `16:34:21.391381 IP 192.168.200.102 > ubuntu: ICMP echo request, id 14, seq 1885, length 64 16:34:21.391514 IP ubuntu > 192.168.200.102: ICMP echo reply, id 14, seq 1885, length 64`
Koala: Can you see it now? It is as if tun0 is answering the ping itself!? (where "ubuntu" of course is 10.198.0.16)
Nikita Kipriyanov: It looks like you set up some NAT. Don't hide private network addresses: it doesn't identify you and it doesn't help security, but it does make it much harder to understand what you are talking about and to help. If you want such a detailed answer, please include [in the question](https://serverfault.com/posts/1117753/edit): `ip addr` and `ip route` run on *all three systems involved* (you can mask out public IPs), `iptables-save` on the middle system, and the OpenVPN server and client configs with keys, certificates and `remote` masked.
Koala: Hey, thanks for the answer. Of course I simply pasted in the result of tcpdump, and as written above, "ubuntu" stands for the LAN IP. All in all my problem is just one thing: why can I ping the LAN address of the OpenVPN server without the system noticing that kernel forwarding is taking place? Is there some mechanism in OpenVPN by which it can be bypassed?
Nikita Kipriyanov: Ah, so you pinged the VPN server's address on its other interface? Let me reiterate: had you provided all the information I asked for from the beginning, instead of hiding it under the cryptic "ubuntu" label, I'd have answered this right away. Because *it is not being forwarded*. It does not go through the `FORWARD` chain; rather, it is routed to localhost and therefore traverses the `INPUT` chain (and the replies traverse `OUTPUT`, consequently). It makes no difference which address of the system you access or which interfaces those addresses are assigned to; all of them are equal. What matters is which system you access.
Koala: Hey, I rewrote my question and edited it as you asked. Please let me know what you think; feedback would be very nice! Also see the new graphic!