The traffic between the VPN and the rest of the network of course goes through `tun0`. For this traffic, the `FORWARD` chain is consulted as usual, and you can control who can connect where. If `ip_forward` is not enabled, the traffic will not be forwarded.
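As a sketch (the interface names and the `10.8.0.0/24` VPN subnet are placeholders for whatever your setup uses), enabling forwarding and controlling it in the `FORWARD` chain could look like:

```shell
# Enable routing between interfaces (persist it in /etc/sysctl.conf).
sysctl -w net.ipv4.ip_forward=1

# Let VPN clients reach the LAN behind eth0, and let replies come back.
iptables -A FORWARD -i tun0 -o eth0 -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```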
When `client-to-client` is not used, the traffic between clients takes the same path: it enters the server OS from the `tun0` interface, is routed using the OS routing table, and traverses the firewall; the only difference is that the OS decides the destination is behind `tun0`, so the packet is sent back out through that same interface.
This is not very efficient, because the OpenVPN process lives in user space while `tun0` lives in kernel space, so each packet incurs at least two context switches.
When `client-to-client` is used, however, packets between clients don't appear on `tun0`, the server's `FORWARD` firewall chain is not consulted, and the `ip_forward` setting does not influence their forwarding. The OpenVPN process itself becomes a router, with its own routing table, independent of the hosting OS. You can see that table with the `status` command of the management interface, or dump it into the status file. You can control routes within this "router" with the `iroute` directive (I believe it stands for "internal route"), which is only valid in a client's `client-config-dir` file or in script-generated dynamic configuration.
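For instance, to make a subnet behind a particular client reachable, you typically pair a kernel `route` in the server config with an `iroute` in that client's `client-config-dir` file (the client name `laptop1` and the subnet `192.168.50.0/24` below are made-up placeholders):

```
# server.conf: tell the server OS kernel to hand this subnet to OpenVPN
route 192.168.50.0 255.255.255.0

# client-config-dir/laptop1: tell OpenVPN's internal "router"
# which client this subnet lives behind
iroute 192.168.50.0 255.255.255.0
```

Without the `iroute`, OpenVPN's internal router doesn't know which client owns the subnet and will drop the packets even though the kernel route points at `tun0`.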
The easiest approach is not to think of the VPN as something special. Once the tunnel is established, forget about it: it is now just an additional regular interface on each computer (server and clients), with all those interfaces connected to some regular, simple router. Then reason about routing and firewalling as usual.
I finally noticed that you ping an address of the VPN server itself, albeit one assigned to another interface. This packet is not going to be forwarded anyway, because its destination is the server itself, so `ip_forward` doesn't influence how it is processed. It traverses the `INPUT` firewall chain, and the reply traverses `OUTPUT` (i.e. not the `FORWARD` chain, as they would if they weren't destined to the system itself). The packet will enter the system from `tun0` (and will be seen there), but you won't see it on `eth0`, because it is not going to be sent away; it will be processed locally. The same is true for the replies.
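So if you want to permit or log such pings, the rules belong in `INPUT` and `OUTPUT`, not `FORWARD`. A minimal sketch (interface names are placeholders):

```shell
# Pings addressed to any of the server's own addresses arrive via INPUT...
iptables -A INPUT -i tun0 -p icmp --icmp-type echo-request -j ACCEPT

# ...and the echo replies leave via OUTPUT.
iptables -A OUTPUT -o tun0 -p icmp --icmp-type echo-reply -j ACCEPT
```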
It doesn't matter (to the routing-related code) where on the system the address is assigned (which interface), or which address of the system you use to access it. What matters is whether it belongs to the system or not.
The related security issue is that some people think that if they bind a service to some IP address assigned to some interface, they cut off access to this service through other interfaces. This is wrong. If other systems living behind other interfaces have a route to the IP where the service is bound, they will still be able to access it. Binding is not a correct way to secure the service; a proper firewall setup is.
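For example, to actually restrict a service (say, SSH) to the VPN side, filter by ingress interface in `INPUT` instead of relying on the bind address (a sketch; the interface and port are placeholders):

```shell
# Accept SSH only when it arrives from the VPN interface.
iptables -A INPUT -p tcp --dport 22 -i tun0 -j ACCEPT

# Reject it from everywhere else, regardless of which of the
# server's addresses the client targeted.
iptables -A INPUT -p tcp --dport 22 -j REJECT
```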
Another related issue is that some people use `ping -I <local-address> <address-to-ping>`, or even `ping -I <interface> <address-to-ping>`, and think they are directly selecting which interface the pings will go out of. Again, this is wrong. This way you only select which source address the pings will have, not the interface that sends them; the interface is selected by the routing code strictly according to the routing table, based solely on the destination address of the packet (I presume no VRF or RPDB setup was done, but that is advanced stuff, and people who set it up know about this feature anyway).