I create the following setup using docker containers
- Docker host: physical machine that runs the Docker containers.
- openvpn: container that runs openvpn and automatically connects to a VPN.
- nzbget: container running nzbget, web interface accessible at <docker-host-ip>:6789.
 
Facts:
- I can access all the containers using the docker host IP.
- The openvpn connection is successfully establishing a VPN connection.
- nzbget is up and running.
The second stage of my plan is to route the traffic of the nzbget container through the openvpn container. I achieved this by adding the following line to the docker-compose.yaml of the nzbget container:
...
network_mode: "container:openvpn"
...
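For completeness, here is roughly how the two services are wired together in my docker-compose.yaml (image names are placeholders, not my exact setup):

```yaml
services:
  openvpn:
    image: my/openvpn-client        # placeholder image name
    cap_add:
      - NET_ADMIN                   # needed to create tun0 and modify routes
    devices:
      - /dev/net/tun
    ports:
      # the port has to be published on the openvpn container,
      # since nzbget shares its network namespace
      - "6789:6789"

  nzbget:
    image: my/nzbget                # placeholder image name
    network_mode: "container:openvpn"
```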
After this, requesting ipinfo.io shows me the IP of the VPN. However, I also lose access to the nzbget web interface via <docker-host-ip>:6789 (connection timed out). When I (for testing purposes) don't let the openvpn container establish a VPN connection and restart both containers, I am able to connect to the nzbget interface using <docker-host-ip>:6789. So it seems like the VPN connection itself is preventing me from connecting to the nzbget container.
Now the question is: how can I configure openvpn in such a way that I can still connect to the nzbget web interface using <docker-host-ip>:6789?
Looking at the docker logs of the openvpn container, I noticed that some routes are added when the VPN connection is established:
...
2021-11-16 19:03:38 TUN/TAP device tun0 opened
2021-11-16 19:03:38 /sbin/ip link set dev tun0 up mtu 1500
2021-11-16 19:03:38 /sbin/ip link set dev tun0 up
2021-11-16 19:03:38 /sbin/ip addr add dev tun0 10.7.2.7/24
2021-11-16 19:03:38 /sbin/ip route add <vpn-ip-address>/32 via 172.18.0.1
2021-11-16 19:03:38 /sbin/ip route add 0.0.0.0/1 via 10.7.2.1
2021-11-16 19:03:38 /sbin/ip route add 128.0.0.0/1 via 10.7.2.1
...
I think misconfigured/conflicting IP routes are what is causing my issues: the 0.0.0.0/1 and 128.0.0.0/1 routes together cover the entire address space and, being more specific than the default route, win over it. So replies to connections coming in via eth0 are sent out over tun0 instead of back through 172.18.0.1, and the client never sees them. Here is some additional information.
Interfaces on the openvpn container:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/[65534] 
    inet 10.7.2.7/24 scope global tun0
       valid_lft forever preferred_lft forever
85: eth0@if86: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
List of IP routes when the VPN is active on the openvpn container:
root@cf64c3dd2846:/# ip route list
0.0.0.0/1 via 10.7.2.1 dev tun0 
default via 172.18.0.1 dev eth0 
10.7.2.0/24 dev tun0 scope link  src 10.7.2.7 
<vpn-ip-address> via 172.18.0.1 dev eth0 
128.0.0.0/1 via 10.7.2.1 dev tun0 
172.18.0.0/16 dev eth0 scope link  src 172.18.0.2 
List of IP routes when the VPN is not active on the openvpn container (in this situation I can access the nzbget web interface):
default via 172.18.0.1 dev eth0 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
Does anybody know of an IP route rule I can apply to reach the nzbget web interface while the VPN connection is active? For example, is there a way to route (only) traffic belonging to incoming connections on port 6789 "normally" (not over the VPN)?
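One direction I was considering (I'm not sure this is correct, so please point out mistakes): mark the reply packets leaving nzbget's web port with iptables and send them through a separate routing table that uses the docker bridge gateway. The table number 100 and the mark value 1 are arbitrary choices of mine; 172.18.0.1 and eth0 come from my route listing above. These would be run inside the openvpn container:

```shell
# Mark packets that are replies from the nzbget web interface (source port 6789)
iptables -t mangle -A OUTPUT -p tcp --sport 6789 -j MARK --set-mark 1

# Marked packets consult routing table 100 instead of the main table
ip rule add fwmark 1 table 100

# Table 100 routes everything via the docker bridge gateway, bypassing tun0
ip route add default via 172.18.0.1 dev eth0 table 100

# Loosen reverse-path filtering on eth0 so the kernel accepts the asymmetry
sysctl -w net.ipv4.conf.eth0.rp_filter=2
```

But I don't know whether this is the idiomatic way to solve this, or whether the VPN route pair can be handled more directly.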
Thanks for your help!