Question (Score:0)

WireGuard Port-forwarding from a Client to the Server's Host Network

I'm trying to forward a port from a WireGuard client to the host network of the server.

I tried to do it with iptables, but I always get "unreachable" as the reply.

Could my configuration be the problem?

Thanks!


Test Connection:

root@wiretest3:~# curl -I 10.7.0.2:6060
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Sun, 18 Jul 2021 10:37:38 GMT
Content-Type: text/html
Content-Length: 988
Last-Modified: Sat, 17 Jul 2021 10:07:05 GMT
Connection: keep-alive
ETag: "60f2abc9-3dc"
Accept-Ranges: bytes

root@wiretest3:~# curl -I 192.168.1.180:6060
curl: (28) Failed to connect to 192.168.1.180 port 6060: Connection timed out
root@wiretest3:~# curl -I 127.0.0.1:6060
curl: (7) Failed to connect to 127.0.0.1 port 6060: Connection refused

Server Config:

Host: 192.168.1.183, WireGuard address: 10.7.0.1

root@wiretest3:~# cat /etc/wireguard/wg0.conf
# Do not alter the commented lines
# They are used by wireguard-install
# ENDPOINT wireguard.demo.net

[Interface]
Address = 10.7.0.1/24
PrivateKey = QAOETAJYMK3PcDhN/y+xFJKcJetm4...........
ListenPort = 51823

# BEGIN_PEER client
[Peer]
PublicKey = YxM7cwbmBm7VIyNcRdDBhtiEwFWL........
PresharedKey = W9Y0qCku0Fv1uFiMpy5ImStbs+.........
AllowedIPs =  10.7.0.2/32, 192.168.1.183/32
# END_PEER client

ip a:

root@wiretest3:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:f5:1e:38:32:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.183/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2cf5:1eff:fe38:3206/64 scope link 
       valid_lft forever preferred_lft forever
3: wg0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1420 qdisc mq state UNKNOWN group default qlen 500
    link/none 
    inet 10.7.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever
    inet6 fe80::6613:2cc4:bb7d:6bd4/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

IPtables Rules:

    iptables -P FORWARD DROP;
    iptables -A FORWARD -i eth0 -j ACCEPT;
    iptables -t nat -A PREROUTING -p tcp --dport 6060:6060 -j DNAT --to-destination 10.7.0.2;
    iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE;

IPtables: (iptables-save)

root@wiretest3:~# iptables-save
# Generated by iptables-save v1.8.7 on Sun Jul 18 13:17:28 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p udp -m udp --dport 51823 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.7.0.0/24 -j ACCEPT
-A FORWARD -i eth0 -j ACCEPT
COMMIT
# Completed on Sun Jul 18 13:17:28 2021
# Generated by iptables-save v1.8.7 on Sun Jul 18 13:17:28 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p tcp -m tcp --dport 6060 -j DNAT --to-destination 10.7.0.2
-A POSTROUTING -s 10.7.0.0/24 ! -d 10.7.0.0/24 -j SNAT --to-source 192.168.1.183
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
# Completed on Sun Jul 18 13:17:28 2021
root@wiretest3:~# 

IPtables: iptables -L -n -t nat (now)

root@wiretest3:~# sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:6060 to:10.7.0.2

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
SNAT       all  --  10.7.0.0/24         !10.7.0.0/24          to:192.168.1.183
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0 

Client Config:

root@wiredocker:/etc/wireguard# cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.7.0.2/24
DNS = 8.8.8.8, 8.8.4.4
PrivateKey = GAF31cqwu2YSWQPdiSvlWie2Pma.........

[Peer]
PublicKey = 3VMnaI8JvoXZ6DthLcDy5MnVmNq..............
PresharedKey = W9Y0qCku0Fv1uFiMpy5ImStbs+...............
AllowedIPs = 0.0.0.0/0, ::/0, 192.168.1.0/24
Endpoint = wireguard.demo.net:51823
PersistentKeepalive = 25

ip a:

root@wiredocker:/etc/wireguard# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:d3:5b:64:b4:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.178.178/24 brd 192.168.178.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::74d3:5bff:fe64:b4f0/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:bb:9b:28:90 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bbff:fe9b:2890/64 scope link 
       valid_lft forever preferred_lft forever
10: veth508c767@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ea:cd:96:6e:33:0b brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::e8cd:96ff:fe6e:330b/64 scope link 
       valid_lft forever preferred_lft forever
15: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.7.0.2/24 scope global wg0
       valid_lft forever preferred_lft forever
Comments:

OP: @MichaelHampton added, sorry.

Tom Yan: Always use `iptables-save` to share your rules.

Tom Yan: I suggest you leave table `filter` for now. Better to get `nat` working as you desire first.

OP: @TomYan the `iptables-save` output is added now; what about table `filter`?
Answer (Score:0)

It turns out that doing DNAT like that for 127.0.0.1 appears to be infeasible (at least in the OUTPUT case), probably because 127.0.0.0/8 addresses are of scope host, or for similar reasons.

However, the following nftables ruleset should help you achieve what you want:

table ip rewrite {
        # Stateless rewrite for traffic the container itself sends to 127.0.0.1:80:
        # change the destination to the WireGuard peer and the source to the wg0
        # address. The "route" chain type triggers a new route lookup after the rewrite.
        chain unloop {
                type route hook output priority filter; policy accept;
                ip daddr 127.0.0.1 tcp dport 80 ip daddr set 10.7.0.2 ip saddr set 10.7.0.1
        }

        # Rewrite the peer's replies back to 127.0.0.1 so the local socket
        # (e.g. curl) recognizes them.
        chain reloop {
                type filter hook input priority 101; policy accept;
                ip saddr 10.7.0.2 tcp sport 80 ip saddr set 127.0.0.1 ip daddr set 127.0.0.1
        }
}
table ip realnat {
        # Ordinary (conntrack-based) DNAT for traffic addressed to 192.168.1.183:80.
        chain dest {
                type nat hook output priority filter; policy accept;
                ip daddr 192.168.1.183 tcp dport 80 dnat to 10.7.0.2
        }

        # SNAT to the wg0 address so the peer sends its replies back over the tunnel.
        chain source {
                type nat hook postrouting priority srcnat; policy accept;
                ip daddr 10.7.0.2 tcp dport 80 snat to 10.7.0.1
        }
}
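
One way to load and check the ruleset (a minimal sketch; the file name rules.txt is just an example, matching the comments below, and the port 80 in the rules may need adapting to 6060 for the nginx instance tested above):

    nft -f rules.txt          # load the ruleset
    nft list ruleset          # confirm the tables and chains are in place
    curl -I 127.0.0.1:80      # now rewritten towards 10.7.0.2 (if it listens on that port)
    curl -I 192.168.1.183:80  # handled by the realnat table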

Instead of doing "proper" destination NAT, you do a sort of "untracked" rewrite of the destination (and source) address. With the help of a type route chain, a new route lookup is performed after the rewrite. Additionally, you need the reply traffic to have its source and destination addresses changed back to 127.0.0.1 so that curl can recognize it.

With hook input, it should (i.e. NOT TESTED) avoid unwanted rewrites of traffic that is not destined for the host itself (i.e. replies to forwarded traffic). On the other hand, priority 101 (i.e. 1 higher than srcnat / all standard priorities) avoids unwanted rewrites of replies that respond to requests that have been properly NAT'd.

As you can see, for the 192.168.1.183 case, a second table that does normal NAT'ing follows the one handling the 127.0.0.1 special case.

Note that this ruleset is only for curling from inside the container (or at most, its host; I'm not familiar with containers, and as far as I know there can be different networking approaches for them). If you, for example, need the container to forward for some other hosts in 192.168.1.0/24, you'll additionally need the same dnat rule in a chain of type nat hook prerouting priority dstnat. IP forwarding will need to be enabled and allowed as well. And as I said, I'm NOT sure whether the rewriting tricks above for 127.0.0.1 will conflict with that.
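
For illustration, that additional prerouting chain might look something like the following (an untested sketch to add to the ruleset file; the chain name dest_pre is arbitrary, and the port would again need adapting to whatever you actually forward):

    # Extra chain in the realnat table, so hosts other than the container
    # itself can reach the forwarded port via 192.168.1.183
    table ip realnat {
            chain dest_pre {
                    type nat hook prerouting priority dstnat; policy accept;
                    ip daddr 192.168.1.183 tcp dport 80 dnat to 10.7.0.2
            }
    }

with IP forwarding enabled, e.g.:

    sysctl -w net.ipv4.ip_forward=1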

Comments:

OP: `iptables-restore-translate -f rules.txt` gives me "iptables-translate-restore: line 1 failed"

Tom Yan: `nft -f rules.txt`

OP: That runs fine!! How can I use this for more ports, e.g. 80, 6060, 6061, 6062? Do I need these rules per port, or can I do this for multiple ports?

OP: When I try to reach 192.168.1.183:6060 from another PC on the same network, it is again unreachable :/

Tom Yan: Please read the answer more carefully. I've already mentioned what needs to be done additionally for that case. Adapt/extend the ruleset for further needs. Note that chains cannot have the same name in the same table. Also, before you read a new ruleset file with `nft -f`, run `nft flush ruleset` to clear the old one. You can make the rules apply to more ports as well; IIRC you can use something like `{80, 6060-6062}`. Try to find/read some documentation on nftables; skimming through `man nft` should help.
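
For example, the anonymous-set syntax mentioned above could be applied to the rewrite rules roughly like this (an untested adaptation of the answer's ruleset; the port list is taken from the comment above):

    # in chain unloop: match several ports in one rule with an anonymous set
    # (a range is allowed inside the set)
    ip daddr 127.0.0.1 tcp dport { 80, 6060-6062 } ip daddr set 10.7.0.2 ip saddr set 10.7.0.1
    # in chain reloop:
    ip saddr 10.7.0.2 tcp sport { 80, 6060-6062 } ip saddr set 127.0.0.1 ip daddr set 127.0.0.1
    # the dnat and snat rules in table realnat can take the same set in place
    # of the single port

After editing the file, reload it with `nft flush ruleset` followed by `nft -f rules.txt`, as noted above.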