Score:1

Netplan routing in Ubuntu 22.04 gone bonkers


Sorry if this is super long. I have a remote PC running Ubuntu 22.04 Server with two services on it, SSH and xrdp, which I use to connect to a remote office over a Check Point VPN. Everything works well, but to keep connectivity to the local network (192.168.50.0/24), some commands must be entered before the VPN connection starts:

sudo ip rule add table 128 from 192.168.50.215 
sudo ip route add table 128 to 192.168.50.0/24 dev ens33
sudo ip route add table 128 default via 192.168.50.1
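As a workaround sketch (my own assumption, not part of the original setup): since these commands are lost on every link flap, they could be re-applied automatically from a networkd-dispatcher hook. The hook path and the idea itself are untested suggestions; the interface name and addresses are taken from this post. The script is written to /tmp here for inspection; the real location would be /etc/networkd-dispatcher/routable.d/.

```shell
# Hypothetical hook: regenerate the rule and routes whenever ens33 becomes
# routable again, so they survive link flaps. Install the result as
# /etc/networkd-dispatcher/routable.d/50-policy-routes (owned by root, chmod +x).
cat > /tmp/50-policy-routes <<'EOF'
#!/bin/sh
# networkd-dispatcher sets $IFACE; only act on the interface that carries
# 192.168.50.215.
[ "$IFACE" = "ens33" ] || exit 0
# "ip rule add" duplicates the rule on rerun, so check first.
ip rule list | grep -q 'from 192.168.50.215 lookup' || \
    ip rule add table 128 from 192.168.50.215
# "ip route replace" is idempotent, so reruns after every flap are safe.
ip route replace table 128 192.168.50.0/24 dev ens33
ip route replace table 128 default via 192.168.50.1
EOF
chmod +x /tmp/50-policy-routes
```

The grep guard matters because, unlike routes, policy rules have no "replace" form and would pile up on every invocation.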

Some information to take into account: this remote PC obtains its IP through DHCP, but it always receives 192.168.50.215, and the gateway is 192.168.50.1.

The problem is that sometimes, when the VPN is connected and there is some kind of network hiccup (for example, the link going down and up because of the service provider), the rule and route disappear and the machine becomes unreachable over both SSH and RDP. The only way to fix it is to enter those commands again on the console (TTY).

So I've been looking for a solution, and after a lot of searching I found something that should supposedly work: configuring netplan so it installs those lines whenever the interface goes up or down (including at boot).

I have followed this question: [Reproducing a set of ip commands in netplan][1], but it is somewhat old.

I have concocted a 00-network-config.yaml and added the following:

# This is the network config written by 'subiquity'
network:
  renderer: networkd
  ethernets:
    enp2s1:
      dhcp4: true
      routing-policy:
        - from:  192.168.50.215
          table:  128
      routes:
        - to: default
          via: 192.168.50.1
        - to: 192.168.50.0/24
          via: 192.168.50.1
          table: 128
          on-link: True
        - to: 0.0.0.0/0
          via: 192.168.50.1
          table: 128
          on-link: True
  version: 2

I have tried other variations without the first route (the default one that goes into the main routing table).
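One more variation worth noting (an untested assumption on my part): the manual command `ip route add table 128 to 192.168.50.0/24 dev ens33` creates a link-scoped device route with no gateway, whereas the YAML above produces `via 192.168.50.1 ... onlink` for that subnet. If your netplan version supports the `scope` key on routes, the link-scoped form can be expressed directly, which mirrors the manual commands more closely:

```yaml
      routes:
        - to: 192.168.50.0/24
          scope: link          # device route, no gateway -- mirrors "dev ens33"
          table: 128
        - to: default
          via: 192.168.50.1
          table: 128
```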

But when I make this live (using netplan apply, etc.), I run into the following problems:

  1. Ping from any PC in 192.168.50.0/24 to 192.168.50.215 (the remote PC) works.

  2. Ping from 192.168.50.215 to anything in 192.168.50.0/24 works too.

  3. SSH from 192.168.50.215 to anything works.

  4. SSH from the 192.168.50.0/24 network (for example, from 192.168.50.214) to 192.168.50.215 won't work, and I get this:

    kex_exchange_identification: read: Connection reset by peer Connection reset by 192.168.50.215 port 22

  5. Any RDP connection won't work either.

At this point I thought some kind of firewall might be blocking things, but UFW is inactive and iptables -L shows everything open, so it doesn't seem to be a filtering problem. I read somewhere that this could be a routing problem, but I don't know.

To get a better picture, I collected the following about the routing.

With plain DHCP (no netplan routes) and entering the ip rule/route commands manually, I get this:

rule show
ip rule show
0:  from all lookup local
32765:  from 192.168.50.215 lookup routing
32766:  from all lookup main
32767:  from all lookup default

route table routing - 128
ip route show table 128
default via 192.168.50.1 dev ens33 
192.168.50.0/24 dev ens33 scope link 

route table local
ip route show table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 
local 192.168.50.215 dev ens33 proto kernel scope host src 192.168.50.215 
broadcast 192.168.50.255 dev ens33 proto kernel scope link src 192.168.50.215 

route table main
ip route show table main
default via 192.168.50.1 dev ens33 proto dhcp src 192.168.50.215 metric 100 
192.168.50.0/24 dev ens33 proto kernel scope link src 192.168.50.215 metric 100 
192.168.50.1 dev ens33 proto dhcp scope link src 192.168.50.215 metric 100 

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:45:3b:9f brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.50.215/24 metric 100 brd 192.168.50.255 scope global dynamic ens33
       valid_lft 47843sec preferred_lft 47843sec
    inet6 fe80::20c:29ff:fe45:3b9f/64 scope link 
       valid_lft forever preferred_lft forever

With the netplan file shown above, I get:

rule show
ip rule show
0:  from all lookup local
32765:  from 192.168.50.215 lookup routing proto static
32766:  from all lookup main
32767:  from all lookup default

route table routing - 128
ip route show table 128
default via 192.168.50.1 dev ens33 proto static onlink 
192.168.50.0/24 via 192.168.50.1 dev ens33 proto static onlink 

route table local
ip route show table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 
local 192.168.50.215 dev ens33 proto kernel scope host src 192.168.50.215 
broadcast 192.168.50.255 dev ens33 proto kernel scope link src 192.168.50.215 

route table main
ip route show table main
default via 192.168.50.1 dev ens33 proto static onlink 
default via 192.168.50.1 dev ens33 proto dhcp src 192.168.50.215 metric 100 
192.168.50.0/24 dev ens33 proto kernel scope link src 192.168.50.215 metric 100 
192.168.50.1 dev ens33 proto dhcp scope link src 192.168.50.215 metric 100 

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:45:3b:9f brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.50.215/24 metric 100 brd 192.168.50.255 scope global dynamic ens33
       valid_lft 86109sec preferred_lft 86109sec
    inet6 fe80::20c:29ff:fe45:3b9f/64 scope link 
       valid_lft forever preferred_lft forever

So it's mostly the same, and I'm at my wits' end about this. If someone could lend me a hand, I'd appreciate it; I feel my netplan YAML should work, but alas it doesn't.

I haven't seen anything interesting in dmesg or journalctl, but if anyone knows what to do about this, let me know, because I'm about to give up on Ubuntu Server. PS: if someone mentions the on-link in the YAML, please read the link I posted, because without it the routes wouldn't register at all.

[1]: Reproducing a set of ip commands in netplan

Score:0

I'm not sure your Netplan config is broken. The behavior you describe would be expected, because SSH runs on TCP and therefore does error checking of lost packets. You might consider attacking this problem from another angle: make your remote shell connection more robust to outages.

For starters, SSH has some tricks. Adding the following might help by keeping your SSH connection's socket open longer.

# ~/.ssh/config

Host *

  ControlMaster auto
  ControlPath /tmp/%r@%h:%p
  ControlPersist 4h

  #-- If the network disappears your connection will hang,
  #-- but if it then reappears within 10 minutes it will resume working.
  
  TCPKeepAlive no
  ServerAliveInterval 60
  ServerAliveCountMax 10

You may also consider a remote terminal other than SSH. Mosh, for example, was designed specifically for this purpose.

Description: Mobile shell that supports roaming and intelligent local echo

Mosh is a remote terminal application that supports:

  • intermittent network connectivity,
  • roaming to different IP addresses without dropping the connection, and
  • intelligent local echo and line editing to reduce the effects of "network lag" on high-latency connections.

Homepage: https://mosh.org

Finally, using a mux system on the server side will persist your session even when you are disconnected. Any tasks running while your connection is interrupted will continue to run in the background. This is in contrast to a normal process run over SSH, which is killed when the connection drops. I'm a fan of byobu for this purpose, but your mileage may vary.

Description: text window manager, shell multiplexer, integrated DevOps environment

Byobu is Ubuntu's powerful text-based window manager, shell multiplexer, and integrated DevOps environment.

Using Byobu, you can quickly create and move between different windows over a single SSH connection or TTY terminal, split each of those windows into multiple panes, monitor dozens of important statistics about your system, and detach and reattach to sessions later while your programs continue to run in the background.

Homepage: http://byobu.org
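As a quick sketch (byobu wraps tmux, so it accepts tmux-style arguments; the session name here is made up, and the command is shown in dry-run form so you can inspect it first):

```shell
DRYRUN=echo  # remove this to actually run byobu on the server
# Attach to the session "office" if it exists, otherwise create it.
# If the SSH link drops, programs inside keep running; just reattach later.
$DRYRUN byobu new-session -A -s office
```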

TL;DR

Your OP doesn't say whether you are using SSH to tunnel the XRDP connection. If so, you might consider something like Zebedee instead. It is another tunneling protocol that I used in a similar way many years back to tunnel VNC. Its advantage over SSH is that it doesn't require a persistent connection, which could help in your situation. However, it requires compiling from source, so it's not the easiest solution.
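For completeness, if you do end up tunneling XRDP over SSH, the classic form is a local port forward. The user, host, and local port below are placeholders, and the command is shown in dry-run form:

```shell
DRYRUN=echo  # remove this to actually open the tunnel
# -N: no remote shell; -L: forward local port 13389 to the remote xrdp (3389).
$DRYRUN ssh -N -L 13389:localhost:3389 user@192.168.50.215
# Then point the RDP client at localhost:13389.
```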

