I've bumped into (as it now seems to me) a routing issue. I can no longer access one of my worker nodes (servers) from my master node (server). AFAIK, it has nothing to do with Kubernetes; it comes down to a pure Linux networking issue.
As the issue is with only one IP, I was troubleshooting iptables: I enabled TRACE and realised that the packet actually comes across to the master (eth0) and enters iptables (it passes raw > mangle > nat), but when it should move on from nat to filter, it just disappears.
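For reference, this is roughly how the tracing can be set up; a minimal sketch, assuming xx.xx.xx.96 stands in for the problem IP and legacy iptables with the nf_log_ipv4 logging backend:

# make TRACE output land in the kernel log
modprobe nf_log_ipv4
sysctl net.netfilter.nf_log.2=nf_log_ipv4
# trace only packets coming from the problem IP, in the raw table
# (i.e. before the routing decision is made)
iptables -t raw -A PREROUTING -s xx.xx.xx.96 -j TRACE
# watch the per-table/per-chain trace lines
tail -f /var/log/kern.log | grep 'TRACE:'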
As I understand it, that is the point at which the kernel has to make the routing decision. I checked routing and found it's broken for just that one IP (all others from the same IP segment work fine)!?
As I'm with a cloud provider and can't troubleshoot the underlying network, I tried reinstalling the OS (the same Ubuntu 20.04) on the master node (server). With a fresh OS install the issue was not present, so the configuration problem must be somewhere within my master Linux server (I then reverted the server image back from a snapshot).
root@vmi57XXXX:~# route
Kernel IP routing table
Destination     Gateway          Genmask          Flags Metric Ref    Use Iface
default         gw.provider.net  0.0.0.0          UG    0      0        0 eth0
10.244.0.0      0.0.0.0          255.255.255.0    U     0      0        0 cni0
10.244.1.0      10.244.1.0       255.255.255.0    UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0          255.255.0.0      U     0      0        0 docker0
root@vmi57XXXX:~# ip route get xx.xx.xx.96
local xx.xx.xx.96 dev lo src xx.xx.xx.96 uid 0
cache <local>
root@vmi57XXXX:~# ip route get xx.xx.xx.95
xx.xx.xx.95 via xx.xx.xx.1 dev eth0 src xx.xx.xx.95 uid 0
cache
root@vmi57XXXX:~# ip route get xx.xx.xx.97
xx.xx.xx.97 via xx.xx.xx.1 dev eth0 src xx.xx.xx.97 uid 0
cache
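The .96 lookup resolving to "local ... dev lo" means the kernel believes that address belongs to this host, while .95 and .97 are routed out eth0 as expected. A sketch of how to confirm where that local route comes from (again assuming xx.xx.xx.96 is the unreachable worker IP):

# list any entry for the IP in the kernel's local routing table
ip route show table local | grep 'xx.xx.xx.96'
# check whether the address is actually assigned to some local interface
# (for example a stray dummy device left behind by kube-proxy in ipvs mode)
ip -o addr show | grep 'xx.xx.xx.96'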
root@vmi57XXXX:~# arp -v
Address          HWtype  HWaddress           Flags Mask  Iface
10.244.0.60      ether   8a:94:de:43:b6:0f   C           cni0
10.244.0.63      ether   1e:76:6a:60:27:f3   C           cni0
10.244.0.62      ether   36:0b:19:5e:57:87   C           cni0
gw.provider.net  ether   00:c0:1d:c0:ff:ee   C           eth0
10.244.0.64      ether   82:03:61:c5:4d:fb   C           cni0
10.244.0.50              (incomplete)                    cni0
10.244.1.0       ether   52:3d:a5:f4:c2:2c   CM          flannel.1
10.244.0.61      ether   56:19:98:79:a1:3a   C           cni0
Entries: 8 Skipped: 0 Found: 8
root@vmi57XXXX:~# ip netconf show dev eth0
inet eth0 forwarding on rp_filter off mc_forwarding off proxy_neigh off
ignore_routes_with_linkdown off
inet6 eth0 forwarding off mc_forwarding off proxy_neigh off
ignore_routes_with_linkdown off
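rp_filter shows as off on eth0, but the effective value is the maximum of the per-interface and the "all" settings, so it may be worth double-checking the global knob as well:

# effective rp_filter = max(conf.all.rp_filter, conf.eth0.rp_filter)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter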
Any clues about what's going on over there are more than welcome!!!
Thanks
EDIT: After solving the issue, it's worth mentioning that this behaviour was experienced with Kubernetes 1.21.2-00 and flannel as the CNI.
I did the upgrade a few weeks ago, and this was the first restart of a worker node after the upgrade.