I am trying to set up a simple Kubernetes cluster on our own infrastructure. We've decided to use k3s, and it seems that kube-vip could fit our needs as the control-plane and services load balancer. I am starting with 5 VMs: 3 for the control plane and 2 workers.
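For context, the intended bootstrap looks roughly like this. This is only a sketch, assuming k3s with embedded etcd; the VIP address (1.2.3.2) and token are placeholders, not values from a working setup:

```shell
# First control-plane node: initialize embedded etcd and advertise the
# planned kube-vip VIP in the API server certificate (--tls-san).
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server \
  --cluster-init \
  --tls-san 1.2.3.2

# Remaining two control-plane nodes join via the VIP (or the first node's IP
# until the VIP is up):
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server \
  --server https://1.2.3.2:6443 \
  --tls-san 1.2.3.2

# Workers join as agents:
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - agent \
  --server https://1.2.3.2:6443
```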
The issue I am encountering with kube-vip is that all the VMs have only private IPs (172.16.30.x/24 with gateway 172.16.30.1), while I need kube-vip to assign public IPs (e.g. 1.2.3.x/25 with gateway 1.2.3.1). I have only a limited number (about 3) of public IPs that I can use for this project.
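For reference, restricting kube-vip to that small public pool is straightforward on its own. A minimal sketch, assuming the kube-vip cloud provider is installed and the usable addresses are 1.2.3.2–1.2.3.4 (placeholders):

```shell
# The kube-vip cloud provider reads its service IP pool from a ConfigMap
# named "kubevip" in kube-system; range-global defines the global pool.
kubectl create configmap kubevip -n kube-system \
  --from-literal=range-global=1.2.3.2-1.2.3.4
```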
As I understand it, this would be no problem if every cluster node had its "normal" IP from the same range as the VIPs I am trying to assign. However, that would be a waste of addresses and possibly a security concern.
I've managed to configure kube-vip to set up the public IP on the correct interface; however, I am having a problem with the default gateway. In the default state (without a public IP assigned) the VM has the default gateway 172.16.30.1. When the public IP gets assigned, response packets sourced from it are routed to this gateway instead of 1.2.3.1. I can solve this by manually setting up source-based policy routing, but that does not work for exposed services, because their traffic is forwarded from a different cluster node and NATed. I would need to manually set up connection tracking and route marking... and I feel that I would be "fighting" k3s and flannel with this approach. I would also need to build some sort of automation around it.
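To make the workaround concrete, this is roughly what the manual policy routing looks like. A sketch only; the interface name (eth1), routing table number (100), and firewall mark (1) are assumptions:

```shell
# Separate routing table for traffic sourced from the public range,
# with its own default gateway:
ip route add default via 1.2.3.1 dev eth1 table 100
ip rule add from 1.2.3.0/25 table 100

# For NATed service traffic forwarded from another node, the source address
# check is not enough; connections entering via the public interface would
# additionally have to be marked and routed by mark:
iptables -t mangle -A PREROUTING -i eth1 -j CONNMARK --set-mark 1
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
ip rule add fwmark 1 table 100
```

This illustrates the complexity I would rather avoid: it has to be kept in sync with whatever VIPs kube-vip currently holds on each node.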
I am trying to keep the setup as simple as possible and to avoid SPOFs as much as possible, but I am not able to find any simple solution. Is my use case really that specific?
One idea would be to use VIPs from the 172.16.30.x range and do some sort of NAT on the 1.2.3.x router; however, I would like to avoid messing with the main router, as it is routing far more important projects than this small experimental cluster :-)
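For completeness, the NAT idea would amount to something like the following on the router, if it runs Linux. Purely a sketch; the addresses (1.2.3.2, 172.16.30.200) and the outbound interface (eth0) are placeholders:

```shell
# Map one public IP to an internal VIP held by kube-vip:
iptables -t nat -A PREROUTING  -d 1.2.3.2 -j DNAT --to-destination 172.16.30.200
# Rewrite the source on the way out so replies carry the public IP:
iptables -t nat -A POSTROUTING -s 172.16.30.200 -o eth0 -j SNAT --to-source 1.2.3.2
```

The upside is that the cluster nodes then only ever deal with 172.16.30.x addresses; the downside, as noted, is touching the production router.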