I have an AKS cluster and one of the pods, call it "my-service", needs to connect to an on-premises service via VPN, and that service requires whitelisting of source IPs. Unfortunately, it can only whitelist individual addresses, not ranges. The connectivity between AKS (Azure) and the on-prem service over the VPN is up and verified working: the AKS pod can communicate with the on-prem service.
The difficulty we are running into is that the on-prem service sees the node's IP address as the source IP instead of the private IP of the load balancer we created.
I created an internal load balancer for my pod to handle the egress, but the on-prem service continues to report the node IP instead of the LB IP.
We are running kubenet for networking, and per the MSFT docs, seeing the node IP is the expected behavior (with kubenet, pod traffic leaving the cluster is NAT'd to the node's IP). Since we can't whitelist the entire AKS subnet, and we pay licensing per whitelisted IP address, we need a 'static' source IP. Whitelisting the node IP won't work because we can't guarantee which node the pod will land on, and we don't want to pin it to a single node. A 'static' internal IP is what we were trying to achieve with the load balancer.
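For what it's worth, this is roughly how we confirm what the on-prem side sees as the source address. It's a minimal sketch; the echo endpoint below is a placeholder for any service on the on-prem network that reports the caller's IP:

kubectl run srcip-test -it --rm --restart=Never --image=curlimages/curl -- \
  curl -s http://onprem-echo.example.internal:8080/whoami
# under kubenet this reports the AKS node's IP, not the pod IP or the LB IP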
Is there a way to set up AKS, or my VNet / VPN / on-prem networking, so that the on-premises service sees the load balancer (or some other 'thing') as the source IP?
I've followed quite a bit of the MSFT documentation on this subject here:
Internal LB
AKS Egress
For reference, here's the Load Balancer manifest. Pretty vanilla.
apiVersion: v1
kind: Service
metadata:
  name: my-service-egress
  namespace: my-internal-service-namespace
  labels:
    name: my-service-egress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'
spec:
  type: LoadBalancer
  selector:
    name: my-service
  ports:
    - protocol: TCP
      port: 8999
      targetPort: 8999
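In case it matters, we apply it and read back the private IP Azure assigns like this (the file name is just whatever we saved the manifest as):

kubectl apply -f my-service-egress.yaml
kubectl get svc my-service-egress -n my-internal-service-namespace
# the EXTERNAL-IP column shows the internal LB's private IP once it's provisioned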