I have set up my cluster with tls-san configured with my kube-vip VIP (..*.59) and have deployed the DaemonSet. The DaemonSet reports no errors and the kube-vip pod is running.
If I ping the VIP I get a response, but if I try to access the cluster through the kube-vip IP the connection just times out; curl -k https://*.*.*.59:6443 also times out.
If I change the IP address in my kubeconfig to the IP address of the VM running k3s (..*.60), kubectl commands work again.
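A few quick checks can narrow down whether the timeout is an ARP problem or the proxy intercepting the request. A sketch (substitute the real VIP for the redacted *.*.*.59; the interface name ens192 is taken from the DaemonSet env below):

```shell
# Is the VIP actually bound on the leader node?
ip addr show dev ens192 | grep '\.59'

# Does the client resolve the VIP to the expected MAC (ARP)?
ip neigh show | grep '\.59'

# Bypass the corporate proxy explicitly, in case no_proxy does not cover the VIP:
curl -vk --noproxy '*' https://*.*.*.59:6443
```

If the curl with --noproxy succeeds while the plain curl times out, the proxy configuration rather than kube-vip is the problem.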
To Reproduce
The command I used to set up k3s:
curl -sfL https://get.k3s.io | K3S_TOKEN=123abc INSTALL_K3S_EXEC="server --cluster-init" sh -s - server --tls-san *.*.*.59 --disable servicelb --disable-cloud-controller
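One thing worth verifying after that install command is that the VIP from --tls-san actually made it into the apiserver's serving certificate. A sketch, run from the client against the node IP (addresses redacted as in this report):

```shell
# Print the SANs of the certificate served on port 6443.
# The VIP should appear alongside the node IP; if it is missing,
# the --tls-san flag did not take effect.
echo | openssl s_client -connect *.*.*.60:6443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```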
The DaemonSet is running:
[root@sdeas0072v deployments]# kubectl get ds -A
NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   kube-vip-ds   1         1         1       1            1           <none>          11m
The pod is running:
kube-system kube-vip-ds-vc7pz 1/1 Running 0 13m
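Beyond the pod showing Running, the kube-vip logs show whether it actually won leader election and claimed the VIP; a sketch:

```shell
# Follow the kube-vip DaemonSet logs; look for lines about leader election
# and the VIP address being assigned to the interface.
kubectl -n kube-system logs ds/kube-vip-ds --tail=50
```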
Expected behavior
I should be able to change the server address in ~/.kube/config
to my VIP address and successfully run kubectl get pods -A
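The kubeconfig change can be made with kubectl itself rather than hand-editing the file. A sketch, assuming the cluster entry is named `default` (the k3s default; adjust if yours differs, and substitute the real VIP):

```shell
# Point the existing cluster entry at the VIP, then retry.
kubectl config set-cluster default --server=https://*.*.*.59:6443
kubectl get pods -A
```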
Environment (please complete the following information):
- OS/Distro: RedHat 8
- Kubernetes Version (kubectl cluster-info output):
Kubernetes control plane is running at https://*.*.*.60:6443
CoreDNS is running at https://*.*.*.60:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://*.*.*.60:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
Kube-vip.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: kube-vip-ds
    app.kubernetes.io/version: v0.5.5
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: kube-vip-ds
        app.kubernetes.io/version: v0.5.5
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "6443"
        - name: vip_interface
          value: ens192
        - name: vip_cidr
          value: "32"
        - name: cp_enable
          value: "true"
        - name: cp_namespace
          value: kube-system
        - name: vip_ddns
          value: "false"
        - name: svc_enable
          value: "true"
        - name: vip_leaderelection
          value: "true"
        - name: vip_leaseduration
          value: "5"
        - name: vip_renewdeadline
          value: "3"
        - name: vip_retryperiod
          value: "1"
        - name: address
          value: *.*.*.59
        - name: prometheus_server
          value: :2112
        image: ghcr.io/kube-vip/kube-vip:v0.5.5
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
Additional context
I am behind a corporate proxy server. http_proxy, https_proxy, and no_proxy are set on my kubectl machine and in the systemd service for k3s.
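Given the proxy, one plausible failure mode is that no_proxy covers the node IP but not the VIP: clients generally match no_proxy entries by exact host or domain suffix, and bare IPs often only match exactly (CIDR support varies by client). A minimal sketch of that exact-match behaviour, with a hypothetical helper and illustrative IPs:

```shell
#!/bin/sh
# Hypothetical helper: report whether a host appears verbatim in a
# comma-separated no_proxy-style list (exact match only; no CIDR or
# suffix logic, which is roughly how some clients treat bare IPs).
no_proxy_covers() {
  host=$1; list=$2
  case ",$list," in
    *",$host,"*) echo "covered" ;;
    *)           echo "not covered" ;;
  esac
}

# The node IP is listed, the VIP is not:
no_proxy_covers 10.0.0.60 "localhost,127.0.0.1,10.0.0.60"   # → covered
no_proxy_covers 10.0.0.59 "localhost,127.0.0.1,10.0.0.60"   # → not covered
```

If that is the situation here, adding the VIP itself to no_proxy (on both the kubectl machine and the k3s systemd unit) would be worth trying before digging further into kube-vip.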