I'm running a simple bare-metal multi-master "high availability" environment with 2 masters and 2 workers, as well as another VM with HAProxy serving as an external load balancer.
My question is: is it possible to access the services (dashboard, nginx, mysql (especially mysql), etc.) from outside the cluster, exposing them to the network with this setup I'm running?
I've tried using MetalLB in this environment to expose the services as LoadBalancer, but it didn't seem to work, and since I'm fairly new to Kubernetes, I couldn't figure out why.
Edit: Got it working now, following @c4f4t0r's suggestion. Instead of an external HAProxy load balancer, that same VM became a third master node; each master now runs its own internal instance of HAProxy and Keepalived, and the VM that used to be the external LB is now the endpoint the other masters use to join the cluster. MetalLB runs inside the cluster, with the nginx ingress controller routing requests to the requested service.
>>> Below are the steps I followed to create the environment, as well as all the configuration used in the setup.
Set up a Highly Available Kubernetes Cluster using kubeadm
Follow this documentation to set up a highly available Kubernetes cluster using Ubuntu 20.04 LTS.
This documentation guides you in setting up a cluster with two master nodes, two worker nodes and a load balancer node using HAProxy.
Bare-metal Environment
| Role | FQDN | IP | OS | RAM | CPU |
|---|---|---|---|---|---|
| Load Balancer | loadbalancer.example.com | 192.168.44.100 | Ubuntu 21.04 | 1G | 1 |
| Master | kmaster1.example.com | 10.84.44.51 | Ubuntu 21.04 | 2G | 2 |
| Master | kmaster2.example.com | 192.168.44.50 | Ubuntu 21.04 | 2G | 2 |
| Worker | kworker1.example.com | 10.84.44.50 | Ubuntu 21.04 | 2G | 2 |
| Worker | kworker2.example.com | 192.168.44.51 | Ubuntu 21.04 | 2G | 2 |
- Password for the root account on all these virtual machines is kubeadmin
- Perform all the commands as root user unless otherwise specified
Pre-requisites
If you want to try this in a virtualized environment on your workstation
- Virtualbox installed
- Host machine has at least 8 cores
- Host machine has at least 8G memory
Set up load balancer node
Install Haproxy
apt update && apt install -y haproxy
Configure haproxy
Append the below lines to /etc/haproxy/haproxy.cfg
frontend kubernetes-frontend
    bind 192.168.44.100:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 10.84.44.51:6443 check fall 3 rise 2
    server kmaster2 192.168.44.50:6443 check fall 3 rise 2
Restart haproxy service
systemctl restart haproxy
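As a quick sanity check of the load balancer (an operational fragment, not part of the original walkthrough; it assumes `nc` is available, and the backends will report DOWN until the first master is initialized):

```shell
# Confirm haproxy is up and listening on the frontend address;
# a TCP connect to 6443 should succeed even before any master exists.
systemctl status haproxy --no-pager
nc -zv 192.168.44.100 6443
```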
On all kubernetes nodes (kmaster1, kmaster2, kworker1, kworker2)
Disable Firewall
ufw disable
Disable swap
swapoff -a; sed -i '/swap/d' /etc/fstab
Update sysctl settings for Kubernetes networking
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
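One detail the step above assumes: the `net.bridge.*` sysctls only exist once the `br_netfilter` kernel module is loaded, and kubeadm's preflight checks will complain if it isn't. A sketch of persisting and loading it (file name is my choice, not from the original steps):

```shell
# Persist the br_netfilter module across reboots so the bridge
# sysctls above keep applying:
cat >/etc/modules-load.d/kubernetes.conf<<EOF
br_netfilter
EOF
# Load it now (ignore the error if it is built into the kernel),
# then re-apply the sysctl settings:
modprobe br_netfilter || true
sysctl --system
```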
Install docker engine
{
apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install -y docker-ce containerd.io
}
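The kubeadm documentation recommends running Docker with the systemd cgroup driver so kubelet and Docker agree on a single cgroup manager. A sketch of that configuration (skip it if you already manage `/etc/docker/daemon.json`):

```shell
# Configure Docker to use the systemd cgroup driver, as recommended
# by the kubeadm docs for kubelet compatibility:
mkdir -p /etc/docker
cat >/etc/docker/daemon.json<<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "storage-driver": "overlay2"
}
EOF
# Apply the change:
# systemctl restart docker
```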
Kubernetes Setup
Add Apt repository
{
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
}
Install Kubernetes components
apt update && apt install -y kubeadm=1.19.2-00 kubelet=1.19.2-00 kubectl=1.19.2-00
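Since specific versions are pinned here, it's worth holding the packages so a routine `apt upgrade` can't move the node components out of step with the cluster (a standard kubeadm recommendation, shown as an operational fragment):

```shell
# Prevent unattended upgrades of the Kubernetes node components:
apt-mark hold kubeadm kubelet kubectl
```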
On any one of the Kubernetes master nodes (e.g. kmaster1)
Initialize Kubernetes Cluster
kubeadm init --control-plane-endpoint="192.168.44.100:6443" --upload-certs
Copy the commands to join other master nodes and worker nodes.
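The join commands printed by kubeadm init have the following shape; the token, hash and certificate key below are placeholders, not real values, so always copy the exact commands from your own `kubeadm init` output:

```shell
# Control-plane join (run on the other master):
kubeadm join 192.168.44.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

# Worker join (run on the worker nodes):
kubeadm join 192.168.44.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```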
Deploy Calico network (I'm using Weave instead of Calico)
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
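Since I'm using Weave rather than Calico, the equivalent step is applying the Weave Net manifest instead. This is the install command from Weave's documentation at the time; check the current docs before relying on it:

```shell
# Apply the Weave Net manifest matched to the running Kubernetes version:
kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f \
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```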
Join other nodes to the cluster (kmaster2, kworker1 & kworker2)
Use the respective kubeadm join commands you copied from the output of the kubeadm init command on the first master.
IMPORTANT: You also need to pass --apiserver-advertise-address to the join command when you join the other master node.
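Finally, since the original question was about exposing services through MetalLB: here is a minimal Layer 2 configuration sketch for the ConfigMap-based MetalLB releases that were current with Kubernetes 1.19. The address range is an assumption of mine; pick unused addresses on your own load-balancer subnet:

```shell
# Write a MetalLB Layer 2 ConfigMap; the 192.168.44.200-210 pool is
# an assumption -- substitute free addresses on your network:
cat >metallb-config.yaml<<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.44.200-192.168.44.210
EOF
# Apply it after installing MetalLB itself:
# kubectl apply -f metallb-config.yaml
```

With this in place, a Service of type LoadBalancer should be assigned an external IP from the pool, which the nginx ingress controller (or mysql, the dashboard, etc.) can then be reached on from the rest of the network.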