How to create k8s cluster with multiple master nodes?

2 Replies

Typically, the motivation for creating a multi-master Kubernetes cluster is to create a cluster with high availability. This cannot currently be done via the Linode Kubernetes Engine, but it is certainly possible to set it up via other means.

You'll want to run the kube-up script with some additional flags set in order to create a high-availability multi-master cluster. This resource from kubernetes.io is especially helpful for setting this up, as it includes the flags you'll need as well as potential risks to avoid.

Linode Staff

You can also achieve this using a few Linodes, a NodeBalancer, and the kubeadm tool to set everything up.

I started off with 4 Linodes labeled master-1, master-2, worker-1, and worker-2. You'll also want to deploy a NodeBalancer that accepts TCP connections on port 6443 and forwards that traffic to the two master nodes on the same port.
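
If you prefer the command line, the NodeBalancer can also be set up with the linode-cli tool. Treat the sketch below as illustrative rather than exact: the region, labels, IDs, and private IPs are placeholders, and you'll want to check the flags against your CLI version (the Cloud Manager UI accomplishes the same thing):

# Create the NodeBalancer (region and label are placeholders)
linode-cli nodebalancers create --region us-east --label k8s-api

# Add a TCP configuration on port 6443 (use the NodeBalancer ID returned above)
linode-cli nodebalancers config-create $NODEBALANCER_ID --port 6443 --protocol tcp

# Attach each master node by its private IP (use the config ID returned above)
linode-cli nodebalancers node-create $NODEBALANCER_ID $CONFIG_ID --label master-1 --address "$MASTER_1_PRIVATE_IP:6443"
linode-cli nodebalancers node-create $NODEBALANCER_ID $CONFIG_ID --label master-2 --address "$MASTER_2_PRIVATE_IP:6443"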

Prior to setting up the cluster, there are a few things that you'll want to do on each node.

You'll want to load the br_netfilter kernel module and run the following commands to ensure that iptables can see bridged traffic:

All Nodes:

sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
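
To confirm that the module is loaded and the sysctls took effect, you can run a quick check; the first command should list br_netfilter and both sysctl values should come back as 1:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables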

Once that's complete, you should take a moment to review the firewall rules on each Linode to ensure that none of the ports used by the control-plane and worker nodes are being blocked.
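
For reference, these are the ports the kubeadm documentation listed for this era of Kubernetes. If you happen to be using ufw, opening them would look something like the sketch below; adjust to whatever firewall tooling you actually use:

# Control-plane nodes
sudo ufw allow 6443/tcp           # Kubernetes API server
sudo ufw allow 2379:2380/tcp      # etcd server client API
sudo ufw allow 10250/tcp          # kubelet API
sudo ufw allow 10251/tcp          # kube-scheduler
sudo ufw allow 10252/tcp          # kube-controller-manager

# Worker nodes
sudo ufw allow 10250/tcp          # kubelet API
sudo ufw allow 30000:32767/tcp    # NodePort services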

If you haven't already, you'll want to assign a unique hostname to each Linode and disable the swap disk on all of them.

All Nodes:

sudo hostnamectl set-hostname $HOSTNAME   # substitute each node's name, e.g. master-1
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
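
You can verify both changes took effect before moving on; hostnamectl should show the name you just set, and swapon should print nothing:

hostnamectl
swapon --show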

Next, you'll need to install a container runtime on each of the nodes in your cluster. You'll want to refer to the documentation for the runtime you choose for specific installation instructions. For my nodes running Debian 10, I just went with Docker.
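
For reference, installing Docker on Debian 10 looked roughly like the following at the time, following Docker's own Debian instructions; treat this as a sketch and defer to the current install docs for your distribution:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io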

After installing the container runtime, I went through and installed kubeadm, kubectl, and kubelet. If you're deploying a specific version of Kubernetes, you'll want to install the matching versions of kubeadm, kubectl, and kubelet (there's an example of pinning a version after the commands below).

All Nodes:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install kubeadm kubectl kubelet
sudo apt-mark hold kubeadm kubectl kubelet
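
If you do need a particular release, you can pin it at install time. For example, to match the v1.20.5 shown in the kubectl get nodes output further down (the exact package revision suffix may differ):

sudo apt install kubeadm=1.20.5-00 kubelet=1.20.5-00 kubectl=1.20.5-00
sudo apt-mark hold kubeadm kubectl kubelet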

Once the packages finish installing, you can initialize your Kubernetes cluster by running the kubeadm init command on one of the master nodes:

master-1:

kubeadm init --upload-certs --control-plane-endpoint "$IP_ADDRESS_OF_NODEBALANCER:6443" --apiserver-advertise-address "$PRIVATE_IP_ADDRESS_OF_MASTER_1" --pod-network-cidr=<SEE BELOW>

You'll want to keep in mind the following information from the Kubernetes documentation when choosing your pod network CIDR:

Take care that your Pod network must not overlap with any of the host networks: you are likely to see problems if there is any overlap. (If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during kubeadm init with --pod-network-cidr and as a replacement in your network plugin's YAML)
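
As a concrete illustration: Linode private IPs are assigned from 192.168.128.0/17, so Calico's default 192.168.0.0/16 pod network would overlap with them, and picking something like 10.244.0.0/16 avoids that. With made-up addresses filled in, the init command would look roughly like this:

# 203.0.113.10 stands in for the NodeBalancer's public IP and
# 192.168.139.75 for master-1's private IP -- substitute your own values
kubeadm init --upload-certs \
  --control-plane-endpoint "203.0.113.10:6443" \
  --apiserver-advertise-address "192.168.139.75" \
  --pod-network-cidr "10.244.0.0/16"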

Once the initialization process completes, there should be two kubeadm join commands printed to the screen that resemble the following:

FOR CONTROL PLANE NODES:

kubeadm join $NODEBALANCER_IP:6443 --token $TOKEN --discovery-token-ca-cert-hash $HASH --control-plane --certificate-key $CERT_KEY

EDIT: You'll need to add the --apiserver-advertise-address $MASTER_NODE_PRIVATE_IP_ADDRESS flag to the end of that command in order for the other master nodes to properly join the cluster.

FOR WORKER NODES:

kubeadm join $NODEBALANCER_IP:6443 --token $TOKEN --discovery-token-ca-cert-hash $HASH
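
If you misplace these commands or the token expires (bootstrap tokens are only valid for 24 hours by default), you can generate fresh values later from master-1:

# Prints a new worker join command with a fresh token
sudo kubeadm token create --print-join-command

# Re-uploads the control-plane certificates and prints a new certificate key
# to pass to the control-plane join command via --certificate-key
sudo kubeadm init phase upload-certs --upload-certs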

You’ll want to set these commands aside to run after installing the pod network for your cluster. For now, you can run the following set of commands to grab the KUBECONFIG file for your cluster and place it in the home directory of the user that you're logged in as.

master-1:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
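
At this point kubectl should respond, although the node will report NotReady until a pod network is installed. The output should look roughly like this (illustrative):

kubectl get nodes
NAME       STATUS     ROLES                  AGE   VERSION
master-1   NotReady   control-plane,master   2m    v1.20.5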

Now that kubectl is configured to interact with the cluster, you can deploy a pod network to the cluster. You can get a list of pod network addons that can be used for your cluster from this page. For the steps to deploy the pod network, you'll want to refer to that addon's documentation.

I used Calico for my cluster and so I just ran the commands provided on their site.

master-1:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

NOTE: That second command will deploy a pod network with a CIDR range of 192.168.0.0/16 by default. If this was not the CIDR you provided as the --pod-network-cidr argument when running kubeadm init, you'll want to download the YAML file instead and modify it with a text editor to specify the correct CIDR (a quick sketch of this follows).
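
For example, if you initialized the cluster with 10.244.0.0/16, one way to adjust the manifest before applying it would be the following; the sed line is just a quick stand-in for editing the cidr: field by hand:

curl -O https://docs.projectcalico.org/manifests/custom-resources.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml
kubectl create -f custom-resources.yaml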

Once the pod network is up, you can join the remaining master and worker nodes to the cluster using the respective kubeadm join commands from before.

master-2:

kubeadm join $NODEBALANCER_IP:6443 --token $TOKEN --discovery-token-ca-cert-hash $HASH --control-plane --certificate-key $CERT_KEY --apiserver-advertise-address $MASTER_2_PRIVATE_IP

worker-1 and worker-2:

kubeadm join $NODEBALANCER_IP:6443 --token $TOKEN --discovery-token-ca-cert-hash $HASH

Once the remaining nodes successfully join the cluster, you should be able to run the kubectl get nodes command to see them all:

$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master-1   Ready    control-plane,master   27h   v1.20.5
master-2   Ready    control-plane,master   27h   v1.20.5
worker-1   Ready    <none>                 27h   v1.20.5
worker-2   Ready    <none>                 27h   v1.20.5

All that's left to do is transfer the kubectl config file from one of the master nodes to whichever device you'll be managing your cluster from. This can be done using SCP or a file transfer client.
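
With SCP, for example, that might look like this from your local machine (the username and address are placeholders):

mkdir -p ~/.kube
scp user@$MASTER_1_PUBLIC_IP:~/.kube/config ~/.kube/config
kubectl get nodes   # confirm the cluster responds from your workstation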
