Multi-Cloud Kubernetes Cluster

Vineet Negi
Apr 25, 2021


In this article, I’ll guide you step by step through building a Kubernetes cluster whose worker nodes run on different clouds, or, to be more precise, on different networks. I have used AWS and Azure to set up this cluster.

Let’s Start…

Configure Master Node

I’ll be using an AWS instance for this setup, launched from an AWS AMI (image) as the base for my instance (virtual machine) in the cloud.

1. Install Docker, then start and enable the Docker service

yum install docker -y
systemctl start docker
systemctl enable docker
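
To make sure Docker came up cleanly, you can optionally check the service status and version:

systemctl status docker
docker --version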

2. Adding the yum repository for Kubernetes

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
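
If you want to confirm the repo was added correctly, list the enabled repositories and look for the Kubernetes entry:

yum repolist enabled | grep -i kubernetes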

3. Install kubelet, kubeadm, and kubectl, then start and enable the kubelet service

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet --now
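
Optionally verify the installed versions before moving on:

kubeadm version
kubelet --version
kubectl version --client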

4. The master node (control plane) requires several components that run as pods. Instead of searching for and pulling their images one by one, kubeadm provides a command that pulls all the required images directly.

kubeadm config images pull
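
To confirm the pull worked, you can list the downloaded images; assuming the default registry for this Kubernetes release, they show up under k8s.gcr.io:

docker images | grep k8s.gcr.io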

5. The kubelet expects the container engine to use systemd as its cgroup driver, whereas Docker by default (meaning we can change it) comes up with cgroupfs as its cgroup driver. Change the driver, then restart the Docker service.

cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

docker info | grep -i cgroup

Use this command to check that the cgroup driver has changed from cgroupfs to systemd.

6. Install the iproute-tc package, which Kubernetes uses to manage traffic, and set the sysctl parameters needed for bridged traffic

yum install -y iproute-tc

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
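
To confirm the bridge settings took effect, read them back:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables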

7. Using kubeadm to initialize your master (controller) node

kubeadm init --control-plane-endpoint "PUBLIC_IP:PORT" --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Note the --control-plane-endpoint part of the command: it is what lets nodes outside this network connect to the cluster, using the master's public IP and port 6443.
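
Because the worker nodes sit outside the master's network, the master's security group must also allow inbound traffic on the API server port (6443/tcp) and on Flannel's VXLAN port (8472/udp). A rough sketch with the AWS CLI, using a placeholder security-group ID:

# sg-0123456789abcdef0 is a placeholder; use your instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 6443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 8472 --cidr 0.0.0.0/0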

8. To run kubectl commands as a regular user, copy the admin kubeconfig into place

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
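
At this point kubectl should be able to reach the cluster; a quick sanity check:

kubectl cluster-info
kubectl get nodes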

9. Setting up the overlay network using Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
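
Give Flannel a minute to roll out, then check the Flannel pods (this manifest deploys them into kube-system) and confirm the master node eventually reports Ready:

kubectl get pods -n kube-system
kubectl get nodes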

10. Copy the join command (with its token) that kubeadm init prints at the end. If you lose it, regenerate it by running the following command

kubeadm token create --print-join-command

WE’LL BE USING THIS JOIN COMMAND TO CONNECT THE WORKER NODES TO THE CONTROLLER (MASTER) NODE
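
The join command printed by kubeadm looks roughly like this; the token and hash below are placeholders, not real values:

kubeadm join PUBLIC_IP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>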

Worker Nodes Setup

1. The worker nodes need the same base setup we did above. FOLLOW STEPS 1 TO 6 ABOVE ON EACH WORKER NODE.

2. Once that setup is done, copy the join command you got in STEP 10 and run it on all the worker nodes.

3. Run this command on the master (controller) node to confirm the workers have joined

kubectl get nodes

The output lists the master node together with the worker node on AWS and the worker node on Azure (myos1).
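
As an optional final check, you can deploy something small and watch the pods get scheduled onto the remote workers (the deployment name here is just an example):

kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -o wide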

COMPLETED….
