Press "Enter" to skip to content

Create a Kubernetes Multi-Node Cluster with Kind

Did you know you can create a Kubernetes multi-node cluster with Kind without much bother?

Clustering goes a long way toward reinforcing technical knowledge through hands-on application, and any serious engineer should build full-scale labs covering well-known architectures. Learning Kubernetes works best when you can build a replica of a production-grade cluster. Minikube has always been helpful, but a single-node setup never quite captures a real-world architecture. This is where Kind comes in: it gives you that hands-on experience without dedicating the hardware and resources you would need to build a Kubernetes cluster out of virtual machines.

Install Prerequisites

sudo apt-get install -y curl

Install Docker


sudo apt-get update

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Give your user permission to query Docker

sudo usermod -aG docker $USER

Log out and back in (or restart the host) for the new group membership to take effect.
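
To confirm the group change took effect after logging back in, a quick smoke test without sudo should succeed, for example:

docker version
docker run --rm hello-world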

Install Kind on Linux

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
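
If the binary is on your PATH, Kind should now report its version:

kind version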

Install Kubectl

We’ll need kubectl to work with the Kubernetes cluster, in case it’s not already installed. For this, we can use the commands below:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Set Permissions

chmod +x ./kubectl

Move kubectl to /usr/local/bin

sudo mv ./kubectl /usr/local/bin/kubectl
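
kubectl should likewise respond from its new location:

kubectl version --client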

Create a Multi-Node Cluster – 1 Control-Plane Node and 2 Worker Nodes

Create Cluster Manifest File – cluster-config.yaml


# A sample multi-node cluster config file
# A three node (two workers, one controller) cluster config
# To add more worker nodes, add another role: worker to the list
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker


root@cluster-vm:/home/cluster# kind create cluster --name=azvmms-node --config=single-cluster.yaml
Creating cluster "azvmms-node" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-azvmms-node"
You can now use your cluster with:

kubectl cluster-info --context kind-azvmms-node

Creates the Control Plane

- role: control-plane

Creates the 2 Worker Nodes

- role: worker
- role: worker

Create Cluster

kind create cluster --config=cluster-config.yaml
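
Once creation finishes, a quick sanity check is to list the nodes and confirm all three reach the Ready state:

kubectl get nodes -o wide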

Check Pods

$ kubectl get pods -A -o wide
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE   IP           NODE                    NOMINATED NODE   READINESS GATES
kube-system          coredns-558bd4d5db-2gszr                        1/1     Running   0          91m   10.244.0.3   spacers-control-plane              
kube-system          coredns-558bd4d5db-46rkp                        1/1     Running   0          91m   10.244.0.2   spacers-control-plane              
kube-system          etcd-spacers-control-plane                      1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-9jmwv                                   1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kindnet-c2jrx                                   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-hlhmx                                   1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-apiserver-spacers-control-plane            1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kube-controller-manager-spacers-control-plane   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-97q94                                1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-proxy-t4ltb                                1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-xrd5l                                1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kube-scheduler-spacers-control-plane            1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
local-path-storage   local-path-provisioner-547f784dff-5dgp6         1/1     Running   0          91m   10.244.0.4   spacers-control-plane              
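
Since each Kind node is just a Docker container on the host, the three nodes (and the 80/443 host port mappings from extraPortMappings on the control-plane) are also visible from Docker:

docker ps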

Deploy a Sample App

kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
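
Before opening the UI, it may help to confirm the Portainer pod is running and check which NodePorts its Service exposes (30779 is the HTTPS NodePort in the upstream manifest, but verify on your cluster):

kubectl get pods -n portainer
kubectl get svc -n portainer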

Test Access

https://localhost:30779/

(With Kind, NodePort 30779 is only reachable on localhost if it is also mapped under extraPortMappings in the cluster config; otherwise use kubectl port-forward.)

As a further test, deploy a small nginx workload. Recent kubectl versions dropped --replicas from kubectl run, so create a Deployment instead:

kubectl create deployment my-nginx --image=nginx --replicas=2 --port=80
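
To confirm the nginx pods and reach them from the host regardless of NodePort mappings, a port-forward works; the names below match the my-nginx Deployment created above:

kubectl get deployment my-nginx
kubectl port-forward deployment/my-nginx 8080:80
# then browse to http://localhost:8080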

Delete Cluster

kind delete clusters <cluster-name>
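
If you do not remember the cluster name, Kind can list the clusters it manages:

kind get clusters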