Create Kubernetes cluster using Kubeadm on Ubuntu 22.04 LTS

In this article, we are going to learn how to create a Kubernetes cluster using kubeadm on Ubuntu 22.04 LTS and join a worker node to the cluster.

Prerequisites:

  • 2 or 3 Ubuntu 22.04 LTS systems with a minimal installation
  • Minimum 2 CPUs and 3 GB RAM per node
  • Swap disabled on all nodes
  • SSH Access with sudo privileges

Firewall Ports/Inbound Traffic Ports for Kubernetes Cluster

Control-plane node(s)

Protocol   Direction   Port Range   Purpose                   Used By
TCP        Inbound     6443*        Kubernetes API server     All
TCP        Inbound     2379-2380    etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250        Kubelet API               Self, Control plane
TCP        Inbound     10251        kube-scheduler            Self
TCP        Inbound     10252        kube-controller-manager   Self

Worker node(s)

Protocol   Direction   Port Range    Purpose              Used By
TCP        Inbound     10250         Kubelet API          Self, Control plane
TCP        Inbound     30000-32767   NodePort Services†   All
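
If a host firewall such as UFW is in use (an assumption; adjust the commands for whichever firewall you run), the ports above can be opened like this on the control-plane node:

sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp

And on the worker nodes:

sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp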

You can clone the repository for reference.

git clone https://github.com/techiescamp/kubeadm-scripts

Step #1: Let iptables see bridged traffic

Execute the following commands on all the nodes for IPtables to see bridged traffic.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
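
You can confirm that the modules are loaded and the sysctl values took effect, for example:

lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward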

Step #2: Disable swap on all the nodes

For kubeadm to work properly, you need to disable swap on all the nodes using the following command.

sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
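
The crontab entry above re-disables swap on every reboot. Alternatively, swap can be disabled permanently by commenting out the swap entry in /etc/fstab — a sketch that assumes a standard fstab layout:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab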

Step #3: Install CRI-O Runtime on All the Nodes

Create the .conf file to load the modules at bootup

cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF

Execute the following commands to enable overlayFS & VxLan pod communication.

sudo modprobe overlay
sudo modprobe br_netfilter

Set up required sysctl params, these persist across reboots.

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Reload the parameters.

sudo sysctl --system
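
Note that the commands above only prepare the kernel for the container runtime; the CRI-O packages themselves are installed by the repository's common.sh script (run in Step #4 below), or manually from the CRI-O packages matching your Kubernetes version. Once installed, you can confirm the runtime is active — assuming CRI-O's standard systemd unit name:

sudo systemctl enable crio --now
sudo systemctl status crio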


Step #4: Install Kubeadm, Kubelet, and Kubectl on All Nodes

Install the required dependencies

Update your system packages:

sudo apt-get update

Install apt-transport-https and curl:

sudo apt-get install -y apt-transport-https curl

Add the Kubernetes GPG key and apt repository:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Let's install kubelet, kubeadm, and kubectl:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
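
Note: the legacy apt.kubernetes.io repository has been deprecated and frozen. If the packages above cannot be found, the community-hosted pkgs.k8s.io repository can be used instead — a sketch assuming Kubernetes v1.27 (adjust the version in the URLs as needed):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl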

Run the common.sh script (located in kubeadm-scripts/scripts in the cloned repository) on all nodes:

./common.sh

Next, you need to edit the master.sh file on the control-plane node:

sudo nano master.sh
PUBLIC_IP_ACCESS="false"

Change the value from false to true:

PUBLIC_IP_ACCESS="true"

Now run the master.sh file

./master.sh
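
If you prefer to initialize the control plane without the script, the essential command behind master.sh is a kubeadm init with a pod network CIDR (the exact flags used by the script may differ); the same command appears in the Troubleshooting section below:

sudo kubeadm init --pod-network-cidr 10.0.0.0/16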

Use the following commands from the output to create the kubeconfig on the master node so that you can use kubectl to interact with the cluster API:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
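
If you are running these commands as the root user, you can instead point kubectl at the admin config directly:

export KUBECONFIG=/etc/kubernetes/admin.conf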

Now, verify the kubeconfig by executing the following kubectl command to list all the pods in the kube-system namespace.

kubectl get po -n kube-system

Run the following command:

kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
ip-172-31-1-99     Ready    control-plane   3h40m   v1.27.2
ip-172-31-13-187   Ready    <none>          3h37m   v1.27.2

Step #5: Join Worker Node to the Cluster

Next, join the worker nodes to the master using the kubeadm join command printed in the master.sh output (your token and hash will differ):

sudo kubeadm join 172.31.6.177:6443 --token vr5rat.seyprj6jvw4xy43m --discovery-token-ca-cert-hash sha256:4c9b53eb03744b4cf21c5bdacd712024eb09030561714cc5545838482c8017b3

Output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
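
If the join command from the master.sh output is no longer available, a fresh one can be generated on the control-plane node:

sudo kubeadm token create --print-join-command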

Check the status of all nodes from the control-plane node:

kubectl get nodes

Output:

NAME               STATUS   ROLES     AGE     VERSION
ip-172-31-16-180   Ready    master    3m19s   v1.20.5
ip-172-31-16-86    Ready    worker1   6m15s   v1.20.5
ip-172-31-21-34    Ready    worker2   3m23s   v1.20.5

To verify the pods across all namespaces:

kubectl get pods --all-namespaces

Output:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-7sw4r                  1/1     Running   0          6m46s
kube-system   coredns-6955765f44-nwwx5                  1/1     Running   0          6m46s
kube-system   etcd-ip-172-31-16-86                      1/1     Running   0          6m53s
kube-system   kube-apiserver-ip-172-31-16-86            1/1     Running   0          6m53s
kube-system   kube-controller-manager-ip-172-31-16-86   1/1     Running   0          6m53s
kube-system   kube-proxy-b5vht                          1/1     Running   0          4m5s
kube-system   kube-proxy-cm6r4                          1/1     Running   0          4m1s
kube-system   kube-proxy-jxr9z                          1/1     Running   0          6m45s
kube-system   kube-scheduler-ip-172-31-16-86            1/1     Running   0          6m53s
kube-system   weave-net-99tsd                           2/2     Running   0          93s
kube-system   weave-net-bwshk                           2/2     Running   0          93s
kube-system   weave-net-g8rg8                           2/2     Running   0          93s

With that, the Kubernetes cluster installation on Ubuntu is complete.

Step #6: Deploy Sample Nginx Microservice on Kubernetes

Let's create a deployment named nginx-deployment on the master node using YAML.

sudo nano nginx-deploy.yaml

The Deployment YAML file should look like below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80

Apply the manifest:

kubectl apply -f nginx-deploy.yaml

Let's check the Pod status:

kubectl get pods
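
You can also inspect the deployment itself to confirm the desired number of replicas is available:

kubectl get deployment nginx-deployment
kubectl describe deployment nginx-deployment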

Expose the Nginx deployment using a Kubernetes NodePort service on port 32000:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  selector: 
    app: nginx-app
  type: NodePort  
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32000
EOF 

Now access the Nginx service using a worker node IP address and port 32000.
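
For example, you can verify the service and test it from the command line (replace <worker-node-ip> with the IP of one of your worker nodes):

kubectl get svc nginx-app
curl http://<worker-node-ip>:32000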


Conclusion:

In this article, we covered how to install a Kubernetes cluster on Ubuntu 22.04 LTS with kubeadm: initializing the master node, creating the pod network, joining worker nodes to the master, creating a deployment using YAML, and checking the status of nodes, pods, and namespaces.

Troubleshooting:

Error 1:

sudo kubeadm init --pod-network-cidr 10.0.0.0/16
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Error 2:

sudo kubeadm join 3.108.219.16:6443 --token jql43k.gx74vfz5wlevah3u --discovery-token-ca-cert-hash sha256:6cd2cd4e81b9b956b8f724e2a0c3c0986ea6b50eed6a5e62af840fbdef7f0217
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

https://github.com/containerd/containerd/issues/4581

sudo rm /etc/containerd/config.toml

sudo systemctl restart containerd

sudo kubeadm init
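
The preflight errors shown above (ports already in use, manifest and config files already existing) usually mean kubeadm has already been run on the node. In that case, clear the previous state before running kubeadm init or kubeadm join again:

sudo kubeadm reset -f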

We have covered how to install a Kubernetes cluster on Ubuntu 22.04 LTS.

Reference:

Kubernetes install kubeadm official page

