Install Kubernetes
sudo apt update
sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg lsb-release
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
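As an optional sanity check, you can confirm that both modules are actually loaded:

lsmod | grep -E 'overlay|br_netfilter'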
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

Next, add Docker's apt repository, which provides the containerd.io package:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install -y containerd.io runc
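You can verify that the runtime installed correctly by printing its version:

containerd --version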
Configure containerd to use the systemd cgroup driver:

cat <<EOF | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
EOF
sudo sed -i 's/^disabled_plugins =/#disabled_plugins =/g' /etc/containerd/config.toml
sudo systemctl restart containerd
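After the restart, you can confirm that the cgroup setting was picked up; containerd's "config dump" subcommand prints the merged configuration it is actually running with:

sudo containerd config dump | grep SystemdCgroup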
sudo systemctl status containerd

Alternatively, you can install containerd manually from its GitHub releases. First, download the latest version of containerd and extract the files to the /usr/local/ directory.
# Set Containerd version
export "containerd_version=1.6.8"
# Download Containerd
wget "https://github.com/containerd/containerd/releases/download/v${containerd_version}/containerd-${containerd_version}-linux-amd64.tar.gz"
# Extract Containerd
sudo tar Czxvf /usr/local "containerd-${containerd_version}-linux-amd64.tar.gz"

Download the containerd systemd service file:
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

Generate the default configuration and switch the cgroup driver to systemd:
sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Install the containerd service:
sudo mv containerd.service /usr/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

View the containerd service status:
sudo systemctl status containerd

runC is an open-source container runtime for spawning and running containers on Linux according to the OCI specification.
Download the latest version of runC from GitHub and install it as /usr/local/sbin/runc.
export "runc_version=1.1.4"
wget "https://github.com/opencontainers/runc/releases/download/v${runc_version}/runc.amd64"
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
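A quick check that the new binary is the one found first on the PATH (on a default Ubuntu PATH, /usr/local/sbin precedes /usr/sbin):

which runc
runc --version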
export "cni_version=1.1.1"
sudo mkdir -p /opt/cni/bin/
sudo wget "https://github.com/containernetworking/plugins/releases/download/v${cni_version}/cni-plugins-linux-amd64-v${cni_version}.tgz"
sudo tar Cxzvf /opt/cni/bin "cni-plugins-linux-amd64-v${cni_version}.tgz"
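The archive unpacks the standard plugins (bridge, loopback, host-local, portmap, and others) into place; you can list them to confirm:

ls /opt/cni/bin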
Restart the containerd service:

sudo systemctl restart containerd

nerdctl is a Docker-compatible command-line interface for containerd. It is not part of the core containerd package, so it has to be installed separately.
Download the latest version of nerdctl from GitHub and extract it to the /usr/local/bin directory.
export "nerdctl_version=0.22.2"
wget "https://github.com/containerd/nerdctl/releases/download/v${nerdctl_version}/nerdctl-${nerdctl_version}-linux-amd64.tar.gz"
sudo tar Cxzvf /usr/local/bin "nerdctl-${nerdctl_version}-linux-amd64.tar.gz"
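As an optional smoke test (it pulls the small hello-world image from Docker Hub), you can run a container through nerdctl to exercise containerd, runC, and the CNI plugins together:

sudo nerdctl run --rm docker.io/library/hello-world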
Now that containerd is installed on both our nodes, we can start our Kubernetes installation. Add Google's apt key and the Kubernetes repository, then install the tools:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
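The Kubernetes documentation recommends holding these packages so that a routine apt upgrade does not move them to an incompatible version; you can also confirm what was installed:

sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version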
sudo hostnamectl set-hostname "master-node"
exec bash

On the worker node, run:
sudo hostnamectl set-hostname "w-node1"
exec bash

Set the hostnames in the /etc/hosts file of the worker:
cat <<EOF | sudo tee -a /etc/hosts
160.119.248.60 master-node
160.119.248.162 w-node1
EOF

Set up the following firewall rules on the master node:
sudo ufw allow 6443/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
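If ufw is active, you can confirm that the new rules took effect:

sudo ufw status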
Set up the following firewall rules on the worker node:

sudo ufw allow 10251/tcp
sudo ufw allow 10255/tcp
sudo ufw reload

Swap must be disabled for kubelet to work, so run the following on both nodes:
sudo swapoff -a
sudo systemctl enable kubelet
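Note that swapoff -a only disables swap until the next reboot. To make the change permanent, comment out any swap entries in /etc/fstab, for example:

sudo sed -i '/ swap / s/^/#/' /etc/fstab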
On the master node, execute the following command to initialise the Kubernetes cluster:

sudo kubeadm init

Since we will deploy Flannel as the pod network below, and Flannel's default manifest assumes the pod CIDR 10.244.0.0/16, you may want to pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

The process can take a few minutes. The last few lines of your output should look similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.105:6443 --token abcdef.abcdefghijklmnop \
--discovery-token-ca-cert-hash sha256:8dfad80a388f4c93a9d5fb6d0b5b3ceda08305bac044ec8417e9f4f3c473893d
Copy the kubeadm join command from the end of the above output. We will use it to add worker nodes to our cluster.
If you forgot to copy or misplaced the command, don’t worry; you can get it back by executing this command:
sudo kubeadm token create --print-join-command

As indicated by the output above, we need to create a directory and claim its ownership to start managing our cluster.
Run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
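At this point kubectl should be able to reach the API server without sudo; a quick check:

kubectl cluster-info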
We will use Flannel to deploy a pod network to our cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

You should see the following output after running the above command:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
You should be able to verify that your master node is ready now:
kubectl get nodes

Output:
NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   90s   v1.23.3
…and that all the pods are up and running:
kubectl get pods --all-namespaces

Output:
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-957326482-zdgsd               0/1     Running   0          22m
kube-system   coredns-957326482-srfgh               0/1     Running   0          22m
kube-system   etcd-master-node                      1/1     Running   0          22m
kube-system   kube-apiserver-master-node            1/1     Running   0          22m
kube-system   kube-controller-manager-master-node   1/1     Running   0          22m
kube-system   kube-flannel-ds-dnjsd                 0/1     Running   0          22m
kube-system   kube-flannel-ds-dfjyf                 0/1     Running   0          22m
kube-system   kube-proxy-jfbur                      1/1     Running   0          22m
kube-system   kube-proxy-sdfeh                      1/1     Running   0          20m
kube-system   kube-scheduler-master-node            1/1     Running   0          22m
At this point, we are ready to add nodes to our cluster.
Copy your own kubeadm join command from the initialisation step above and run it on the worker node:
kubeadm join 192.168.2.105:6443 --token abcdef.abcdefghijklmnop \
--discovery-token-ca-cert-hash sha256:8dfad80a388f4c93a9d5fb6d0b5b3ceda08305bac044ec8417e9f4f3c473893d
Output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run kubectl get nodes on the control-plane to see this node join the cluster.
To verify that the worker node indeed got added to the cluster, execute the following command on the master node:
kubectl get nodes

Output:
NAME          STATUS   ROLES                  AGE     VERSION
master-node   Ready    control-plane,master   2m54s   v1.25.0
w-node1       Ready    <none>                 27s     v1.25.0
You can set the role for your worker node using:
kubectl label node w-node1 node-role.kubernetes.io/worker=worker
Get nodes again to verify:
kubectl get nodes

Output:
NAME          STATUS   ROLES                  AGE     VERSION
master-node   Ready    control-plane,master   4m34s   v1.25.0
w-node1       Ready    worker                 1m24s   v1.25.0
To add more nodes, repeat this Add nodes step on more machines.
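As an optional end-to-end test (the nginx deployment here is just an example workload), you can deploy something and confirm that it gets scheduled onto the worker:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide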
That’s it! Your two-node Kubernetes cluster is up and running!
https://gist.github.com/AliKhadivi/d7067f886985dc5f4820387edeffb7bd