Kubernetes is an open-source tool for managing and orchestrating Docker containers across a cluster of servers. Kubernetes (abbreviated K8s) was developed by Google and donated to the Cloud Native Computing Foundation. Kubernetes separates the servers on which the Linux distribution is installed from the applications running on those servers.
Setting up a Kubernetes cluster requires one master and several additional nodes, known as worker nodes. From the master node, the cluster and its nodes are managed using the kubeadm and kubectl commands.
In this article, I’m going to install the latest version of Kubernetes, 1.14.0, on CentOS 7 using the kubeadm utility, on a KVM (Red Hat) virtualization system. We built three virtual machines with a CentOS 7 minimal installation: one server will be the master, and the other two will be worker nodes.
The following components will be installed on the master node:
• API Server – provides the Kubernetes API, which accepts JSON/YAML definitions;
• scheduler – performs scheduling tasks, such as placing containers on worker nodes depending on resource availability;
• controller manager – its main task is to monitor and adjust the state of the system, driving it from the current state to the desired one by launching pods;
• etcd – a key-value database that stores the cluster configuration and its status;
• kubectl utility – a command-line utility that connects to the API server on port 6443; it is used by the system administrator to create pods, services, etc.
The following components will be installed on the worker nodes:
• kubelet – an agent that runs on each worker node, connects to Docker, and is in charge of creating, launching, and deleting containers;
• kube-proxy – routes traffic to the right container based on the IP address and port number of the incoming request;
• pod – a group of containers that is launched on a single worker node (a quick way to check all of these components once the cluster is running is shown right after this list).
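Once the cluster is fully up at the end of this article, these components can be seen running as pods in the kube-system namespace, so a quick sanity check from the master (as a regular user with kubectl configured) might look like this:
$ kubectl get pods -n kube-system -o wide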
Infrastructure preparation
As I said, we need three virtual machines with a minimal CentOS 7 installed on them. Each of the three servers must have a static IP. To set one during installation, go to Network & Hostname -> Configure in the installation panel; in the IPv4 Settings tab, select Manual, click Add, and enter the desired IP from the range provided by KVM’s virbr0 virtual network (192.168.122.0/24). Fill in the Gateway and DNS Server values (a sample of the resulting ifcfg file is shown after the IP list below). Here you can also set the hostname of the machine (if not set here, it can be set later with the hostnamectl command).
Let’s suppose we have set up the internal IPs as follows:
192.168.122.200 – master
192.168.122.201 – worker1
192.168.122.202 – worker2
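If you prefer to configure the address after installation instead of from the installer, the same result can be obtained from an ifcfg file. Below is a minimal sketch for the master, assuming the interface is called eth0 (adjust the interface and file name to whatever your system actually uses; 192.168.122.1 is the default gateway and DNS of KVM’s virbr0 network):
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.122.200
PREFIX=24
GATEWAY=192.168.122.1
DNS1=192.168.122.1
# systemctl restart network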
After launching the three virtual machines, we follow the steps below on the MASTER and WORKER nodes. For convenience (so commands can easily be copied and pasted), I recommend connecting to each of the three machines over SSH.
Steps to be done on the MASTER node
1. Disable swap and SELinux
Once we connect to the instance chosen as the master, we must disable swap and SELinux:
# swapoff -a
We comment out the swap line in the /etc/fstab file so that it will not be mounted at the next reboot:
# vi /etc/fstab
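On a default CentOS 7 install the swap entry is usually an LVM volume, so the commented-out line will look roughly like this (the exact device name depends on your installation):
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
The same edit can also be scripted, for example with sed (assuming "swap" appears only on that one line):
# sed -i '/swap/ s/^/#/' /etc/fstab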
To begin with, we set SELinux to permissive mode, and then edit the /etc/selinux/config file:
# setenforce 0
# vi /etc/selinux/config
We change the corresponding value from enforcing to disabled:
SELINUX=disabled
We set the hostname of the machine and check:
# hostnamectl set-hostname kube-master
# bash
# cat /etc/hostname
2. Firewall
We set the firewall rules and load the br_netfilter module into the kernel:
# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10251/tcp
# firewall-cmd --permanent --add-port=10252/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --reload
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
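Note that the echo and the modprobe above only last until the next reboot. If you want these settings to persist, an optional extra step is to drop them into the module and sysctl configuration directories (the k8s.conf file names below are just a naming choice, not something required by Kubernetes):
# echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
# echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
# sysctl --system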
We will edit the /etc/hosts file and add the three lines below (this file is edited on the master and on each individual worker node):
# vi /etc/hosts
192.168.122.200 kube-master
192.168.122.201 kube-worker1
192.168.122.202 kube-worker2
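A quick way to confirm that the new names resolve correctly is to ping one of them, for example:
# ping -c 2 kube-worker1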
3. Set up the Kubernetes repository
The packages required to install K8s are not available in the CentOS 7 or RHEL repositories, so we need to add this repo manually on each machine in the cluster:
# vi /etc/yum.repos.d/kubernetes.repo
[Kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
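To confirm that yum picks up the new repository, you can list it, for example:
# yum repolist enabled | grep -i kubernetes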
4. Install Kubeadm and Docker
Once we have set up the K8s repository, we run the following command to install kubeadm and docker:
# yum install kubeadm docker -y
We start the kubelet and docker services, then enable them so that they start after a reboot:
# systemctl restart docker && systemctl enable docker
# systemctl restart kubelet && systemctl enable kubelet
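Before moving on, it may be worth confirming which versions were actually installed, for example:
# kubeadm version -o short
# docker --version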
5. Initialize the Master K8s server
We run the following command:
# kubeadm init
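With no arguments, kubeadm init picks the interface of the default route for the API server address. If the master has more than one network interface, you may want to pin the address explicitly, for example using the master IP from our setup:
# kubeadm init --apiserver-advertise-address=192.168.122.200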
Once kubeadm init finishes, we run the following commands as a regular user (enter the exit command to leave the root session; note the sudo, as admin.conf is readable only by root):
[root@kube-master ~]# exit
[mvps@kube-master ~]$ mkdir -p $HOME/.kube
[mvps@kube-master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[mvps@kube-master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Copy and save to a file the line that will later be used to join the worker nodes to the cluster; it will look similar to this:
kubeadm join 192.168.122.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
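If this join line is lost, it does not need to be reconstructed by hand; it can be regenerated at any time on the master, for example with:
# kubeadm token create --print-join-command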
6. Launching the network in the cluster
To view the current status of the cluster:
$ kubectl get nodes
$ kubectl get pods --all-namespaces
As we can see, the cluster currently consists only of the MASTER node, and it is not yet in the Ready state. For that, we need to launch the network pod so that all the containers and pods, no matter which node they run on, can see and communicate with each other. Kubernetes can use several add-ons for this (Flannel, Weave, Calico, Romana…). We will install Weave.
The following commands are run as a regular user:
$ export kubever=$(kubectl version | base64 | tr -d '\n')
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
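The manifest above deploys Weave as a DaemonSet (one pod per node) in the kube-system namespace, so a quick way to check that it has started, and later that it spreads to the workers, is:
$ kubectl get daemonset weave-net -n kube-system
$ kubectl get pods -n kube-system -o wide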
We run the same commands again to check the status:
$ kubectl get nodes
$ kubectl get pods --all-namespaces
The MASTER node has been configured; now it’s time to set up the two worker nodes.
Steps to be performed on each WORKER node
1. Disable swap and SELinux
For each worker, follow the steps described above for MASTER.
Set the hostname of each machine (if you did not set them when installing CentOS):
# hostnamectl set-hostname kube-worker1
# hostnamectl set-hostname kube-worker2
2. Firewall
For each worker, the following firewall rules are configured, and then the br_netfilter module is loaded:
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp
# firewall-cmd --permanent --add-port=6783/tcp
# firewall-cmd --reload
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
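To confirm that the ports really are open after the reload, the active rules can be listed, for example:
# firewall-cmd --list-ports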
3. Configure the Kubernetes repository
The K8s repo file is created on each node:
# vi /etc/yum.repos.d/kubernetes.repo
The following will be written inside the new file:
[Kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
4. Install kubeadm and docker
The following commands are run on both worker machines:
# yum install kubeadm docker -y
We start the kubelet and docker services, then enable them so that they start after a reboot:
# systemctl restart docker && systemctl enable docker
# systemctl restart kubelet && systemctl enable kubelet
5. Join the worker nodes to the cluster
On each worker we now run the join command that we copied above, after running kubeadm init on the MASTER:
# kubeadm join 192.168.122.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
We will receive a message similar to the one below confirming that the join was successful:
Node join complete
Run 'kubectl get nodes' on the master to see this machine join
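As an extra check on the worker itself, the kubelet should now be active and running, which can be confirmed with:
# systemctl status kubelet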
Verification after Cluster Completion
That’s about it. All that remains is to check the status of the new nodes that have joined the cluster. To do this, we run the commands below on the MASTER:
$ kubectl get nodes
$ kubectl get pods --all-namespaces
We have successfully installed Kubernetes 1.14.0 and joined two workers to the cluster; the whole job took less than an hour. You can add as many worker nodes as you want – the principle is the same.
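If you want one final smoke test that the workers can actually run workloads, you can launch a small deployment from the master and check where its pod gets scheduled; for example:
$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide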