What is Kubernetes

Kubernetes is an open-source tool for managing and orchestrating Docker containers across a cluster of servers. It was originally designed by Google and is now a project run by the Cloud Native Computing Foundation. Its purpose is to provide a "platform for automating deployment, scaling, and operation of application containers across clusters of hosts". It works with a wide range of container runtimes, including Docker.

Kubernetes separates the servers, on which a Linux distribution is installed, from the applications that run on those servers. Kubernetes was founded by Joe Beda, Brendan Burns and Craig McLuckie, who were quickly joined by other Google engineers, including Brian Grant and Tim Hockin; it was first announced by Google in mid-2014. Kubernetes defines a set of building blocks, or resources, that together provide a mechanism for deploying and maintaining applications.

Kubernetes is designed to be extensible, so that it can accommodate different workloads. This extensibility is largely provided by the Kubernetes API, which is used by internal components as well as by Kubernetes extensions and by the containers themselves. Setting up a Kubernetes cluster requires a master node and several worker nodes. From the master node, the cluster and its nodes are managed with the kubeadm and kubectl commands.
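
To make this concrete, a minimal sketch of the commands involved is shown below; the IP address, pod network range, token and hash are placeholders and depend on your own environment:

    # On the master node: initialise the cluster (the addresses are examples only)
    sudo kubeadm init --apiserver-advertise-address=192.168.1.10 --pod-network-cidr=10.244.0.0/16

    # On each worker node: join the cluster using the token printed by kubeadm init
    sudo kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

    # Back on the master: verify that all nodes have registered
    kubectl get nodes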

We have built three virtual servers running CentOS. One server will be the master, and the other two will be worker nodes. The following components will be installed on the master node:
1. API server – it exposes the Kubernetes API; resources are described in JSON/YAML files.
2. Scheduler – it schedules work, such as launching containers on worker nodes, based on the availability of resources.
3. Controller manager – its main task is to monitor the state of the system and drive it from the current state towards the desired one, for example by launching new pods.
4. etcd – a key-value database that stores the cluster configuration and its state.
5. kubectl utility – a command-line utility that connects to the API server on port 6443; the system administrator uses it to create pods, services and other resources.
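
As a quick sanity check, assuming kubectl has already been configured on the master, the commands below show whether the API server is reachable and how the other master components report their health:

    # Confirm that kubectl can reach the API server (it listens on port 6443 by default)
    kubectl cluster-info

    # Show the health of the scheduler, the controller manager and etcd
    kubectl get componentstatuses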

The following components will be installed on the worker nodes:
1. kubelet – an agent that runs on each worker node; it talks to Docker and takes care of creating, starting and deleting containers.
2. kube-proxy – it routes traffic to the appropriate container based on the IP address and port number of the incoming request.
3. pod – a group of containers launched together on a single worker node. To manage closely related containers as a single unit, Kubernetes defines this basic concept of a pod; imagine a pea pod holding several similar peas. The containers of a pod are placed on the same node and can share its resources.
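
The following sketch illustrates the pod concept; the pod name, container names and images are chosen only for the example:

    # A minimal pod with two containers, applied from the master node
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
      - name: web
        image: nginx
      - name: helper
        image: busybox
        command: ["sleep", "3600"]
    EOF

    # Both containers are scheduled onto the same worker node and share its network namespace
    kubectl get pod demo-pod -o wide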

But there are users who want to learn or test Kubernetes and do not have enough resources, or simply do not want to deal with the full K8s infrastructure and only want to try out applications or individual features. For them there is Minikube, a tool that makes it extremely simple to run a Kubernetes cluster locally. Minikube runs a single-node Kubernetes cluster inside a virtual machine on a personal computer. For this, the computer must have a hypervisor such as KVM or VirtualBox installed.
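
As a rough sketch, assuming VirtualBox is installed, a local cluster can be started and inspected like this:

    # Start a single-node Kubernetes cluster inside a VirtualBox virtual machine
    minikube start --driver=virtualbox

    # kubectl is automatically pointed at the Minikube cluster
    kubectl get nodes

    # Stop the virtual machine when finished
    minikube stop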

The biggest advantage of using Kubernetes is the automated, script-driven deployment of microservices.
Kubernetes is the most popular system for scalable production environments that must adapt, with a high degree of reliability, to changing load and traffic requirements.
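
As a simple illustration of this, assuming a working cluster and using an image name chosen only for the example, a microservice can be deployed and rescaled with a couple of commands:

    # Create a deployment for the microservice
    kubectl create deployment web --image=nginx

    # Scale it up when traffic increases
    kubectl scale deployment web --replicas=10

    # Watch the pods being distributed across the worker nodes
    kubectl get pods -l app=web -o wide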
