What is a Load Balancer


Dividing the server load means distributing certain processes and communications across multiple servers so that no single one becomes overloaded. In general, high-traffic websites use multiple servers containing the same information, and visitors are spread across all of them so that no single server is overwhelmed.

Load balancing is a networking method for distributing the workload of a server, a link (website), a hard drive, or workstation resources (shares, drives, etc.).
A load balancer (LB) aims to optimize resource usage, maximize availability, and minimize the response time between the client and the server (the accessed resource), while avoiding overloading the network equipment. Using an LB instead of a single piece of equipment or component also increases reliability through redundancy.

There are two types of load balancers: hardware (switches or DNS servers) and software (for example, Zen Load Balancer). The method presented here is the more accessible one, since it is a free solution available to any system administrator whose equipment is overworked by the traffic or data flow on the network.

The LB divides traffic between two or more network interfaces (NICs), based on Layer 3 or Layer 4 criteria.

Load balancers are a useful solution when we want a service to be “always on”, even when it is accessed externally by several thousand users (clients): ATMs, a bank’s servers, online store websites, and so on. For Internet services, software load balancing is usually the best method, because clients connect directly to a (defined) port to access the service. In this case, the LB takes over the client’s request and forwards it to a backend server; the backend responds to the LB, which then sends the requested information back to the client. This lets the LB hide the IP address of the server that actually holds the information (the backend server), so the end user never learns the real source of the data. It also prevents clients from connecting directly to an interface on the backend server, which can block possible attacks and increases its security.

Many load balancing software solutions also provide a backup if one LB goes down, keeping the service available through another load balancer. This gives the system administrator time to fix the problem without downtime, providing “high availability” for the client.

In this tutorial I will introduce several load balancing solutions using open source applications.

1. Using Nginx as a load balancer

Although it was not designed specifically for this purpose, Nginx can be used for load balancing. For example, you can set up a server running Nginx as a load balancer and place any number of web servers behind it (we can call them “nodes”).
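As a minimal sketch of such a setup, an nginx.conf along these lines forwards incoming requests to a pool of nodes; the backend addresses (192.0.2.11-13) are placeholders, not values from this article:

http {
    # Pool of backend "nodes"; the addresses below are placeholders.
    upstream backend_nodes {
        server 192.0.2.11:80;
        server 192.0.2.12:80;
        server 192.0.2.13:80;
    }

    server {
        listen 80;

        location / {
            # Each incoming request is proxied to one of the nodes above
            # (round-robin by default).
            proxy_pass http://backend_nodes;
        }
    }
}

By default Nginx rotates requests among the nodes; directives such as weight or ip_hash can change how the requests are distributed.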

2. Load balancing with HAProxy

The combination of FreeBSD + HAProxy can successfully replace a commercial load balancer that comes at a very high cost.
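A rough haproxy.cfg sketch for balancing HTTP traffic across three nodes might look like the following; the node names, addresses, and timeouts are placeholder assumptions, not values from this article:

global
    daemon
    maxconn 2048

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Accept client connections on port 80 and hand them to the backend pool.
frontend www
    bind *:80
    default_backend web_nodes

# The web server farm; HAProxy health-checks the nodes and rotates requests.
backend web_nodes
    balance roundrobin
    server node1 192.0.2.11:80 check
    server node2 192.0.2.12:80 check
    server node3 192.0.2.13:80 check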

3. Load balancing with packet filter on FreeBSD

By default, Packet Filter (pf) supports defining a web server farm across which incoming requests can be shared. It also supports “round-robin” and “sticky-address”.

Round-robin is a load balancing technique in which requests are handed out to the nodes of the farm in turn.

For example, let’s suppose that a company owns a very large website and wants to host it on three servers.

A load balancer is set up in front of these servers and distributes the incoming requests on port 80 among the three of them. With “round-robin”, the first user who accesses the website is sent to the first server, the second user is sent to the second server, and the third visitor is sent to the third server.

The remaining user requests are sent to the nodes in the same way, starting over from the beginning of the list once its end is reached.

Sticky-address is a mechanism whereby requests from the same source are sent to the same destination (the same path).

This behavior persists for as long as there are “states” for that source in the state table.

Specifically: connections from a single source will always be sent to the same web server (node).
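Putting both options together, a minimal pf.conf sketch for the three-server example above could look like this; the external interface name and server addresses are placeholder assumptions:

# Placeholder external interface and web server farm.
ext_if = "em0"
web_servers = "{ 192.0.2.11, 192.0.2.12, 192.0.2.13 }"

# Redirect incoming HTTP traffic to the farm, rotating through the servers
# (round-robin) while keeping each source on the same node for as long as
# its states exist (sticky-address).
rdr on $ext_if proto tcp from any to any port 80 -> $web_servers round-robin sticky-address

# Let the redirected traffic pass through the filter.
pass in on $ext_if proto tcp from any to $web_servers port 80 keep state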

4. Load balancing with Apache

Using Apache, load balancing can be done with the mod_proxy and mod_proxy_balancer modules. The available balancing algorithms are Request Counting, Weighted Traffic Counting, and Pending Request Counting, and the algorithm in use is selected with the lbmethod parameter.
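As a hedged sketch, assuming the relevant modules are loaded and using placeholder backend addresses, an httpd.conf fragment like the one below balances requests by Request Counting (lbmethod=byrequests); bytraffic and bybusyness select the other two algorithms:

# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and
# mod_lbmethod_byrequests to be loaded.

<Proxy "balancer://webfarm">
    # Placeholder backend nodes.
    BalancerMember "http://192.0.2.11:80"
    BalancerMember "http://192.0.2.12:80"
    BalancerMember "http://192.0.2.13:80"
    # Request Counting; use bytraffic or bybusyness for the other algorithms.
    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass        "/" "balancer://webfarm/"
ProxyPassReverse "/" "balancer://webfarm/"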

About the author

By Ilias Spiros
