
Nginx as a Load Balancer


NGINX ("Engine X") is a popular open-source web server that can be configured to accelerate content and application delivery, improve security, and provide availability and scalability for some of the busiest websites on the Internet.

It can also be configured as a reverse proxy, load balancer, mail proxy, and HTTP cache. Nginx serves as a single point of entry for a distributed web application that runs on several different servers. 

Why do we need a Load Balancer?

As a load balancer, Nginx scales out your application by distributing work across several servers, increasing performance. A load balancer efficiently distributes incoming network traffic across a group of backend servers. It acts as a traffic router that sits in front of your servers and routes client requests across all servers capable of fulfilling them, in a way that maximizes speed and capacity utilization and ensures that no single server is overworked, which could degrade performance. If one of the backend servers goes down, the load balancer redirects client requests to the remaining online servers; whenever a new server is added to the group, the load balancer automatically starts sending requests to it.


The functions of a load balancer are as follows:

  • Effectively distributes client requests or network load across multiple servers.
  • Ensures high availability and reliability by sending requests only to servers that are online.
  • Provides the flexibility to add or remove servers as demand dictates.

How to Install Nginx?

To install Nginx on CentOS/RedHat Linux:

  1. Install the EPEL repository:
     sudo yum install epel-release
  2. Update the repository:
     sudo yum update
  3. Install NGINX Open Source:
     sudo yum install nginx
  4. Verify the installation:
     sudo nginx -v

To install Nginx on Ubuntu Linux:

  1. Update the Ubuntu repository information:
     sudo apt-get update
  2. Install the package:
     sudo apt-get install nginx
  3. Verify the installation:
     sudo nginx -v
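The two install paths above can be wrapped in a small shell sketch that prints the right command for the detected package manager (a hedged example; package names and repositories may differ on your distribution):

```shell
#!/bin/sh
# Sketch: print the nginx install command for the detected package manager.
# Assumes apt-get (Debian/Ubuntu) or yum (CentOS/RedHat); otherwise prints a hint.
install_hint() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "sudo apt-get update && sudo apt-get install nginx"
  elif command -v yum >/dev/null 2>&1; then
    echo "sudo yum install epel-release && sudo yum install nginx"
  else
    echo "install nginx with your distribution's package manager"
  fi
}
install_hint
```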

 

Configuring Nginx Server Block

Note: “VirtualHost” is an Apache term. NGINX does not have virtual hosts; it has “Server Blocks” that use the server_name and listen directives to bind to TCP sockets.

Note: create a directory to serve as the document root:

mkdir -p /var/www/devcloudgeek.com

 

Server Blocks, Serving Static Files

server {
    server_name www.devcloudgeek.com;
    access_log /var/log/nginx/devcloudgeek.access.log main;
    root /var/www/devcloudgeek.com;
}
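To avoid copy-paste errors, the server block above can be written out with a heredoc and given a quick sanity check. This is a sketch: the real file would live under /etc/nginx/conf.d/ and need sudo, so a local file is used here.

```shell
#!/bin/sh
# Sketch: write the static-file server block to a local file and sanity-check it.
# In practice the file belongs under /etc/nginx/conf.d/ and requires sudo.
cat > devcloudgeek.conf <<'EOF'
server {
    server_name www.devcloudgeek.com;
    access_log /var/log/nginx/devcloudgeek.access.log main;
    root /var/www/devcloudgeek.com;
}
EOF
# braces must balance: count opens minus closes (expect 0)
awk '{ for (i = 1; i <= length($0); i++) { c = substr($0, i, 1)
       if (c == "{") n++; if (c == "}") n-- } } END { print n+0 }' devcloudgeek.conf
```

With nginx installed, `sudo nginx -t` performs the authoritative syntax check.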

 

A Default “Catch-All” Server Block

  server {
    listen 80 default_server;
    server_name _; # This is just an invalid value that will never trigger on a real hostname.
    access_log logs/default.access.log main;
    server_name_in_redirect off;
    root /var/www/default/htdocs;
  }

Future Reference

Wildcard Subdomains in a Parent Folder

This is an easy way to keep adding new subdomains, or to add new domains automatically when DNS records are pointed at the server. Note that FCGI is included here as well; if you only want to serve static files, strip out the FCGI config and change the default document to index.html. Rather than creating a new vhost.conf file for every domain, just create one of these:

server {
  # Replace this port with the right one for your requirements
  listen 80 default_server;  #could also be 1.2.3.4:80
  # Multiple hostnames separated by spaces.  Replace these as well.
  server_name star.yourdomain.com *.yourdomain.com; # Alternately: _
  root /PATH/TO/WEBROOT;
  error_page 404 errors/404.html;
  access_log logs/star.yourdomain.com.access.log;
  index index.php index.html index.htm;
  # static file 404's aren't logged and expires header is set to maximum age
  location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
    access_log off;
    expires max;
 }
  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_intercept_errors on;
    # By all means use a different server for the fcgi processes if you need to
    fastcgi_pass   127.0.0.1:YOURFCGIPORTHERE;
 }
  location ~ /\.ht {
    deny all;
 }
}

Configure Nginx as a Load Balancer:

Once Nginx is properly installed and its service is running, we can configure it as a load balancer so that it forwards client requests to a group of servers. How requests are distributed depends on which load-balancing method (algorithm) is used in the configuration.

Create a load balancer configuration file inside the /etc/nginx/conf.d/ directory:

sudo nano /etc/nginx/conf.d/load-balancer.conf

First, specify the group of servers with the upstream directive, which is placed in the http context.

In load-balancer.conf, we define two segments: upstream and server.

upstream myapp1 {
    server srv1.devcloudgeek.com;
    server srv2.devcloudgeek.com;
    server srv3.devcloudgeek.com;
}

server {
    listen 80;
    server_name www.lbserver.com lbserver.com;
    location / {
        proxy_pass http://myapp1;
    }
}
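These steps can be scripted. The sketch below writes the configuration to a local file and greps for the two required segments; a real deployment would write to /etc/nginx/conf.d/load-balancer.conf with sudo and validate with `sudo nginx -t`.

```shell
#!/bin/sh
# Sketch: generate load-balancer.conf locally; the real path is /etc/nginx/conf.d/.
cat > load-balancer.conf <<'EOF'
upstream myapp1 {
    server srv1.devcloudgeek.com;
    server srv2.devcloudgeek.com;
    server srv3.devcloudgeek.com;
}

server {
    listen 80;
    server_name www.lbserver.com lbserver.com;
    location / {
        proxy_pass http://myapp1;
    }
}
EOF
# both top-level segments must be present
grep -c -E '^(upstream|server) ' load-balancer.conf   # → 2
```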

After saving the file, disable or remove default.conf and restart the Nginx service:

sudo systemctl restart nginx

Load Balancer Methods:

  1. Round Robin

Nginx uses the round-robin algorithm by default if no other method is defined. Each server receives requests in turn, so the number of requests is balanced equally across the group.

upstream myapp1 {
    server srv1.devcloudgeek.com;
    server srv2.devcloudgeek.com;
    server srv3.devcloudgeek.com;
}

server {
    listen 80;
    server_name www.lbserver.com lbserver.com;
    location / {
        proxy_pass http://myapp1;
    }
}
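The rotation can be illustrated with a toy shell sketch; nginx does this internally, and the server names simply echo the config above:

```shell
#!/bin/sh
# Toy round-robin: hand out six requests to three backends in rotation.
servers="srv1 srv2 srv3"
i=0
for req in 1 2 3 4 5 6; do
  set -- $servers
  shift $(( i % 3 ))          # rotate through the list
  echo "request $req -> $1.devcloudgeek.com"
  i=$(( i + 1 ))
done
```

Requests 1, 4 go to srv1; 2, 5 to srv2; 3, 6 to srv3.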
  2. Least Connections:

This method directs each request to the server with the fewest active connections at that time. It distributes load more fairly than round robin for applications where some requests take longer to complete.

To enable the least-connections balancing method, add the least_conn directive to your upstream section, as shown in the example below.

upstream myapp1 {
    least_conn;
    server srv1.devcloudgeek.com;
    server srv2.devcloudgeek.com;
    server srv3.devcloudgeek.com;
}

server {
    listen 80;
    server_name www.lbserver.com lbserver.com;
    location / {
        proxy_pass http://myapp1;
    }
}
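The selection rule itself is simple and can be sketched in shell. The active-connection counts below are invented for illustration; nginx tracks the real ones internally.

```shell
#!/bin/sh
# Toy least-connections pick over hypothetical counts (nginx tracks real ones):
# sort backends by active connections, take the smallest.
printf 'srv1.devcloudgeek.com 12\nsrv2.devcloudgeek.com 3\nsrv3.devcloudgeek.com 7\n' \
  | sort -k2 -n | head -n1 | cut -d' ' -f1   # → srv2.devcloudgeek.com
```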
  3. IP Hash:

This method ensures that requests from the same client address are handled by the same server unless that server is unavailable. IP hashing uses the visitor's IP address as a key to decide which server should handle the request. This directs a visitor to the same server every time, provided the server is available and the visitor's IP address has not changed.

To use this method, add the ip_hash directive to your upstream segment, as in the example below.

If a server fails, the period for which it is assumed to be unavailable is controlled by the fail_timeout parameter, which is set to 10 seconds by default.

upstream myapp1 {
    ip_hash;
    server srv1.devcloudgeek.com;
    server srv2.devcloudgeek.com;
    server srv3.devcloudgeek.com;
}

server {
    listen 80;
    server_name www.lbserver.com lbserver.com;
    location / {
        proxy_pass http://myapp1;
    }
}
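The stickiness can be illustrated with a toy hash over the client IP. This is a simplification: nginx's real ip_hash hashes the first three octets of the IPv4 address, whereas this sketch just sums all four.

```shell
#!/bin/sh
# Toy ip_hash: sum the octets and map to one of three backends.
# (Illustration only; nginx's real hash function differs.)
pick_server() {
  echo "$1" | awk -F. '{ print "srv" (($1 + $2 + $3 + $4) % 3 + 1) ".devcloudgeek.com" }'
}
pick_server 192.168.10.55
pick_server 192.168.10.55   # same IP -> same backend every time
```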

Important Notes

Make entries in the /etc/hosts file so the domain names resolve:

192.168.10.40 www.srv1.devcloudgeek.com srv1.devcloudgeek.com
192.168.10.41 www.srv2.devcloudgeek.com srv2.devcloudgeek.com
192.168.10.40 www.lbserver.com lbserver.com

Restart the Nginx service and open the firewall for the web server:

sudo systemctl enable nginx
sudo systemctl start nginx
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

Conclusion:

In conclusion, Nginx stands out as a powerful and versatile solution, not only as an open-source web server but also as a robust load balancer. With its ability to efficiently distribute incoming network traffic across a group of backend servers, Nginx plays a crucial role in enhancing application performance, ensuring high availability, and providing scalability for dynamic web applications.

The installation process, as outlined for both CentOS/RedHat and Ubuntu Linux, is straightforward, allowing users to quickly set up Nginx and begin reaping the benefits of its capabilities. Configuring Nginx as a load balancer involves defining server blocks, specifying upstream server groups, and utilizing various load-balancing methods to optimize traffic distribution.

Nginx’s load balancing methods, such as Round Robin, Least Connection, and IP Hash, offer flexibility and adaptability to different application requirements. These methods ensure that requests are intelligently distributed among backend servers, optimizing response times and preventing server overloads.

In the provided guide, we also touched upon wildcard subdomains, server block configurations, and future considerations for easy scalability. The detailed steps for configuring Nginx as a load balancer, including creating a configuration file and specifying upstream servers, enable users to tailor their setup to specific needs.

By incorporating Nginx into your infrastructure, you not only enhance the reliability and availability of your web applications but also gain a powerful tool for managing and optimizing traffic distribution. As technology evolves, Nginx continues to prove itself as an essential component for achieving high-performance web architectures.

 
