Pros and Cons of Configuring a Load Balancer for Sticky Sessions


What is Load Balancing and Why is it Necessary?

Load balancing is a method of distributing incoming network traffic across multiple servers. A load balancer sits between client devices and backend servers: it receives each incoming request and forwards it to an available, healthy server.

Load balancers provide the following benefits:

  • Efficiency: Load balancers distribute client requests across multiple servers, preventing any single server from becoming overloaded.
  • Scalability: You can add new servers to handle an increase in network traffic to your application.
  • Flexibility: Servers can be added or removed based on your application’s needs.
  • High availability: If a server fails, the load balancer routes requests to the remaining healthy servers, so the application stays reachable.

Load Balancing Methods: Stateless and Stateful

You can perform load balancing in two ways—stateless and stateful. If the load balancer does not keep track of any session information, it is stateless load balancing. Consider the example of a static HTML website without a login page. A user would never notice if they were randomly redirected to a different server instance when navigating across the site. The same can be said for a site built on WordPress, as long as the site does not require a user to log in. However, if a site or application needs to maintain continuity for a user from request to request, this requires stateful load balancing.
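Stateless distribution can be illustrated with a minimal Python sketch of round-robin scheduling, the simplest stateless strategy (the server names here are hypothetical placeholders, not part of any real deployment):

```python
from itertools import cycle

# Hypothetical pool of backend servers.
servers = ["web-1", "web-2", "web-3"]

def make_round_robin(pool):
    """Return a picker that hands each request to the next server in rotation.

    The picker keeps no per-client state: identical clients can land on
    different servers on successive requests, which is harmless for a
    static site but breaks server-local sessions.
    """
    rotation = cycle(pool)
    def pick(_request):
        return next(rotation)
    return pick

pick = make_round_robin(servers)

# Four requests from the SAME client land on three different servers.
assignments = [pick({"client": "alice"}) for _ in range(4)]
```

Because the rotation ignores who is asking, `assignments` cycles through `web-1`, `web-2`, `web-3`, and back to `web-1`, which is exactly the behavior that stateful applications cannot tolerate.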

What are Sticky Sessions?

Most websites maintain continuity of state using a session. When a client makes its first request, the server creates a session object. This object may be stored in the server's RAM, in a file on the server, in a database, or passed back to the client in an HTTP cookie. All subsequent requests from that client use the same session object. When the session object lives in a single server's RAM or file system, the only way for the client to continue the session is for its next request to reach that same server instance.

When using a load balancer, more than one server responds to requests. So, what happens if the load balancer routes a client's second request to a server that does not hold that client's session object? The session data is missing on the new server, so the user can lose state, for example, being logged out or losing the contents of a shopping cart. In this scenario, the load balancer should send all requests from a particular user session to the same server. This behavior is referred to as session stickiness, or session persistence.
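One common way to implement stickiness is source-IP affinity: hash the client's address and map it onto the server pool, so the same client is always routed to the same backend. The following is a minimal Python sketch of that idea (the server names and IP address are hypothetical):

```python
import hashlib

# Hypothetical pool of backend servers.
servers = ["web-1", "web-2", "web-3"]

def sticky_pick(client_ip, pool):
    """Map a client IP to a fixed server in the pool.

    Hashing the IP makes the choice deterministic: every request from
    this client lands on the same backend, so a session object stored
    in that server's RAM remains reachable.
    """
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

first = sticky_pick("203.0.113.7", servers)
later = sticky_pick("203.0.113.7", servers)
# first and later are always the same server.
```

Note the trade-off this sketch makes visible: the assignment is tied to the pool size, so adding or removing a server reshuffles many clients. Production load balancers mitigate this with techniques such as consistent hashing.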

Pros and Cons of Sticky Sessions

Pros:

  1. More efficient use of data and memory. Since you are persisting data to one server, you are not required to share the persisted data across your application’s servers. Similarly, data stored in a RAM cache can be looked up once and reused.

  2. Implementing sticky sessions on a load balancer does not require any changes to your application. Your sticky session configurations are limited to the tool that you choose to use to balance your site’s web traffic.

Cons:

  1. Limits your application's scalability, because the load balancer cannot distribute load evenly: it must honor existing session assignments rather than choosing the least-loaded server for each request.

  2. If a server goes down, the sessions pinned to it are lost, along with any important user information they contain.

Tools Used for Load Balancing

The popular open-source web server, NGINX, can be used as a load balancer to support your web services. NGINX provides extensive documentation to get you started installing and configuring it to load balance traffic to backend servers.
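As a sketch of what such a configuration can look like, open-source NGINX provides the `ip_hash` directive for source-IP session persistence in an upstream group. The addresses below are documentation placeholders, not real servers:

```nginx
http {
    upstream backend {
        # ip_hash routes each client IP to the same backend server,
        # giving simple session persistence with no application changes.
        ip_hash;
        server 192.0.2.10;
        server 192.0.2.11;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```

Without the `ip_hash` line, NGINX defaults to round-robin distribution, which is the stateless behavior described earlier.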

Linode offers a load balancing service called NodeBalancers. Using load balancers as a service (LBaaS) to route your server’s web traffic reduces the amount of configuration you need to worry about. This allows you to focus on developing your application, and take advantage of built-in point-and-click functionality.

If you are using Kubernetes to run your containerized applications, load balancers help you expose your cluster’s resources to the public internet and route traffic to your cluster’s nodes. If you are using Linode’s managed Kubernetes service, LKE, you can configure NodeBalancers using annotations. You can also use NGINX to configure load balancing via ingress on Kubernetes.
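In Kubernetes, client-IP stickiness can also be requested directly on a Service via the standard `sessionAffinity` field. The manifest below is a minimal sketch; the Service and app names are hypothetical, and on LKE a Service of type `LoadBalancer` is what provisions a NodeBalancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  # Route all requests from a given client IP to the same Pod.
  sessionAffinity: ClientIP
  ports:
    - port: 80
      targetPort: 8080
```

`sessionAffinity: ClientIP` applies stickiness at the Service level, independent of whatever affinity settings the external load balancer itself supports.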
