How do I Host a Highly Available Website On Linode?

Linode Staff

I have a website I'm hosting on Linode right now. I've had some downtime recently and I want to know how I can avoid that in the future by making my site Highly Available.

1 Reply

Downtime for a website can be a pain for you and your users, especially if you're hosting a critical service. As you mentioned, one of the best ways to avoid downtime is by employing the concept of "High Availability" (HA). To make your site highly available, you eliminate single points of failure (one instance that does everything) in favor of several separate instances configured for redundancy, so the site stays up if anything happens to one of them.

This guide titled Host a Website with High Availability walks you through the setup of a highly available website with three application nodes, three file system nodes, and a 3-node database cluster on CentOS 7. Since that distro hits End of Life in June 2024 and the IP failover method it suggests is no longer supported in the majority of our data centers, this How-To adds some updates and shortcuts to the original.

What You Will Need

While this guide will walk you through the setup of a High Availability website, it's good to know ahead of time exactly what resources we will be creating and using for this deployment:

  • Three application (Apache2) nodes
  • Three file system (GlusterFS) nodes
  • A three-node Galera DB cluster with an additional shared private IP
  • Since you'll need to contact Support to add a second private IP for IP failover, you'll want to request that IP as soon as your cluster is up and running.

Create Six Instances

First, we want to create six instances: three will serve as our application nodes and three as our file system nodes. This step can take some time if you create each instance one by one. As a shortcut, you can use the linode-cli with a StackScript in a 'for' loop to create six secured Linodes in a fraction of the time. Take the following command and switch out the info in the angle brackets for your actual information:

for i in $(seq 1 6);
do
linode-cli linodes create \
     --no-defaults \
     --region us-east \
     --authorized_keys "<your-SSH-key>" \
     --type g6-standard-1 \
     --private_ip true \
     --tags "High A" \
     --image linode/ubuntu22.04 \
     --stackscript_id 692092 \
     --stackscript_data '{"username": "<your-username>", "password": "<yourcomplexpassword>", "pubkey": "<your-SSH-key>", "disable_root": "Yes"}' \
     --root_pass "<yourcomplexpassword>" \
     --booted true
done

This will create six "vanilla" Ubuntu 22.04 instances that have a limited user created, your SSH key uploaded, and root logins disabled. You will still need to log in to each one to perform system updates, set the timezone, and configure the hostname. Renaming these instances for easier identification is also recommended.
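
As a quick reference, the post-deployment steps on each node look something like this (the timezone and hostname values are just examples; use your own):

# Run on every instance after your first login
sudo apt update && sudo apt upgrade -y
sudo timedatectl set-timezone America/New_York    # substitute your preferred timezone
sudo hostnamectl set-hostname gluster1            # use the matching name on each node (gluster1-3, app1-3)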

Create Galera Cluster

Another time-saving step in this process is to use the Linode Marketplace to deploy your Galera Database Cluster. The deployment configurations will allow you to set up and deploy your cluster within 5-10 minutes. Go ahead and kick that off now if you haven't already.

Gluster Instance Configurations

Pick three of the vanilla instances to be used as your GlusterFS nodes. You can name yours whatever you would like, but in this guide they'll be named gluster1, gluster2, and gluster3.

You'll want to run through the following processes on each GlusterFS instance.

Add private IPs to /etc/hosts:

Edit the /etc/hosts file to add the private IP addresses of each of the GlusterFS nodes as well as the fully qualified domain names and hostnames:

# GlusterFS Nodes

192.0.2.21 gluster1.example.com gluster1
192.0.2.22 gluster2.example.com gluster2
192.0.2.23 gluster3.example.com gluster3

You will want to change the domain from example.com to your own FQDN. You can view the private IP addresses from the Network tab on each Linode's summary page.
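
If you prefer the command line, the linode-cli can list each instance's addresses as well; the private IPv4 address will appear alongside the public one:

linode-cli linodes list --format 'id,label,ipv4' --text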

Install and Start GlusterFS

Next, install and start the GlusterFS service:

sudo apt-get install glusterfs-server -y

Once the installation process is complete, start GlusterFS:

sudo systemctl start glusterd

Use the following command to enable GlusterFS to start on boot:

sudo systemctl enable glusterd

Configure Firewall Rules:

Ubuntu ships with UFW as its default firewall, and since the rules we're implementing are pretty straightforward, that service will fit our needs. Configure the UFW rules to only allow connections from the private IP addresses of your Gluster nodes as well as the nodes you plan on using as the application servers:

sudo ufw allow ssh
sudo ufw allow from 192.0.2.21
sudo ufw allow from 192.0.2.22
sudo ufw allow from 192.0.2.23
sudo ufw allow from 192.0.2.31
sudo ufw allow from 192.0.2.32
sudo ufw allow from 192.0.2.33
sudo ufw enable
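
Once UFW is enabled, you can confirm the rules took effect with:

sudo ufw status verbose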

Configure Gluster Service:

Once you've followed all of these steps, you can read through the instructions here to fully configure the service and test file replication between your servers.
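
For reference, the configuration in that guide boils down to something like the following, run from gluster1 (the brick path /glusterfs/brick here is an example; the volume name needs to match the one you mount on the application servers later):

# From gluster1: add the other nodes to the trusted storage pool
sudo gluster peer probe gluster2
sudo gluster peer probe gluster3
sudo gluster peer status

# Create the brick directory on all three nodes, then create and start a
# 3-way replicated volume ('force' is needed if the brick lives on the root filesystem)
sudo mkdir -p /glusterfs/brick
sudo gluster volume create example-volume replica 3 gluster1:/glusterfs/brick gluster2:/glusterfs/brick gluster3:/glusterfs/brick force
sudo gluster volume start example-volume
sudo gluster volume info example-volume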

Configure Galera Cluster

After your Galera DB cluster has been deployed, there are a few configurations that need to be updated. Similar to the previous instances that were deployed with a StackScript, you'll want to log in to each one to set the timezone, but here you will also need to add a limited user, upload your SSH key, and disallow root logins.
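
Here's a rough sketch of those steps on one database node (substitute your own username and key):

sudo adduser <your-username>
sudo usermod -aG sudo <your-username>          # use 'wheel' instead of 'sudo' on RHEL-family distros

# Add your public key and lock down SSH
sudo mkdir -p /home/<your-username>/.ssh
echo "<your-SSH-key>" | sudo tee /home/<your-username>/.ssh/authorized_keys
sudo chown -R <your-username>:<your-username> /home/<your-username>/.ssh
sudo chmod 700 /home/<your-username>/.ssh
sudo chmod 600 /home/<your-username>/.ssh/authorized_keys
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart sshd

# Set the timezone while you're logged in
sudo timedatectl set-timezone America/New_York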

Test replication:

To ensure the cluster has been deployed properly, connect to one of the database nodes and run sudo mysql -u root -p. The password will be the one you used during configuration. When you are connected to the database, run the following command:

SHOW STATUS LIKE 'wsrep_cluster%';

The return should show a wsrep_cluster_size of 3:

+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_cluster_weight       | 3                                    |
| wsrep_cluster_capabilities |                                      |
| wsrep_cluster_conf_id      | 23                                   |
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_state_uuid   | d058f473-9059-11ee-8ca2-8e20389f8649 |
| wsrep_cluster_status       | Primary                              |
+----------------------------+--------------------------------------+

To test database replication, you can follow the steps found here.
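
In short, you can create a throwaway database on one node and confirm it shows up on another (the name replication_test is just an example):

# On the first database node
sudo mysql -u root -p -e "CREATE DATABASE replication_test;"

# On a second node: the database should already be there
sudo mysql -u root -p -e "SHOW DATABASES LIKE 'replication_test';"

# Clean up from any node
sudo mysql -u root -p -e "DROP DATABASE replication_test;"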

Add Error Logging:

If you would like to enable error logging for your cluster, you can uncomment the following line in the /etc/mysql/mariadb.conf.d/50-server.cnf file, then restart the MariaDB service. Make sure to restart the service on one node at a time so that the entire cluster doesn't go down.

# In /etc/mysql/mariadb.conf.d/50-server.cnf, uncomment:
log_error = /var/log/mysql/error.log

# Then restart MariaDB, one node at a time:
sudo systemctl restart mariadb
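
Before moving on to the next node, you can confirm the restarted node has rejoined the cluster and that the log file is being written:

sudo mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
sudo tail -n 20 /var/log/mysql/error.log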

Add Firewall Rules

Unlike the instances we deployed using a StackScript, the GaleraDB cluster uses firewalld to control network access. You will need to create rules to allow your application servers to connect to your database cluster:

sudo firewall-cmd --zone=internal --add-source=192.0.2.31/32 --permanent
sudo firewall-cmd --zone=internal --add-source=192.0.2.32/32 --permanent
sudo firewall-cmd --zone=internal --add-source=192.0.2.33/32 --permanent
sudo firewall-cmd --reload
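
You can verify the sources were added with:

sudo firewall-cmd --zone=internal --list-all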

IP Failover using FRR

In order for your database cluster to continue serving your application in the event one node goes down, you need to implement IP failover. This can be done in a couple of different ways; the method we'll be using here is BGP with FRR. The instructions for this process are relatively straightforward and can be found here.

Apache Servers

Now that your database and file system nodes are configured, it's time to shift focus to the instances that will house your web server application. Once you've made the necessary updates and finished securing the servers, you'll want to rename them (app1/2/3 for this guide) and add the other nodes to their /etc/hosts files:

# Application Nodes
192.0.2.31   app1.example.com   app1
192.0.2.32   app2.example.com   app2
192.0.2.33   app3.example.com   app3

# GlusterFS Nodes
192.0.2.21   gluster1.example.com  gluster1
192.0.2.22   gluster2.example.com  gluster2
192.0.2.23   gluster3.example.com  gluster3

Next, we'll create UFW rules for several different services as well as private traffic to all the other nodes in your deployment:

sudo ufw allow http
sudo ufw allow https 
sudo ufw allow ssh
sudo ufw allow from 192.0.2.21
sudo ufw allow from 192.0.2.22
sudo ufw allow from 192.0.2.23
sudo ufw allow from 192.0.2.31
sudo ufw allow from 192.0.2.32
sudo ufw allow from 192.0.2.33
sudo ufw allow from 192.0.2.41
sudo ufw allow from 192.0.2.42
sudo ufw allow from 192.0.2.43
sudo ufw enable 

Install Services

Use the following command to install the Apache2 web server on the app nodes:

sudo apt install apache2 -y

Next, install the GlusterFS client so your app servers can communicate with your filesystem.

sudo apt install glusterfs-client -y

Mount GlusterFS

Next, we will need to make sure the filesystem mounts to your application nodes when they boot by editing the /etc/fstab file:

gluster1:/example-volume  /srv/www  glusterfs defaults,_netdev,backup-volfile-servers=gluster2:gluster3 0 0

Then create the mount point and mount the volume:

sudo mkdir /srv/www
sudo mount /srv/www
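
To confirm the volume mounted successfully, check that /srv/www shows a fuse.glusterfs filesystem:

df -hT /srv/www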

Configure Webserver Files

Now that we've mounted our shared filesystem to /srv/www, we need to edit the Apache2 configuration files in each of your application servers so this directory is available to be served to the internet.

In the /etc/apache2/apache2.conf file, uncomment or add the following lines and ensure the Directory is /srv/www/:

<Directory /srv/www/>
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

Next, we'll create a new website configuration file, named for the intended domain of your site, from the default Apache file, and back up the default file:

sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/example.com.conf && sudo mv /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/000-default.bak

Within your new website's .conf file, the DocumentRoot should be changed to the following:

DocumentRoot /srv/www/

Next, remove the symlink currently in place for the default page and replace it with the configuration file for the new site:

sudo rm /etc/apache2/sites-enabled/000-default.conf && sudo a2ensite example.com.conf
sudo systemctl reload apache2

To test your configurations, go ahead and create a file in the /srv/www/ directory:

sudo touch /srv/www/test.txt

Now, when you enter the public IP address of any of your three application nodes into your browser, you should be taken to a page that says "Index of /" with a hyperlink to your test.txt file. The test file can now be deleted.
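
You can also run a quick check from the command line against each node's public IP:

curl http://<app-node-public-IP>/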

Configure your NodeBalancer

The last step in making your website highly available is to put your entire deployment behind a NodeBalancer. This will balance the traffic load between your three application nodes and ensure none of your instances gets overloaded. This Getting Started with NodeBalancers guide will walk you through getting your backends configured behind your NodeBalancer.
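
If you'd rather script this step, here's a rough linode-cli sketch of the same process (the IDs are placeholders returned by the earlier commands, and the flags are worth double-checking against linode-cli nodebalancers --help):

# Create the NodeBalancer in the same region as your instances
linode-cli nodebalancers create --region us-east --label ha-website

# Add an HTTP port 80 configuration, then attach each app node by its private IP
linode-cli nodebalancers config-create <nodebalancer-id> --port 80 --protocol http --check http --check_path /
linode-cli nodebalancers node-create <nodebalancer-id> <config-id> --address 192.0.2.31:80 --label app1
linode-cli nodebalancers node-create <nodebalancer-id> <config-id> --address 192.0.2.32:80 --label app2
linode-cli nodebalancers node-create <nodebalancer-id> <config-id> --address 192.0.2.33:80 --label app3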

I suggest following the instructions in this guide to configure TLS/SSL Termination on your NodeBalancer.

And there you have it! With your setup behind a NodeBalancer, you can now migrate your existing website data to your highly available deployment.

[Optional] Install WordPress

As an additional exercise, we'll go through the steps of installing WordPress on our HA deployment since it's a popular CMS and this process may be helpful for some folks. You'll want to install the MariaDB client on all three app nodes:

sudo apt install mariadb-client -y

Now install PHP and its dependencies:

sudo apt install php php-mysql php-gd php-curl php-mbstring php-xml php-xmlrpc php-soap -y

Update the DocumentRoot location in the /etc/apache2/sites-available/example.com.conf file on each application node and restart Apache:

DocumentRoot /srv/www/wordpress
sudo systemctl restart apache2

On one of your application nodes, download and extract the latest version of WordPress into the shared folder:

sudo wget https://wordpress.org/latest.tar.gz -O /srv/www/latest.tar.gz
sudo tar -xvf /srv/www/latest.tar.gz -C /srv/www/

Change the owner of the /srv/www/ directory to the www-data user and restart Apache once more:

sudo chown -R www-data:www-data /srv/www/
sudo systemctl restart apache2

Next, on one of your database nodes enter the MySQL shell with the sudo mysql -u root -p command. Create a database to be used by your application nodes similar to this example:

CREATE DATABASE wordpress;
CREATE USER 'wordpress'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%';
FLUSH PRIVILEGES;

Be sure to change password to a secure password you will use to access your WordPress DB.

Finally, change the bind-address in the /etc/mysql/mariadb.conf.d/50-server.cnf file to the database cluster's shared private IP to configure MariaDB to accept remote connections:

bind-address    = <shared.IP.address>

You will input the shared IP as the Database Host during the initial setup.

Now, create A and AAAA records for your NodeBalancer's IPv4 and IPv6 addresses. Once those have propagated, you will be able to start the setup of your WordPress site! You can check out this guide for an additional reference on setting up WordPress on an Ubuntu instance.

You may notice the CSS of your site looking a bit off and some "Mixed Content" errors if you look in the dev tools. To remedy this, you can add the following lines near the top of the wp-config.php file after the browser-based installation process:

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
       $_SERVER['HTTPS'] = 'on';
}
