Configure a NodeBalancer for Red-Black server deployment with Apache
Hello all, I'm trying to set up Red-Black deployment on Linode for my web apps.
Currently I have a Red web server, a Linode instance running Ubuntu 22 with Apache 2.2, serving my production web apps, mostly Python Dash apps.
I also have a Black web server, a nearly identical copy of Red, that acts as a testing ground for app and system updates.
Eventually I want to swap the production and testing roles of the two servers with each round of major changes: direct all web traffic to the freshly tested server as the new production server, use the other server for testing the next round of app and system updates, and so on.
That way there's very little downtime.
How can I do this easily, keeping in mind that more than 20 web apps run on Red across different (SSL) domains and subdomains?
Linode recommends using a NodeBalancer (see e.g. https://www.youtube.com/watch?v=JlXgl_rtM_s ), but I'm struggling to set it up correctly.
More specifically, as a first milestone toward my Red-Black deployment goal, I want the NodeBalancer to handle traffic for an app on Black at the SSL domain https://mrcagney.dev.
I can get the app working on that domain without the NodeBalancer, no problem.
But it fails with the NodeBalancer, as you can see for yourself by visiting the domain.
On the NodeBalancer I have two configurations, one for port 80 and one for port 443.
Both point to the IP address of Black.
The port 443 configuration has an SSL certificate generated by Let's Encrypt for the domain.
On Black, the Apache web server has two configuration files, one for port 80 and one for port 443.
The files differ only on the first line, with <VirtualHost *:80> in one and <VirtualHost *:443> in the other.
That's because, as I understand it, the traffic from the NodeBalancer to Apache is unencrypted.
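For concreteness, the two files look roughly like this (the proxied app port and the elided settings are just illustrative, not my exact config):

```apache
# Port 80 file on Black
<VirtualHost *:80>
    ServerName mrcagney.dev
    # ... proxy settings for the Dash app, e.g. ProxyPass / http://127.0.0.1:8050/ ...
</VirtualHost>

# Port 443 file on Black: identical except for the first line
<VirtualHost *:443>
    ServerName mrcagney.dev
    # ... same settings as above ...
</VirtualHost>
```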
Do you know what I'm doing wrong?
Thanks for your attention.
Without seeing all of the related configurations, it's hard to say, but I think the starting point is figuring out whether your configuration uses SSL termination or SSL pass-through.
SSL Termination on the NodeBalancer
If SSL terminates on the NodeBalancer, which you can read more about in this guide, you'll want to make sure you choose the right configurations for both the NodeBalancer and the backend nodes. I want to point out this section from our guide to Getting Started with NodeBalancers:
If you are using the HTTPS protocol, TLS termination happens on the NodeBalancer and your Compute Instances should only need to listen on port 80 (unencrypted). If that’s the case, backend nodes for both inbound ports can be configured to use port 80.
While you should create configurations for both port 80 (HTTP) and port 443 (HTTPS) on the NodeBalancer itself, the Compute Instances should only use port 80. Under each of the two NodeBalancer configurations, the Backend Nodes section should be set to port 80, for the HTTPS configuration as well as the HTTP one.
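With termination, the backend vhost on Black would then be a single plain-HTTP definition per domain, something like the sketch below. The proxy target port is a placeholder for wherever the Dash app listens, and the proxy setup is an assumption about how the apps are served:

```apache
# /etc/apache2/sites-available/mrcagney.dev.conf on Black
# TLS terminates at the NodeBalancer, so Apache only needs to listen on :80.
# Requires mod_proxy and mod_proxy_http: a2enmod proxy proxy_http
<VirtualHost *:80>
    ServerName mrcagney.dev

    # Forward requests to the Dash app (port 8050 is a placeholder)
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8050/
    ProxyPassReverse / http://127.0.0.1:8050/
</VirtualHost>
```

No <VirtualHost *:443> block is needed on the backend in this setup, since the node never sees encrypted traffic.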
SSL Termination on the Backend Nodes
If you're using TLS/SSL pass-through to terminate the HTTPS connection on the backend nodes, select the TCP protocol on the NodeBalancer instead of HTTPS. You'll then configure the SSL certificate on the servers themselves. If this is the option you've gone with, the issue more likely lies in the Apache configuration, which that guide will help with. Using TCP is also a good option if you want to use Proxy Protocol.
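In the pass-through case, the encrypted stream reaches Black untouched, so the backend 443 vhost does the TLS work itself. A rough sketch, assuming the usual certbot certificate paths and a placeholder app port:

```apache
# Pass-through: the NodeBalancer forwards raw TCP, so Apache terminates TLS.
# Requires mod_ssl, mod_proxy, mod_proxy_http: a2enmod ssl proxy proxy_http
<VirtualHost *:443>
    ServerName mrcagney.dev

    SSLEngine on
    # Standard Let's Encrypt/certbot locations (adjust to your setup)
    SSLCertificateFile    /etc/letsencrypt/live/mrcagney.dev/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mrcagney.dev/privkey.pem

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8050/
    ProxyPassReverse / http://127.0.0.1:8050/
</VirtualHost>
```

Note that in this setup the certificate lives on the backend, not on the NodeBalancer, so the certificate you uploaded to the NodeBalancer's 443 configuration wouldn't be used.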
Here are some additional resources that I think can help you figure this out:
- Backend Nodes
- Configuration Options for NodeBalancers
- Delicious Brains - Let's Encrypt + NodeBalancers
Finally, if this isn't working for whatever reason, you may want to look into an alternative. Since this isn't the intended purpose of NodeBalancers, which are designed to balance traffic between backend servers, it's possible there's an answer that would work better for you. For example, you may be able to set something up with IP Sharing, or even write a script/cronjob that uses our API to swap the IPs of the two servers at a time when traffic is likely very low.
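As a very rough sketch of that last idea: something like the script below could build a request against the API's IPv4 assignment endpoint to exchange the two servers' public addresses. The endpoint path, payload shape, IDs, and IP addresses here are all my assumptions/placeholders, so check the current API documentation before using anything like this; by default it only prints the request instead of sending it.

```shell
#!/bin/sh
# Sketch: swap the public IPv4 addresses of the Red and Black Linodes.
# ALL values below are placeholders; DRY_RUN=1 (the default) only prints
# the request body instead of calling the API.

TOKEN="${LINODE_TOKEN:-changeme}"
REGION="us-east"            # both Linodes must be in the same region
RED_ID=111111;   RED_IP="203.0.113.10"
BLACK_ID=222222; BLACK_IP="203.0.113.20"
DRY_RUN="${DRY_RUN:-1}"

# Each server is assigned the other's current address.
PAYLOAD=$(cat <<EOF
{"region": "$REGION",
 "assignments": [
   {"address": "$RED_IP",   "linode_id": $BLACK_ID},
   {"address": "$BLACK_IP", "linode_id": $RED_ID}
 ]}
EOF
)

if [ "$DRY_RUN" = "1" ]; then
  echo "$PAYLOAD"
else
  # Assumed endpoint; verify against the API docs before running for real.
  curl -s -H "Authorization: Bearer $TOKEN" \
       -H "Content-Type: application/json" \
       -X POST -d "$PAYLOAD" \
       https://api.linode.com/v4/networking/ipv4/assign
fi
```

Run from cron during a low-traffic window, this would flip which server the public addresses point at without touching DNS, though you'd still want to verify both apps respond afterward.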