ssl node balancer issue
I've had a node balancer sitting in front of two app servers running smoothly for many years. SSL connections from the outside world (coming in port 443) terminate at the node balancer and are routed via port 444 to the internal app servers.
All of a sudden the node balancer is showing these servers as down for HTTPS. HTTP on port 80 still works, but the website redirects all traffic to HTTPS, so effectively the whole site is down.
I don't have a lot of experience with this stuff, so I'm not sure where to start debugging.
No configs have changed recently. I have tried rebooting everything. Normal HTTP traffic works fine. I checked that my SSL cert is still valid.
I did do a:
tail -f /var/log/apache2/access.log
and I see a ton of traffic from 192.168.255.xx (I think this is an internal IP?)
Are the failing health checks against port 444? And why use a separate port for HTTPS requests at all, since all the requests are being transmitted over HTTP anyway?
I ran netstat -pltnu and see the server is listening on 80, 443, and 444, so I don't think that is the issue. The node balancer routes traffic to the web servers over port 444, but for some reason it is not seeing them.
It does seem to have to do with the health check, but I'm not sure whether a backend is really going down (a real issue) or whether it's a false-positive health check. I turned on IP throttling (the Linode setting on the node balancer was 0; I set it to 1), but I'm not sure what else to do.
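One way to tell a real failure from a false positive is to run the same check the balancer runs, by hand, from one of the app servers. A minimal sketch, assuming the check hits port 444 with plain HTTP (SSL already terminated at the balancer) and that the check path is `/` — substitute the path actually configured on the node balancer:

```shell
# Reproduce the balancer's health check by hand from an app server.
# Assumptions: port 444 speaks plain HTTP and the check path is "/".
# If the backend port expects TLS, use https:// and add -k.
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:444/)
case "$status" in
  2*) echo "backend looks healthy (HTTP $status)" ;;
  *)  echo "backend would be marked down (HTTP $status)" ;;
esac
```

If this returns a 500 here too, the problem is in the app itself (matching the 500s in the access log) rather than in the balancer.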
xxx.xx.xx.xx - - [13/Mar/2018:16:55:27 -0700] "GET /urlremoved HTTP/1.1" 500 21227 "-" "python-request
I suspect these 500 responses must be what's causing the health checks to fail.
I'm guessing someone is scraping the site? Apachetop says I'm getting 3.33 requests per second to this URL, and it serves a potentially large file depending on the parameter. The IP address in the log is an internal IP address (from the node balancer, I assume).
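To confirm the scraping pattern, the access log can be summarized with standard tools. A sketch with a couple of fabricated sample lines standing in for the real log — point LOG at /var/log/apache2/access.log on the app server (the URL here is just the placeholder from the log excerpt above):

```shell
# Summarize a combined-format Apache access log: requests per URL and per
# user agent. The sample lines below are fabricated stand-ins.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
192.168.255.20 - - [13/Mar/2018:16:55:27 -0700] "GET /urlremoved HTTP/1.1" 500 21227 "-" "python-requests/2.18.4"
192.168.255.20 - - [13/Mar/2018:16:55:28 -0700] "GET /urlremoved HTTP/1.1" 500 21227 "-" "python-requests/2.18.4"
192.168.255.20 - - [13/Mar/2018:16:55:29 -0700] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF
# Field 7 of the combined log format is the request path:
urls=$(awk '{print $7}' "$LOG" | sort | uniq -c | sort -rn)
# With '"' as the field separator, field 6 is the user agent:
agents=$(awk -F'"' '{print $6}' "$LOG" | sort | uniq -c | sort -rn)
printf '%s\n\n%s\n' "$urls" "$agents"
rm -f "$LOG"
```

Running the same two pipelines against the real log should show whether a single user agent accounts for the 500s.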
What's the best way to block this IP? All traffic appears to come from that internal IP, since the outward-facing node balancer is proxying the requests.
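Blocking the internal IP itself would block everything, since all traffic arrives through the balancer. The real client address should instead be recovered from the X-Forwarded-For header and blocked there. A sketch for Apache 2.4 with mod_remoteip enabled (`a2enmod remoteip`), assuming the node balancer sets X-Forwarded-For; the client address and internal range below are placeholders to substitute with your own values:

```apache
# Trust the balancer's internal range so Apache's client-IP checks
# see the real client instead of the balancer (range is a placeholder).
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 192.168.255.0/24

# Then deny the offending client (203.0.113.50 is a placeholder)
# on the scraped path:
<Location "/urlremoved">
    <RequireAll>
        Require all granted
        Require not ip 203.0.113.50
    </RequireAll>
</Location>
```

With mod_remoteip in place, the access log will also start showing real client addresses instead of the balancer's internal IP.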