Are there any instructions to set up a VPN between Linodes across data centres?
We have a Docker Swarm running on Linode where the workers and load balancers are spread across multiple zones.
We're terminating SSL at the load balancers and would basically like to encrypt all traffic between the load balancers and the Swarm Linodes (which are also split geographically).
From what I understand, setting up a VPN is the only way to accomplish this and get true HA across the zones.
This guide https://www.linode.com/docs/networking/vpn/set-up-a-hardened-openvpn-server/ seems to have been written recently and looks like just what we are after.
Are there any other solutions that you might recommend?
I would recommend this:
Mostly the "Site-to-Site" and "Host-to-Host" sections, I would think.
There are other things to consider, e.g. what your topology is and how you want encryption to fit into it, plus performance/overhead.
And then there is WireGuard - which is still in development, but getting more and more popular every day (personally I don't have any experience with it).
Thank you @kmansoft. Looks very similar to the OpenVPN setup.
The Docker managers and workers communicate with each other over an overlay network that Docker Swarm creates automatically. So I think connecting the Swarm nodes over public IPs and letting Docker do its own overlay networking is fine, as long as I restrict traffic to those nodes so that it can only come from each other, the load balancers, and other trusted IPs.
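For what it's worth, a minimal iptables sketch of that restriction might look like the following. The peer IPs are placeholders; Swarm uses TCP 2377 for cluster management, TCP/UDP 7946 for node gossip, and UDP 4789 for the VXLAN overlay:

```sh
# Placeholder list - replace with your actual node / load balancer / trusted IPs
PEERS="203.0.113.10 203.0.113.11 203.0.113.12"

for ip in $PEERS; do
    # Swarm cluster management
    iptables -A INPUT -p tcp --dport 2377 -s "$ip" -j ACCEPT
    # Node-to-node gossip
    iptables -A INPUT -p tcp --dport 7946 -s "$ip" -j ACCEPT
    iptables -A INPUT -p udp --dport 7946 -s "$ip" -j ACCEPT
    # VXLAN overlay traffic
    iptables -A INPUT -p udp --dport 4789 -s "$ip" -j ACCEPT
done

# Drop the Swarm ports from everyone else
iptables -A INPUT -p tcp --dport 2377 -j DROP
iptables -A INPUT -p tcp --dport 7946 -j DROP
iptables -A INPUT -p udp --dport 7946 -j DROP
iptables -A INPUT -p udp --dport 4789 -j DROP
```

Note that the VXLAN traffic itself is not encrypted by default; Docker can encrypt overlay data traffic if the network is created with `docker network create --opt encrypted`, though that only covers container traffic, not the management plane.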
I'm not entirely sure of the topology at this point. If you see https://1drv.ms/b/s!AmrVmjAdc7P2wJEfaS-7dfUVQLqJpA, I basically need all the traffic with the green arrows to be secured. In the Host-to-Host setup, that would mean that each load balancer has 7 remote_addresses - is that even possible?
If you see https://1drv.ms/b/s!AmrVmjAdc7P2wJEfaS-7dfUVQLqJpA, I basically need all the traffic with the green arrows to be secured
"This 1drv.ms page can’t be found"
Those green arrows (which I can't see) - are they point to point connections (i.e. node balancer at 220.127.116.11 to k8s at 18.104.22.168)?
For point to point I would prefer IPSec to OpenVPN:
OpenVPN creates tunnel devices with their own "inside" addresses on both sides. You'd need routing rules to direct traffic into those tunnels.
IPSec can operate in "transport" mode where you can add encryption on top of an existing "ip A to ip B" connection without having to set up routes. There are no new interface devices.
Any apps at "A" or "B" will then just continue to work - no "oh, our secured IP to reach the back-end is now C not B, better update A's config". Apps will deal with their traffic before/after encryption/decryption the same as before.
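And yes, one host talking to several peers is fine - with strongSwan, for instance, each remote address is just its own conn block in `/etc/ipsec.conf`. A rough transport-mode sketch (names and addresses are made up, and you'd repeat a conn block per remote node):

```
# /etc/ipsec.conf - transport mode, one conn per remote Swarm node (illustrative)
conn %default
    type=transport
    authby=psk
    auto=start

conn node1
    left=203.0.113.10      # this load balancer
    right=203.0.113.21     # Swarm node 1

conn node2
    left=203.0.113.10
    right=203.0.113.22     # Swarm node 2, and so on for the rest
```

Seven remote_addresses would just be seven conn blocks (plus matching entries in `ipsec.secrets` for the pre-shared keys).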
What if something goes wrong?
IPSec can be turned off just as easily - if needed - to see if that fixes things, or to dig into your other components, etc.
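With strongSwan, for example, that's a one-liner per connection (assuming a conn named `node1` in your ipsec.conf):

```sh
ipsec down node1     # tear down the SA; traffic flows in the clear again
ipsec up node1       # re-establish it
ipsec statusall      # see what's currently up
```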
IPSec is built into the Linux kernel; once a connection is up and traffic is flowing, it's all handled in the kernel.
Fewer context switches - better performance - lower costs.
Now I don't work anywhere that sells IPSec anything, just giving my opinion.
Thank you again, @kmansoft. For the OneDrive link, let's try it without a , - https://1drv.ms/b/s!AmrVmjAdc7P2wJEfaS-7dfUVQLqJpA
IPSec certainly seemed less complicated to set up than OpenVPN (although OpenVPN setup seems to have become easier with EasyRSA).
But ZeroTier seemed to have done the trick. Now SSL is terminating at the load balancers and they communicate with the swarm nodes over private IPs. And we even get a clean UI to manage the private network.
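For anyone following along, the per-node setup amounted to roughly this (the network ID below is a placeholder - use your own, and each member still has to be authorized in the ZeroTier web UI before it gets a private IP):

```sh
curl -s https://install.zerotier.com | sudo bash   # install the ZeroTier client
sudo zerotier-cli join 0123456789abcdef            # join your network (placeholder ID)
sudo zerotier-cli listnetworks                     # shows status and the assigned private IP
```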