Cannot get LVS load balancing to work
I am using NAT (masq) mode.
I have my Linode's public IP set up as the VIP, and I have it forwarding to another Linode on its private / internal IP.
I can see the traffic coming in on the balancer, and it looks like it's attempting to send it out again on the internal network; however, the packets never seem to reach the real server.
I have the following set in my /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.eth0.send_redirects = 0
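For reference, this is roughly the ipvsadm setup I'm describing (the addresses below are placeholders, not my real ones):
# 203.0.113.10 = public VIP on the balancer, 192.168.133.20 = real server's private IP
ipvsadm -A -t 203.0.113.10:80 -s rr
ipvsadm -a -t 203.0.113.10:80 -r 192.168.133.20:80 -m   # -m = masquerading (NAT)
ipvsadm -L -n   # to verify the table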
Has anyone had any luck getting LVS / IPVSADM to work on Linode?
BTW, I haven't even started looking at the return path yet (i.e. from the real server back to the balancer and out to the client). For now I am just trying to get the packets to go from client -> balancer -> real server.
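(When I do get to the return path: my understanding is that LVS-NAT needs the real server to send replies back through the director, i.e. its default gateway should be the balancer's private IP. Roughly, with a placeholder address:
# on the real server; 192.168.133.1 stands in for the balancer's private IP
ip route replace default via 192.168.133.1 dev eth0
But that's a later problem.)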
FYI,
If I change to tunneling mode, I do see the packets get to the real server; however, tunneling mode won't work for me. In gateway and NAT mode, the packets never get to the real server (I tried gateway just to see if it would work, but I need NAT for my final use case).
I tried all the address combinations (public -> public, public -> private, etc.) to get NAT working, i.e. routing to the real server via its public IP vs. its private IP, but the packets never arrive.
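For the record, the mode is just the forwarding-method flag on the real-server entry in ipvsadm (placeholder addresses again):
ipvsadm -a -t 203.0.113.10:80 -r 192.168.133.20:80 -g   # -g = gateway / direct routing (LVS-DR)
ipvsadm -a -t 203.0.113.10:80 -r 192.168.133.20:80 -i   # -i = IP-in-IP tunneling (LVS-Tun)
ipvsadm -a -t 203.0.113.10:80 -r 192.168.133.20:80 -m   # -m = masquerading (LVS-NAT)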
Thanks!
3 Replies
I've been trying to get this to work and am getting the same results as y3n: with LVS-NAT and LVS-DR the masqueraded or re-MACed packets never make it to the real server. With LVS-Tun the packets do make it to the real server, but then the response packets (which should go directly to the client, as with LVS-DR) just seem to disappear.
I was hopeful that LVS-DR, since it's done at the link layer rather than the IP layer, might get through the Linode networking stack OK, but it still seems to be filtered somewhere. This tcpdump shows the MAC addresses getting rewritten (correctly) while the IP details stay the same:
$ tcpdump -nn -e -i eth0 tcp port 443
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
01:02:00.124061 84:78:ac:5a:0b:41 > f2:3c:91:33:32:14, ethertype IPv4 (0x0800), length 74: 203.206.211.85.54024 > 45.56.71.79.443: Flags [S], seq 3934105594, win 14600, options [mss 1452,sackOK,TS val 531657264 ecr 0,nop,wscale 7], length 0
01:02:00.124092 f2:3c:91:33:32:14 > f2:3c:91:33:70:45, ethertype IPv4 (0x0800), length 74: 203.206.211.85.54024 > 45.56.71.79.443: Flags [S], seq 3934105594, win 14600, options [mss 1452,sackOK,TS val 531657264 ecr 0,nop,wscale 7], length 0
y3n didn't mention it above, but I've also tried using Linode's IP Failover facility to add the VIP (45.56.71.79 above) to the real servers, but to no effect.
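In case it helps anyone: the conventional LVS-DR setup on the real servers (rather than IP Failover) is to put the VIP on a loopback alias and suppress ARP for it, something like this, using the VIP from the dump above:
# on each real server
ip addr add 45.56.71.79/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
Though since the re-MACed packets aren't reaching the real server at all, I doubt that's the missing piece here.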
I'll keep digging, but if anyone has any bright ideas, I'd love to hear them.
Cheers!
I didn't look at the IP Failover facility, but I suspect (given how most IP failover works) that it won't help with load balancing, since it only kicks in when the primary fails; you're not going to get actual load distribution (at least from my understanding).
I really do hope that Linode resolves this problem, since proper load balancing is critical to hosting.
- Les