A few questions about NodeBalancers and gRPC traffic.
For starters, does the NodeBalancer support proxy protocol (nginx) or gRPC proxying? And how can I pass gRPC traffic through a NodeBalancer in HTTPS mode to a k8s cluster?
Also, I want the NodeBalancer to set the client's real IP in an HTTP header (X-Forwarded-For) on gRPC traffic.
Finally, does the NodeBalancer support HTTP 2.0?
Does the NodeBalancer support proxy protocol (nginx)?
Sure does! Here's our guide that can help you get things set up:
I want the NodeBalancer to set the real IP (X-Forwarded-For) HTTP header on gRPC traffic.
You may certainly achieve this using our NodeBalancer. For your reference, here's our GitHub page for the LKE NodeBalancer, which also includes an example .yaml file. If you change externalTrafficPolicy from Cluster to Local, this should allow you to pass along the client's real IP address:
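As a minimal sketch of that change (the metadata names, selector, and ports below are placeholders, not taken from the guide), a LoadBalancer Service with the traffic policy switched to Local looks like this:

```yaml
# Hypothetical Service manifest; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service          # placeholder name
  namespace: default
spec:
  type: LoadBalancer             # provisions a NodeBalancer on LKE
  externalTrafficPolicy: Local   # was Cluster; Local preserves the client source IP
  selector:
    app: my-grpc-app             # placeholder selector
  ports:
    - port: 443
      targetPort: 8443
```

With Local, traffic is only routed to nodes that actually run a matching pod, which is what lets the original source IP survive instead of being SNAT'd by kube-proxy.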
You may also find these pages helpful:
Does the NodeBalancer support gRPC proxy?
How can I pass gRPC traffic through NodeBalancer in HTTPS mode to k8s cluster?
Our NodeBalancer doesn't inherently support gRPC proxy. However, you may configure this traffic to be routed using an ingress controller. This may also allow you to pass that gRPC traffic to your k8s cluster.
The following post from StackOverflow may guide you in setting up and configuring this controller:
Additionally, this GitHub page demonstrates how to route this traffic:
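As a rough sketch of what that routing usually comes down to with the NGINX Ingress Controller (hostnames, secret, and service names below are placeholders): ingress-nginx speaks gRPC to the backend when you set its backend-protocol annotation.

```yaml
# Hypothetical Ingress; host, secret, and service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Tell ingress-nginx to proxy to the backend over gRPC (HTTP/2)
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com       # placeholder host
      secretName: grpc-tls       # placeholder TLS secret
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-grpc-service   # placeholder service
                port:
                  number: 50051
```

TLS termination happens at the ingress controller here, since gRPC over the public internet effectively requires HTTP/2 with TLS.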
Finally, does the NodeBalancer support HTTP 2.0?
At this time, our NodeBalancers do not support HTTP 2.0.
However, you may find this resource to be helpful in achieving your desired configuration:
Depending on your use case, you may also achieve this with HAProxy. Here are some guides from their site:
Also, here's our guide on How to Use HAProxy for Load Balancing with Linode.
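To illustrate the HAProxy approach (a sketch, not a tested config — the backend addresses are placeholders): HAProxy in TCP mode can pass HTTP/2 or gRPC through untouched, and send-proxy adds the PROXY protocol header so the backend can still recover the client IP.

```
frontend grpc_in
    bind *:443
    mode tcp
    option tcplog
    default_backend grpc_nodes

backend grpc_nodes
    mode tcp
    balance roundrobin
    # send-proxy prepends the PROXY protocol header on each connection,
    # so the backend can recover the original client IP
    server node1 192.0.2.10:443 check send-proxy
    server node2 192.0.2.11:443 check send-proxy
```

The backend then has to be configured to expect the PROXY header (e.g. Nginx's proxy_protocol listener option), otherwise connections will fail to parse.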
Is this true?
I tried setting externalTrafficPolicy: Local, but in my Nginx logs I see an internal IP:
It's simple HTTP traffic.
Seems it doesn't work as it should. Can you comment @dcortijo?
Hey there -
It's tough to say for sure what might be going on because I don't have access to your configurations. That's why this type of thing falls a little out of scope for our Support team - but I'm going to do my best to help out as much as I can.
First, check out this guide which shows what you can add to your Nginx configuration:
These are the lines it suggests adding:
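(The guide's exact snippet isn't reproduced above; as a generic sketch, Nginx's real-IP directives typically look like the following, where the set_real_ip_from address is a placeholder for whatever proxy sits in front of Nginx.)

```
# Trust X-Forwarded-For only when the request comes from this proxy (placeholder address)
set_real_ip_from 192.168.255.22;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

With these in place, $remote_addr is rewritten to the client address carried in the trusted header rather than the proxy's own address.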
Here are a couple other guides I found which might help you address this:
This one deals with Nginx specifically:
This is just some preliminary guidance to see if it helps you out. Please feel free to respond here with any successes or failures you have with this.
It had a few nice pointers, thank you.
I still only see the NodeBalancer's IP. So no luck so far :) Anything that you can spot in my config?
At this point my Nginx config has the following:
real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from 192.168.255.22;
The log lines are the following:
laszlo 192.168.255.22 - - [31/Aug/2020:09:26:27 +0000] "GET / HTTP/2.0" 200 448 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:80.0) Gecko/20100101 Firefox/80.0" 14 0.001 [infrastructure-headers-8080]  10.2.0.30:8080 662 0.000 200 c8b737bc407951e6f5a56cf09b31b4d6" "-" [192.168.255.22]
with the following log format:
log_format upstreaminfo 'laszlo $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id" "$http_x_forwarded_for" [$proxy_add_x_forwarded_for]';
I'm also debugging all headers:
GET / HTTP/1.1
Host: headers.test.1clickinfra.com
X-Request-ID: c8b737bc407951e6f5a56cf09b31b4d6
X-Real-IP: 192.168.255.22
X-Forwarded-For: 192.168.255.22
X-Forwarded-Proto: https
X-Forwarded-Host: headers.test.1clickinfra.com
X-Forwarded-Port: 443
X-Scheme: https
user-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:80.0) Gecko/20100101 Firefox/80.0
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
accept-language: en-US,en;q=0.5
accept-encoding: gzip, deflate, br
upgrade-insecure-requests: 1
cache-control: max-age=0
cookie: __cfduid=xx
I'm using the Nginx Ingress Helm chart:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nginx
  namespace: infrastructure
spec:
  releaseName: nginx
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com
    name: nginx-ingress
    version: 1.41.2
  values:
    controller:
      service:
        externalTrafficPolicy: Local
      config:
        enable-real-ip: "true"
        use-forwarded-headers: "true"
        proxy-real-ip-cidr: "192.168.255.22"
        log-format-upstream: 'laszlo $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id" "$http_x_forwarded_for" [$proxy_add_x_forwarded_for]'
If HTTP or HTTPS is selected, the NodeBalancer will add an X-Forwarded-Proto header, with a value of either http or https, to all requests sent to the backend. The header value is based on the type of request (HTTP or HTTPS) originally received by the NodeBalancer.
Looks like with a TCP NodeBalancer, X-Forwarded-For is not set, according to https://www.linode.com/docs/platform/nodebalancer/nodebalancer-reference-guide/#protocol
Which makes sense: the NodeBalancer can't see what is traveling inside the HTTPS stream. In my case I terminate SSL in Nginx behind the NodeBalancer and use a TCP NodeBalancer.
On other managed Kubernetes platforms, this is why setting externalTrafficPolicy: Local on the Kubernetes LoadBalancer is the only thing needed. But if I do that, I see the NodeBalancer's IP...
Looks like Linode does not support this use case, as detailed in this thread: https://www.linode.com/community/questions/366/how-do-i-configure-my-nodebalancer-to-pass-through-ssl-connections-to-the-back-e
I can confirm that using Let's Encrypt with Linode Kubernetes is, or will be, a rather common use case, as nicely put in the other ticket:
"The momentum of sites moving to SSL with Let's Encrypt is fairly strong. But with the way Let's Encrypt works, it's much better to handle dynamic cert generation and verification at the app layer where this is more context as to the allowability of a certain domain needing a cert. (e.g. some domains my app should allow, others shouldn't, for security reasons and this can change dynamically)
This means it no longer makes sense for modern apps to SSL-terminate at the load balancer level. This means the load balancer needs to be a dumb TCP connection with faithful reflection of the IP of the end user. This is the direction where standard practice is headed, particularly in Node apps."
From Twitter: "We recently added Proxy Protocol support to NodeBalancers and are working on adding that support to LKE. You can follow this page for updates on the issue: https://github.com/linode/linode-cloud-controller-manager/issues/74" https://twitter.com/linode/status/1300425874202910720
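Once that Proxy Protocol support lands in LKE, the ingress controller would also need to be told to expect the PROXY header. With the nginx-ingress chart from the HelmRelease above, that would be a values change roughly like the following sketch (the CIDR is a placeholder for the NodeBalancer's address range):

```yaml
# Hypothetical values fragment for the nginx-ingress chart;
# only meaningful once the NodeBalancer actually sends the PROXY header.
values:
  controller:
    config:
      use-proxy-protocol: "true"              # parse the PROXY header on incoming TCP connections
      proxy-real-ip-cidr: "192.168.255.0/24"  # placeholder: trusted NodeBalancer address range
```

Since the client IP travels in the PROXY header at the TCP layer, this works even with SSL passthrough, which is exactly the "dumb TCP load balancer" setup described above.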