How can I use Proxy Protocol with Linode Kubernetes Engine?


NodeBalancers and Linode Kubernetes Engine now support Proxy Protocol. This means that your Kubernetes services, even behind Ingress controllers, can see the true client IP addresses of your users.

In this tutorial we will create a demo Service called "mybackend" running httpbin, make it reachable via Ingress, and confirm that it sees our true client IP.

This tutorial also demonstrates reaching the service both from outside the cluster and from inside it. The technique shown for in-cluster clients is the solution if you see any broken header: errors when attempting to reach the service via Ingress from inside the cluster.

Create a Kubernetes cluster with Linode Kubernetes Engine and download the Kubeconfig

Install ingress-nginx on your cluster via Helm

$ export KUBECONFIG=my-cluster-kubeconfig.yaml
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install --generate-name ingress-nginx/ingress-nginx

Add an A record for our demo using your DNS provider.

type  name       data         ttl
A     mybackend  <public-ip>  300 sec

The field values:

  • public-ip comes from the EXTERNAL-IP for ingress-nginx seen with kubectl get services -A
$ kubectl get services -A
NAMESPACE   NAME                                  TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default     ingress-nginx-1602681250-controller   LoadBalancer   <cluster-ip>   <public-ip>   80:30003/TCP,443:30843/TCP   11m
  • mybackend is a name that you choose for the service
  • Choose a short TTL, so that we can interact with this domain in a few minutes
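Putting those field values together: the full hostname is the record name plus your domain. A quick sketch — "example.com" is a stand-in here; substitute the domain you actually manage:

```shell
# The A record "mybackend" under your domain produces this hostname.
# "example.com" is an example; use your own domain.
name="mybackend"
domain="example.com"
host="${name}.${domain}"
echo "http://${host}"
# → http://mybackend.example.com
# Once the record propagates, `dig +short "$host"` run from your machine
# should print the NodeBalancer's <public-ip>.
```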

This will soon allow us to reach our service at that hostname.

Deploy a backend Service with an Ingress resource

We will use httpbin as a demo backend service; by default it echoes back our client IP address when we make a request.

Your edits at this step:

  • Set the value of host: in the Ingress resource to the domain name that you chose above
  • No other edits should be made to this manifest
# mybackend.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mybackend-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mybackend.example.com  # the domain name that you chose above
      http:
        paths:
          - path: /
            backend:
              serviceName: mybackend-service
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mybackend-service
spec:
  selector:
    app: httpbin
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mybackend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80

Apply this manifest with kubectl apply -f mybackend.yaml

Hit http://mybackend.example.com/get (substituting your domain) in a local web browser.

This is an httpbin endpoint which displays request information, including the client IP and all HTTP headers. Note that the client IP, called "origin", will either be an IP in the Kubernetes Pod network or the IP address of one of the Nodes in your cluster. This is the IP of ingress-nginx using the Kubernetes Service network to reach the backend.

Enable Proxy Protocol for ingress-nginx

We now want our backend service to see the true client IP, so we enable Proxy Protocol for ingress-nginx.

Add the Proxy Protocol annotation to the ingress-nginx service. The name of the service is obtained with kubectl get services -A.

$ kubectl edit service ingress-nginx-1602681250-controller
  # Add the Proxy Protocol annotation to the annotations section
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2

Configure ingress-nginx to expect Proxy Protocol data. The name of this configmap can be found with kubectl get configmaps -A.

$ kubectl edit configmap ingress-nginx-1602681250-controller
# Add the data section at the root level of this yaml, which might not yet exist
data:
  use-proxy-protocol: "true"
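If you prefer not to open an interactive editor, the same two changes can be applied non-interactively with kubectl patch. A minimal sketch — the resource names are the ones from this thread (substitute yours), and the kubectl lines are shown commented out since they only make sense against your cluster:

```shell
# JSON patch bodies for the two edits above.
svc_patch='{"metadata":{"annotations":{"service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol":"v2"}}}'
cm_patch='{"data":{"use-proxy-protocol":"true"}}'

echo "$svc_patch"
echo "$cm_patch"

# Find your resource names with `kubectl get services -A` / `kubectl get configmaps -A`:
# kubectl patch service   ingress-nginx-1602681250-controller -p "$svc_patch"
# kubectl patch configmap ingress-nginx-1602681250-controller -p "$cm_patch"
```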

Hit the same URL again in a web browser.

You will see your own machine's IP in the "origin" section of httpbin's info. Proxy Protocol successfully enabled!

{
  "args": {}, 
  "headers": {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", 
    "Accept-Encoding": "gzip, deflate", 
    "Accept-Language": "en-US,en;q=0.9,ja;q=0.8", 
    "Host": "", 
    "Upgrade-Insecure-Requests": "1", 
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36", 
    "X-Forwarded-Host": "", 
    "X-Scheme": "http"
  }, 
  "origin": "", 
  "url": ""
}

Note that at this point in-cluster clients cannot reach the service via the public hostname.

$ kubectl run myshell --image busybox --command sleep 10000
$ kubectl exec -ti myshell -- sh
/ # wget http://mybackend.example.com/get
Connecting to mybackend.example.com (<public-ip>:80)
wget: error getting response: Resource temporarily unavailable
/ # exit
$ kubectl logs ingress-nginx-1602681250-controller-8df8684fc-5xbmd
2020/10/14 14:14:37 [error] 187#187: *11787 broken header:

The name of the ingress-nginx pod can be found with kubectl get pods -A.

These broken header: messages indicate that the in-cluster client is not sending Proxy Protocol data to ingress-nginx, which now expects it.
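For context, this is what the human-readable v1 form of the Proxy Protocol preamble looks like: a single extra line the client must send before its normal request (the v2 encoding the annotation above enables is a binary equivalent). The addresses and ports below are made-up examples:

```shell
# The v1 (text) Proxy Protocol preamble: one CRLF-terminated line sent
# before the HTTP request. All addresses/ports here are made-up examples.
preamble="PROXY TCP4 203.0.113.7 192.0.2.10 51234 80"
printf '%s\r\n' "$preamble"
# A plain wget/curl from inside the cluster sends no such line, which is why
# nginx now rejects those connections with "broken header".
```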

Fix this problem by using the Service hostname instead of the public hostname.

Instead of using the Ingress hostname, clients which are inside your cluster should use the Service hostname for the backend service. This bypasses ingress-nginx entirely, avoiding the need for the client to send Proxy Protocol headers.

$ kubectl run myshell --image busybox --command sleep 10000
$ kubectl exec -ti myshell -- sh
/ # wget http://mybackend-service.default.svc.cluster.local/get
/ # cat get
{
  "args": {},
  "headers": {
    "Connection": "close",
    "Host": "mybackend-service.default.svc.cluster.local",
    "User-Agent": "Wget"
  },
  "origin": "",
  "url": "http://mybackend-service.default.svc.cluster.local/get"
}

You will see that in this case the request succeeded and that your "origin" is your Pod IP (the Pod IP of this busybox Pod) inside the cluster.

Here svc.cluster.local is the default domain name for the Service network in Kubernetes; you should be able to use that portion of the hostname without any modifications. Similarly, default is the name of the default namespace in Kubernetes. If your backend Service resides in a different namespace, substitute that namespace name in this URL. The DNS records for this hostname are resolved automatically by CoreDNS running in the cluster.
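The hostname is really just a template. A quick sketch, using the names from this tutorial:

```shell
# In-cluster DNS names follow <service>.<namespace>.svc.cluster.local.
service="mybackend-service"
namespace="default"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "http://${fqdn}/get"
# → http://mybackend-service.default.svc.cluster.local/get
```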

At this point you are able to reach the backend from both public clients (via the Ingress hostname) and in-cluster clients (via the Service hostname).

Service hostnames should always be the preferred way to reach Services inside a Kubernetes cluster. This is the mechanism for service discovery in Kubernetes.


I currently have the exact same setup as outlined in these steps, except I require SSL. Thus, I have installed cert-manager and an issuer (using LetsEncrypt). Without proxy_protocol enabled on the LoadBalancer, SSL works fine but I can't see the client IP.

With proxy_protocol enabled on the LoadBalancer, however, I get an ERR_SSL_PROTOCOL_ERROR no matter what I do.

You can see this at my domain (which points to my ingress controller/load balancer).

Postman shows
GET Error: write EPROTO 4620207552:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../../vendor/node/deps/openssl/openssl/ssl/record/ssl3_record.c:252:

Can you please help? SSL is very important for me and I am sure other LKE users will find this helpful as well

NOTE: I posted a more thorough question here

Here's a values.yaml you can feed helm when installing ingress-nginx to configure the NodeBalancer and ingress-nginx for Proxy Protocol right from the start:

controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2
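To use a values file like that at install time, write it out and pass it to helm with -f. A sketch, assuming the chart repo from earlier in this thread; the release name my-ingress is just an example, and the helm line is shown commented out since it only makes sense against your cluster:

```shell
# Write out the values file described above.
cat > proxy-protocol-values.yaml <<'EOF'
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2
EOF

# helm install my-ingress ingress-nginx/ingress-nginx -f proxy-protocol-values.yaml
echo "wrote proxy-protocol-values.yaml"
```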

Hi, any chance you could provide instructions on how to do this with the Traefik ingress controller?


Any chance we can get the same tutorial using Traefik?

Sadly, we have not yet found any robust solution.
In most cases using in-cluster URLs (like http://mybackend-service.mybackend-namespace.svc.cluster.local/) is good, but not always. The hairpin-proxy project describes two such use cases: a private docker registry and cert-manager. Also, if you have many services, you have to review all of them and update URLs. Even worse, sometimes you must use https only, and without the Ingress that is not possible.

hairpin-proxy is not maintained anymore. There is no ingress-nginx annotation for Linode to use the hostname for the LoadBalancer like there is for Hetzner.

The hairpin-proxy was not working in my cluster (1.28), because the "coredns" ConfigMap now needs to be called coredns-custom and uses another format.

I made a fork with these changes, and now it works.

Hey @ws900
I'm facing an issue where all our certs stopped auto-renewing. I tried upgrading hairpin-proxy to your forked version but was unable to, as you didn't publish a corresponding tag.

If I download the zipped files instead, I still can't run it, as the image isn't published.

The following error occurs:
Failed to pull image "ws900/hairpin-proxy-controller:0.2.1":

There's a discussion of this same issue here:

I've fully integrated a working fork with a public container image here and a new v0.3.0 tag:

