How to restrict network access to a LoadBalancer Service without an Ingress?
I have a Service of type LoadBalancer with loadBalancerSourceRanges set to a list of allowed IPs,
but I can still access the app behind this Service from any IP I want.
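For reference, here's roughly what the Service looks like (the app name, ports, and IP range are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  loadBalancerSourceRanges:
    - 203.0.113.0/24           # only this range should be allowed through
```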
I tried switching the NodeBalancer to proxy protocol by adding the annotation "service.beta.kubernetes.io/linode-loadbalancer-default-proxy-protocol: v1" to my Service manifest,
but now I can't access the app at all.
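This is where I added the annotation (same placeholder Service as above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # ask the Linode CCM to enable proxy protocol v1 on the NodeBalancer
    service.beta.kubernetes.io/linode-loadbalancer-default-proxy-protocol: v1
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```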
All the documentation I can find uses an Ingress and the instruction to add "use-proxy-protocol", but I don't use an Ingress, only this LoadBalancer Service, which is backed by a NodeBalancer in LKE.
How can I restrict network access to my LoadBalancer without using an Ingress?
I can confirm that the ability to add a Cloud Firewall to your NodeBalancer is currently in development. However, this feature has not yet been released and I don't have an ETA to share.
With that in mind, we have some alternative documentation on securing your LKE cluster nodes using iptables rules. However, you'll need to leave certain ports open to allow for management of the cluster.
Hi Linode,
This feature seems to be available here: https://github.com/linode/linode-cloud-controller-manager/tree/main
But when I add the annotation "service.beta.kubernetes.io/linode-loadbalancer-firewall-acl" to my Service (with the allowed IPs),
a NodeBalancer is not being created and the Service is stuck in "pending". Why?
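Concretely, I'm setting it the way I understand the README to describe, with a JSON allow list as the annotation value (the IP ranges here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # per the linode-cloud-controller-manager README: a JSON document
    # containing either an allowList or a denyList (not both)
    service.beta.kubernetes.io/linode-loadbalancer-firewall-acl: |
      {
        "allowList": {
          "ipv4": ["203.0.113.0/24"],
          "ipv6": ["2001:db8::/32"]
        }
      }
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```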
Also, I deployed my cluster with Terraform, but when searching for the Linode CCM with kubectl get ds -n kube-system there is no CCM deployment. Still, for other LoadBalancer Services (without the annotation mentioned above), NodeBalancers ARE being created. How?
- How can I see the logs of the CCM? How can I know whether it is deployed or not, even when a NodeBalancer is auto-created?
- If the Linode CCM is deployed by default when creating a cluster with Terraform, why can't I see it in my cluster?
- How can I restrict network access using this annotation?
There are several reasons why a Service can be stuck pending in Kubernetes. Typically, more information about why that's happening can be seen by describing the Service (kubectl describe service <name>) and checking for any events listed. Without more information, I can't say why your Services specifically were stuck pending.
I'd also like to try to answer your questions about the CCM.
How can I see the logs of the CCM? How can I know whether it is deployed or not, even when a NodeBalancer is auto-created?
The CCM is part of your cluster's Control Plane, which is the managed portion of your LKE cluster, so I don't believe you'll have much access to logs for that component. I haven't personally run across examples of it failing to deploy to a cluster, so I don't think that should be a huge concern. That said, if you try to create a NodeBalancer and it fails, you can reach out in a Support Ticket and we can access some additional information from the Control Plane. We would need precise timestamps of when any failures occurred, information about exactly what you did, and anything on your end that could be preventing things from working, but we should be able to get some additional information.
If the Linode CCM is deployed by default when creating a cluster with Terraform, why can't I see it in my cluster?
Users don't have access to the Control Plane components. They are managed on our end and don't show up in the cluster.
How can I restrict network access using this annotation?
I haven't been able to recreate your issue, but I did see a few other reports of problems with that annotation, so we'll keep investigating from our side. If we find a bug or an issue in our documentation, we'll address it. In the meantime, if that annotation isn't working for you, you can instead use the annotation for a user-managed firewall and manage the firewall another way, such as with Cloud Manager or our API.
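For reference, based on the CCM README, attaching a firewall you manage yourself would look something like this, where the numeric ID belongs to a Cloud Firewall you've already created in Cloud Manager or via the API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # attach an existing, user-managed Cloud Firewall by its numeric ID
    # ("12345" is a placeholder for your real firewall ID)
    service.beta.kubernetes.io/linode-loadbalancer-firewall-id: "12345"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```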
While we're happy to check that there are no issues with your Control Plane and that our documentation is up to date, you may also want to try a different method of applying this annotation to see if the issue lies with the tool you're using to manage your cluster.