Securing k8s cluster

Hi,

I have created a k8s cluster using the following command.

linode-cli k8s-alpha create example-cluster --nodes=20

Now, before making it live in production, I want to make it as secure as possible. I noticed that if I deploy any app on it, it is open to the world by default. And if I try to add any iptables rules, the k8s cluster crashes.

How can I make it secure?

7 Replies

Currently it does make sense for you to add iptables rules to your cluster Nodes if you feel the need to. For example, you can disable SSH this way. However, all listening Kubernetes (control plane) services running on the Nodes are authenticated with mutual TLS, and your workloads can only be accessed via authenticated Calico IP-in-IP tunnels, unless you have exposed the workload to the Internet with a "Service" (see below).

These ports must be left open:

  • TCP port 10250 inbound from 192.168.128.0/17, Kubelet health checks
  • UDP port 51820 inbound from 192.168.128.0/17, Wireguard tunneling for kubectl proxy
  • TCP 179 inbound from 192.168.128.0/17, Calico BGP traffic
  • TCP/UDP port 30000 - 32767 inbound from All, NodePorts for workload Services

If you find that your cluster is non-functional with these ports exposed, please let me know.

If you don't want a workload (Deployment, StatefulSet, or DaemonSet) exposed to the Internet, you can delete the corresponding Service object or change the Service type to ClusterIP. The Pods will continue to run on the cluster and have Pod IPs within the cluster in the range 10.2.0.0/16. If you retain a ClusterIP Service for the workload, it will also be fronted by a distributed proxy IP in the range 10.128.0.0/16 within the cluster. You can reach these IPs on the cluster Nodes, or by using the kubectl proxy or kubectl port-forward commands from your local machine. This is absolutely appropriate for in-cluster-only services such as databases or backend services. These services can be secured within the cluster using NetworkPolicy, which in the case of LKE is currently implemented by Calico.
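
As a rough sketch (the names my-app, default, and role=frontend are placeholders for this example, not anything specific to your cluster), switching a Service to ClusterIP and then restricting in-cluster access with a NetworkPolicy could look like this:

    # Stop exposing the workload externally by switching its Service to ClusterIP
    kubectl -n default patch service my-app -p '{"spec": {"type": "ClusterIP"}}'

    # Only admit traffic to the workload's Pods from Pods labeled role=frontend
    # in the same namespace; everything else in the cluster is denied
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: my-app-allow-frontend
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: my-app
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  role: frontend
    EOF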

You can optionally expose a ClusterIP service with an Ingress resource, which will be automatically proxied by an Ingress controller that you deploy. The Ingress controller will itself be a NodePort or LoadBalancer Service. If you choose ingress-nginx for example, then you can have it terminate TLS for your services by associating them with TLS Secrets.
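
As a rough sketch (my-app, example.com, and the cert/key file names are placeholders, and this assumes the newer networking.k8s.io/v1 Ingress API is available in your cluster), TLS termination with ingress-nginx could look like this:

    # Store the certificate and key as a TLS Secret
    kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key

    # Ingress that terminates TLS for example.com and proxies to the ClusterIP Service
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - example.com
          secretName: my-app-tls
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80
    EOF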

The following is the most robust way to expose workloads to the Internet with Kubernetes:

Internet traffic to
-> Ingress controller LoadBalancer Service (automatically configured NodeBalancer) to
-> Workload ClusterIP Service to
-> Workload Pods

This way, your workloads can only be reached from the Internet via the Ingress, and they expose only a Service network IP in the range 10.128.0.0/16, which can only be reached from within the cluster. All Services also have DNS names within the cluster, which is the preferred way to reach them.
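
For example, assuming a Service named my-app in the default namespace (both placeholders), you can check its in-cluster DNS name from a throwaway Pod:

    # Service DNS names follow <service>.<namespace>.svc.cluster.local
    kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
        wget -qO- http://my-app.default.svc.cluster.local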

If you do expose your workload with a NodePort or LoadBalancer Service (without using an Ingress), then it's up to you to ensure that the workload has appropriate authentication (via TLS or other means). You can do this by configuring the workload resource (Deployment, StatefulSet, or DaemonSet) with TLS material and authentication configuration that will be specific to the workload that you're running (for example a database or web application).
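
As a rough sketch (my-app, the /etc/tls mount path, and the cert/key file names are placeholders, and this assumes the Deployment has no existing volumes), handing TLS material to a workload could look like this; how the application actually loads the certificate and key is specific to the application you're running:

    # Store the workload's certificate and key as a TLS Secret
    kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key

    # Mount the Secret into the first container of the Deployment at /etc/tls
    kubectl patch deployment my-app --type='json' -p='[
      {"op": "add", "path": "/spec/template/spec/volumes",
       "value": [{"name": "tls", "secret": {"secretName": "my-app-tls"}}]},
      {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts",
       "value": [{"name": "tls", "mountPath": "/etc/tls", "readOnly": true}]}
    ]'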

As a recap, Kubernetes workloads can be reached in one or more of the following ways:

  • By Pod IP, 10.2.0.0/16, in-cluster only.
  • By Service IP, 10.128.0.0/16, in-cluster only and if the workload has an associated Service resource of type ClusterIP, NodePort, or LoadBalancer (see the kubectl port-forward example after this list).
  • By NodePort, a port on the Nodes in the range 30000-32767, from the Internet if the workload has an associated Service resource of type NodePort or LoadBalancer.
  • By LoadBalancer, an automatically configured NodeBalancer, from the Internet if the workload has an associated Service resource of type LoadBalancer.
  • By Ingress (an HTTP hostname or path), fronted by an Ingress controller of your choice, if the workload has associated Service and Ingress resources. The Ingress controller should be deployed as a LoadBalancer Service and your workload should be deployed as a ClusterIP Service.
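
For the in-cluster-only cases above, a quick way to reach a workload from your local machine without exposing it to the Internet is kubectl port-forward (the Service name and ports here are placeholders):

    # Forward local port 8080 to port 80 of the my-app Service
    kubectl port-forward service/my-app 8080:80
    # Then, in another terminal:
    curl http://localhost:8080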

For more detail, please refer to the Kubernetes documentation on Services.

Thanks dude,

These ports must be left open:

TCP port 10250 inbound from 192.168.128.0/17, Kubelet health checks
UDP port 51820 inbound from 192.168.128.0/17, Wireguard tunneling for kubectl proxy
TCP 179 inbound from 192.168.128.0/17, Calico BGP traffic
TCP/UDP port 30000 - 32767 inbound from All, NodePorts for workload Services

But what about DDoS-type attacks?

DDoS is mitigated by Linode's built-in DDoS detection and prevention systems.

Here's an additional note on firewalling LKE:

In an LKE cluster, neither of the following types of workload endpoints can be reached from the Internet:

  • Pod IPs, which use a per-cluster virtual network in the range 10.2.0.0/16
  • ClusterIP Services, which use a per-cluster virtual network in the range 10.128.0.0/16

All of the following types of workloads can be reached from the Internet:

  • NodePort Services, which listen on all Nodes with ports in the range 30000-32767.
  • LoadBalancer Services, which automatically deploy and configure a NodeBalancer.
  • Any manifest which uses hostNetwork: true and specifies a port.
  • Most manifests which use hostPort and specify a port.

Exposing workloads to the public Internet through the above methods can be convenient, but it can also carry a security risk. You may wish to manually install firewall rules on your cluster nodes; to do so, please see this community post. Linode is developing services which will allow for greater flexibility for the network endpoints of these types of workloads in the future.

I'd like to chime in with some additional info from some experiments I've run on LKE.

As mentioned above, LKE nodes are quite open by default (e.g. the SSH port is open). On the other hand, I wasn't able to find any information in the documentation on how to set a StackScript for those nodes, because they seem to be provisioned from Linode's internal templates (I'm happy to be corrected on this :) ). For me this means an open SSH port with an unknown SSH configuration.

I wanted to automate/simplify node provisioning as much as possible, so I tried to figure out how I could change the node configuration after it's provisioned, via automation. Luckily kubectl node-shell (https://github.com/kvaps/kubectl-node-shell) works just fine, so I was able to use nsenter to change things on the node directly. It's not ideal because it's not tied to the node's provisioning, but it's good enough for now, considering there is a firewall on Linode's roadmap.
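
For example, with the plugin installed (the node name below is a placeholder; list the real ones with kubectl get nodes):

    # Open a root shell on a node; the plugin uses nsenter under the hood
    kubectl get nodes
    kubectl node-shell lke-example-node-1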

What I did was create a DaemonSet which (periodically) applies my custom firewall on every node, with the firewall script stored in a ConfigMap (both in the kube-system namespace); a minimal sketch of this follows the iptables rules below. I could have done other things as well (like shutting down SSH on the node, since I don't plan on using it), but the firewall seemed like the most universal thing here.

After some trial and error I've determined that nsenter --target 1 --net -- sh /path/to/my-firewall-script.sh was sufficient (--net is needed to modify the node's iptables), after making sure the container has iptables installed (I used a basic Alpine image). I didn't use the --mount option because it changes your paths and accessing ConfigMap mounts becomes a bit more problematic (they are hidden in the pod's directory on the host). I used the rules with the ports listed above.

Despite being a somewhat hacky solution, this way every new node will eventually get the proper firewall (it's just a matter of the DaemonSet pods starting on it).

I wrote this here both to bounce the idea off more people and to see if someone has a better, less hacky idea. If it inspires someone to come up with a better solution, I'll be happy to check it out too.

Here are the iptables rules I used; they work for me, based on the info listed by asauber above:

    # Remove any existing jump and chain so the rules can be re-applied cleanly
    iptables -vD INPUT -j node-firewall
    iptables -vF node-firewall
    iptables -vX node-firewall
    # Recreate the chain and hook it into INPUT
    iptables -vN node-firewall
    iptables -vA INPUT -j node-firewall
    # Keep established/related connections working
    iptables -vA node-firewall -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # Kubelet, Wireguard, and Calico BGP (TCP 179) from the private network
    iptables -vA node-firewall -p tcp --dport 10250 -s 192.168.128.0/17 -j ACCEPT
    iptables -vA node-firewall -p udp --dport 51820 -s 192.168.128.0/17 -j ACCEPT
    iptables -vA node-firewall -p tcp --dport 179 -s 192.168.128.0/17 -j ACCEPT
    # NodePorts for workload Services
    iptables -vA node-firewall -p tcp --dport 30000:32767 -j ACCEPT
    iptables -vA node-firewall -p udp --dport 30000:32767 -j ACCEPT
    # Reject everything else arriving on the public interface
    iptables -vA node-firewall -i eth0 -j REJECT
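
For completeness, here's a minimal sketch of the ConfigMap + DaemonSet approach described above, with the script above going into the ConfigMap (the names, the Alpine image, and the hourly re-apply interval are arbitrary placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-firewall
      namespace: kube-system
    data:
      firewall.sh: |
        #!/bin/sh
        # Paste the iptables rules from above into this script
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-firewall
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-firewall
      template:
        metadata:
          labels:
            app: node-firewall
        spec:
          hostPID: true              # so nsenter can target the node's PID 1
          containers:
            - name: firewall
              image: alpine:3.12
              securityContext:
                privileged: true     # required to change the host's iptables
              command:
                - /bin/sh
                - -c
                - |
                  apk add --no-cache iptables util-linux
                  while true; do
                    # --net only: host network namespace, container mount namespace,
                    # so the ConfigMap mount below stays visible to the script
                    nsenter --target 1 --net -- sh /config/firewall.sh
                    sleep 3600
                  done
              volumeMounts:
                - name: firewall-script
                  mountPath: /config
          volumes:
            - name: firewall-script
              configMap:
                name: node-firewall
    EOF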

I have experienced, three times now, that when there is a problem with the Linode network, my LKE cluster with firewalld enabled runs into problems: DNS stops working, even after I restart CoreDNS.

Is there any advice on how to properly secure the LKE cluster? I need some NodePorts to be accessible only by VMs in Linode, but not accessible from the Internet.

Hello, I have an issue very much related to this! I'm trying to secure an LKE cluster using Calico's GlobalNetworkPolicy, with rules based on this guide (a rough sketch of the resulting policy follows the list below):

  • Allow all traffic from 192.168.128.0/17 so any host in my private network can talk to the k8s cluster: I believe this covers the first 3 recommended rules from @asauber. I also allow the pod/service CIDRs for good measure.
  • Deny all other incoming traffic (I don't want any ports open to the internet).
  • Allow all outgoing traffic from k8s.
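
For reference, here's roughly what those rules look like collapsed into a single GlobalNetworkPolicy (the name and order are arbitrary; in my cluster the deny rule actually lives in a separate policy, default.drop-other-ingress, mentioned below):

    calicoctl apply -f - <<'EOF'
    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: private-only-ingress
    spec:
      selector: all()
      order: 100
      ingress:
        # Allow the Linode private network plus the pod and service CIDRs
        - action: Allow
          source:
            nets:
              - 192.168.128.0/17
              - 10.2.0.0/16
              - 10.128.0.0/16
        # Deny everything else coming in
        - action: Deny
      egress:
        # Allow all outgoing traffic
        - action: Allow
    EOF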

Here are the effects of that:

  • A NodePort service is available from within the private network, but blocked when using the host's public IP -- awesome.
  • Traffic from a NodeBalancer can still come in as it's from 192.168.whatever.
  • AFAICT the cluster & overlay network magic runs normally.
  • I can still interact via kubectl, and SSH works: because of Calico's nice failsafe rules I assume.
  • Here's the problem: kubectl exec no longer works, even in cases where it worked just before applying the rule. I was using this for calicoctl, and when the "Deny" rule is active it just hangs. If I kubectl edit globalnetworkpolicy default.drop-other-ingress to effectively disable the rule, exec starts working again.

So I'm puzzled, and don't understand the workings of kubectl exec well enough to know why that would be affected where other kubectl commands aren't. In either case it connects to the K8s API server, which then talks to the individual nodes, right?

  • The API server is allowed by a failsafe rule for port 6443. I'm speculating here but if exec uses a different port and doesn't come from a 192.168.128.0/17 address, then it might be blocked.
  • I tried allowing the API server's public IP with no luck, but can't find a private IP to know one way or the other.

Any advice appreciated - thanks in advance!

NB. I updated Calico to 3.16 to get automatic HostEndpoints, and it seems to be working well after tweaking the env/config back to its original state.
