We’ve added horizontal cluster autoscaling support to the Linode Kubernetes Engine (LKE). This new feature for our managed Kubernetes service gives you the ability to create and destroy nodes in real time based on resource limits. Autoscaling makes managing node pools more efficient, resulting in highly available and stable applications.
Kubernetes allows applications or workloads to scale at both the pod and cluster levels. Horizontal Pod Autoscaling is native to Kubernetes and scales the number of pod replicas available to your containers based on observed resource usage. Horizontal Cluster Autoscaling automatically scales the number of nodes in each cluster up or down, within minimum and maximum thresholds you set for the node pool, based on the pods scheduled in your cluster.
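To make the pod-level half of this concrete, here is a small sketch of the scaling formula documented for the Kubernetes Horizontal Pod Autoscaler (the function name is ours, for illustration):

```python
from math import ceil

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Replica count the HPA would request, per the documented formula:

    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    """
    return ceil(current_replicas * current_metric / target_metric)

# Example: 4 replicas averaging 90% CPU against a 60% target scale to 6.
print(desired_replicas(4, 90, 60))  # 6
```

When those extra replicas have nowhere to schedule, that is where cluster-level autoscaling takes over and adds nodes.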
The Horizontal Cluster Autoscaler for LKE
For easy management of your clusters, the new horizontal cluster autoscaler:
- Scales your cluster resources and node pools for maximum efficiency;
- Optimizes cluster resources by regularly checking for unassigned or unneeded nodes;
- Can be enabled without changes to your existing configuration, and managed using our API; and is
- Supported by Linode’s dedicated, shared, and high memory compute instances.
How It Works
Enabling the cluster autoscaler is simple and doesn’t require any changes to your existing configuration.
- Visit the cluster’s details page and click the Autoscale Pool option
- On the Autoscaler menu, change the feature to “on”
- Once enabled, set your minimum and maximum values between 1 and 99
- Save your changes to activate the cluster autoscaler
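If you prefer to manage the autoscaler through the API, the same settings can be applied with a single node pool update. The sketch below is a minimal example using only the standard library; the endpoint path and `autoscaler` payload shape follow Linode API v4 conventions, but verify both against the current API documentation before relying on them:

```python
import json
import urllib.request

def enable_autoscaler(cluster_id: int, pool_id: int, token: str,
                      minimum: int, maximum: int) -> None:
    """Enable the LKE node pool autoscaler with the given min/max limits."""
    # The console enforces values between 1 and 99; validate the same way here.
    if not (1 <= minimum <= maximum <= 99):
        raise ValueError("min and max must satisfy 1 <= min <= max <= 99")
    body = json.dumps({"autoscaler": {"enabled": True,
                                      "min": minimum,
                                      "max": maximum}}).encode()
    # Assumed endpoint, based on Linode API v4 conventions -- check the docs.
    req = urllib.request.Request(
        f"https://api.linode.com/v4/lke/clusters/{cluster_id}/pools/{pool_id}",
        data=body,
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses
```

Setting `"enabled": False` in the same payload would turn the autoscaler back off.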
The min and max values selected for the autoscaler bound the number of nodes in the node pool. For example, a minimum of 10 allows no fewer than ten nodes in the node pool, while a maximum of 10 allows no more than ten. The autoscaler treats these values strictly as limits; it applies no additional logic to minimize or maximize resources in your cluster.
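The "limits only" behavior can be summed up as a simple clamp. This is an illustrative sketch (the function name is ours, not part of any API):

```python
def plan_node_count(desired: int, minimum: int, maximum: int) -> int:
    """Clamp a desired node count to the pool's autoscaler limits.

    Min and max act purely as bounds: the autoscaler never adds or removes
    nodes just to reach them, it only refuses to scale past them.
    """
    return max(minimum, min(desired, maximum))

print(plan_node_count(3, 5, 10))   # 5  -- pool won't shrink below the minimum
print(plan_node_count(12, 5, 10))  # 10 -- pool won't grow past the maximum
print(plan_node_count(7, 5, 10))   # 7  -- within limits, the desired count stands
```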
Read the guide on Cluster Autoscaling, browse our extensive library of educational resources, and check out all of our Kubernetes documentation here.
Bravo. More work on the firewall, then we can move DO deployments back to Linode.
Superb. There are other things that are very important when setting up managed cloud Kubernetes. We are really missing a linode_lke_pool Terraform resource and also pool labels. DO has these capabilities, and they are really, really needed. We are evaluating Linode and DO for our production deployment; we really love Linode but are tempted to go with DO because of this.