Supporting Taints in LKE Node Pools?

The managed node pools feature of LKE is convenient. I can use node selectors to target nodes for workloads; however, I would like to add taints to nodes in certain node pools so I can segment workloads using the taints/tolerations mechanism in Kubernetes.

This is a more flexible approach that doesn't require pinning to specific node attributes. Today I see labels like `lke.linode.com/pool-id` and `beta.kubernetes.io/instance-type`, which can be used to target nodes by pool or instance type. If I want to segment workloads, I have to know the dynamically generated pool ID, which is a bit of a pain to dig up.
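For context, targeting a pool with the existing labels looks something like this (the pool ID `12345` below is a placeholder — in practice you have to look up the real, auto-generated ID first):

```yaml
# Pinning a Deployment to a pool via the auto-generated pool-id label.
# "12345" is a placeholder; the actual ID must be dug out of the API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        lke.linode.com/pool-id: "12345"
      containers:
        - name: web
          image: nginx:1.25
```

This works, but it only keeps this workload on these nodes; it does nothing to keep other workloads off them, which is exactly what taints would add.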

In short, I feel taints would give LKE clusters the workload-segmentation flexibility we are used to in Kubernetes.

Feature Request

When creating a node pool, I would like to be able to specify a list of taints to apply to nodes in the pool. These taints would automatically be added to the nodes, and would prevent workloads without the corresponding toleration from running on them.

I'd expect this to be added to the Linode API LKE Pools endpoints, as well as the Terraform provider.
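To make the ask concrete, here is a purely hypothetical sketch of what this could look like in the Terraform provider — the `taint` block does not exist in the `linode_lke_cluster` resource today; the surrounding arguments are real, the taint part is what I'm wishing for:

```hcl
# Hypothetical sketch: the "taint" block is NOT a real argument of
# linode_lke_cluster today -- it illustrates the requested feature.
resource "linode_lke_cluster" "example" {
  label       = "my-cluster"
  k8s_version = "1.28"
  region      = "us-east"

  pool {
    type  = "g6-standard-2"
    count = 3

    taint {
      key    = "dedicated"
      value  = "special-web-app"
      effect = "NoSchedule"
    }
  }
}
```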

Another option would be to allow passing additional kubelet arguments (e.g. `--register-with-taints`), but that would be a pretty dirty API.

Examples in the Wild

The following are good examples of this kind of node-level customization. Additional labels, or naming of pools could also make it even easier to assign workloads to nodes.

Any plans to add this feature?

8 Replies

Great idea! Our LKE team does plan to add support for taints at the node level, and while we don't have an exact time frame as to when this would be implemented, it's definitely something we're planning. I also let the LKE team know about your thoughts on this so they know what type of implementation would be most useful to you.

We'd also be interested in getting something like this, particularly to keep nodes "not ready" until certain DaemonSets have started up. https://github.com/kubernetes/kubernetes/issues/75890#issuecomment-725792993

I am also extremely interested in this feature at the pool level (or node level, provided the taint sticks around after a recycle operation).

Is there any timeline?

Oh, and for anyone curious, you can currently taint a node with `kubectl taint node lke* dedicated=special-web-app:NoSchedule`.

However:

  • It won't persist through a recycle (that's Linode)
  • It won't move existing pods (that's Kubernetes)
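For completeness, a Pod that should still land on nodes tainted as above needs a matching toleration, e.g.:

```yaml
# Toleration matching the taint dedicated=special-web-app:NoSchedule.
# Pods without this toleration will not be scheduled onto tainted nodes.
apiVersion: v1
kind: Pod
metadata:
  name: special-web-app
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "special-web-app"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx:1.25
```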

I'm looking to experiment with using metacontroller to auto-taint nodes for a "somewhat restricted" node environment (metacontroller should taint the node before the scheduler is ready, but it's not guaranteed).

I think this is a must-have feature. Why isn't it available yet? We considered migrating from AWS to Linode, but it seems that even basic functionality is incomplete.
Are you planning to add this to the panel and Terraform?

Very disappointing that this isn't available; how hard could it be to implement? This is forcing me off of Linode. Not being able to auto-taint nodes makes it pretty much impossible to build workflows that require horizontal scaling in a cost-effective fashion.

We appreciate hearing from our customers on how our platform can be improved, including this feedback regarding a taint feature for LKE. I went ahead and added your requests for this feature to our internal tracking. There's currently no timeline or ETA for LKE taint support, but keep an eye on our blog for all updates and announcements regarding our services.

It looks like people have to move their Kubernetes clusters away from Linode due to many limitations, like missing custom node pool labels and taints.
