Supporting Taints in LKE Node Pools?

LKE's managed node pools are a nice feature. I can use node selectors to target nodes for workloads; however, I would like to add taints to nodes in certain node pools so I can segment workloads using the Kubernetes taints/tolerations mechanism.

This is a more flexible approach that doesn't require pinning to specific node attributes. Today I see labels that can be used to target nodes based on the name of the pool or the instance type, but if I want to segment workloads I have to know the dynamically generated pool ID, which is a bit of a pain to dig up.
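For reference, pinning to a node attribute today looks something like the following. This sketch assumes the well-known `node.kubernetes.io/instance-type` label is populated on LKE nodes; the instance type shown is only an example value:

```yaml
# Sketch: pin a pod to nodes of a particular instance type.
# Assumes node.kubernetes.io/instance-type is set on the node;
# "g6-standard-4" is just an example value.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: g6-standard-4
  containers:
    - name: app
      image: nginx
```

Note that this couples the workload to a specific node attribute, which is exactly the rigidity that taints/tolerations would avoid.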

In short, I feel that taints would give LKE clusters the scheduling flexibility we are used to in Kubernetes.

Feature Request

When creating a node pool, I would like to be able to specify a list of taints to apply to nodes in the pool. These taints would automatically be added to the nodes, and would prevent workloads without the corresponding toleration from running on them.
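As a sketch of what that could look like on the resulting nodes, the pool's taints would land in each node's spec, and the scheduler would then keep untolerating pods off those nodes. The taint key and value here are made up for illustration, not LKE-defined names:

```yaml
# Hypothetical result: each node in the pool carries the pool's taints.
apiVersion: v1
kind: Node
metadata:
  name: lke-example-node
spec:
  taints:
    - key: dedicated        # example key, not an LKE-defined name
      value: special
      effect: NoSchedule    # pods without a matching toleration won't schedule here
```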

I'd expect this to be added to the Linode API LKE Pools endpoints, as well as the Terraform provider.

Another option would be to allow some additional kubelet args, which could include things like taints, but that would be a pretty dirty API.

Examples in the Wild

The following are good examples of this kind of node-level customization. Additional labels or naming of pools could also make it even easier to assign workloads to nodes.

Any plans to add this feature?

4 Replies

Linode Staff

Great idea! Our LKE team does plan to add support for taints at the node level, and while we don't have an exact time frame as to when this would be implemented, it's definitely something we're planning. I also let the LKE team know about your thoughts on this so they know what type of implementation would be most useful to you.

We'd also be interested in getting something like this, particularly to keep nodes "not ready" until certain DaemonSets have started up.

I am also extremely interested in this feature at the pool level (or node level, provided the taint sticks around after a recycle operation).

Is there any timeline?

Oh, and for anyone curious, you can currently taint a node with: `kubectl taint node lke* dedicated=special-web-app:NoSchedule`


  • It won't persist through a recycle (that's Linode)
  • It won't move existing pods (that's k8s)
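Once the taint from that command is in place, a workload needs a matching toleration to schedule onto the node. A minimal example for the `dedicated=special-web-app:NoSchedule` taint above:

```yaml
# Pod with a toleration matching dedicated=special-web-app:NoSchedule.
apiVersion: v1
kind: Pod
metadata:
  name: special-web-app
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: special-web-app
      effect: NoSchedule
  containers:
    - name: web
      image: nginx
```

The toleration only permits scheduling on the tainted node; combine it with a node selector or affinity if the pod must land there and nowhere else.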

I'm looking to experiment with using metacontroller to auto-taint nodes for a "somewhat restricted" node environment (metacontroller should taint the node before the scheduler is ready, but it's not guaranteed).

