The Linode Kubernetes module for Terraform

This post will cover some of the decisions and features of the Linode Kubernetes Terraform module that provisions Kubernetes on Linodes. The module can be used directly through Terraform or indirectly, via the Linode CLI.

Kismet: Terraform and Kubernetes

In October 2018, the release of the Linode Terraform Provider was imminent. The newly formed 3rd Party Tools (3PT) and Kubernetes teams were looking for ways to test the Terraform Provider in real-world use cases. The teams were collectively interested in demonstrating the extensibility of Terraform and providing an official base for Linode Kubernetes users.

The pieces of this puzzle (Terraform, CSI, CCM, ExternalDNS, LinodeCLI plugins) were being put together separately and the timing worked out that they could all fit together in time for KubeCon.

KubeCon 2018 and the Linode CLI

At the Linode booth at KubeCon, the latest addition to the Linode CLI was unveiled. The linode-cli k8s-alpha sub-command creates a Kubernetes cluster tailored to Linode infrastructure.

pip install linode-cli
linode-cli k8s-alpha create mycluster

The cluster comes equipped with controllers pre-configured to create NodeBalancers, Block Storage, and Linode Domain records as needed and defined in cluster resources. Note the "alpha" in the command name. These clusters make a useful playground, but will require more hardening and tooling for use in production.
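As a quick illustration of those pre-configured controllers (the claim name here is hypothetical; the CSI's default StorageClass satisfies the request), declaring a PersistentVolumeClaim is enough to have a Block Storage Volume created on demand:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # the CSI provisions a matching Block Storage Volume
EOF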

A Terraform Module

Terraform modules allow Terraform configurations to be reused. An independently maintained module can expose the outputs of its child modules, which can then be consumed by new configurations and modules.

Any Terraform configuration can be turned into a module. There are some common practices that keep a module truly reusable, including conventional file names and path structures. Once created, a module can easily be published to the Terraform Module Registry.
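As a sketch of that output-forwarding pattern (the child module name here is hypothetical), a parent module might re-export a child module's output like so:

# outputs.tf of a parent module (hypothetical names)
output "kubectl_config" {
  # surface the child module's kubeconfig path to consumers of this module
  value = "${module.masters.kubectl_config}"
}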

Using a Terraform module is also straightforward.

module "k8s" {
  source  = "linode/k8s/linode"
  version = "0.0.6"

  linode_token = "${var.linode_token}"
}

provider "kubernetes" {
  config_path = "${module.k8s.kubectl_config}"
}

With these two stanzas, a Kubernetes cluster will be provisioned on Linode infrastructure, and the Kubernetes certificate configuration will initialize the Terraform Kubernetes provider. The Kubernetes provider then offers unique capabilities geared toward external state management of the cluster.
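For example, with the provider initialized this way, Terraform can manage in-cluster objects alongside the infrastructure. A minimal sketch using the provider's kubernetes_namespace resource:

resource "kubernetes_namespace" "staging" {
  metadata {
    # the namespace is created, tracked, and destroyed by Terraform
    name = "staging"
  }
}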

Linode, making modules

Before making Linode's first module, the team started with a NodeBalanced Nginx / MySQL / WordPress demo configuration. This thorough configuration verified the cross-resource functionality of Instances, NodeBalancers, StackScripts, Domains, Images, and more. While this certainly put the Linode Provider through its paces, it is perhaps too opinionated for a general-purpose reusable module. With this experience, we were prepared to attempt something with broader appeal.

While working on Terraform, the team was also hard at work on the Linode Kubernetes CSI driver and the Linode Kubernetes CCM. A very practical test of both the new Terraform provider and these projects would be to bring up a Kubernetes environment where the CSI (Container Storage Interface) and the CCM (Cloud Controller Manager) addons could be tested. Having a quick way to spin up and bring down a Linode-integrated Kubernetes environment would continue to be useful long after the testing phases.

More details about the Linode Kubernetes Addons can be found in the project's README.md under "Addons Included".

Design Goals

In a base Kubernetes module, we wanted something that was, or could become, modular. We wanted something that spun up clusters very quickly. We wanted something that was easy to understand and easy for others to extend. We wanted something light on opinions, because we had enough opinions of our own, including the default Persistent Volume StorageClass provided by the CSI driver.

Choice of forks

There is no shortage of Kubernetes modules and configurations available on GitHub. Each of the repositories we considered came with its own opinions on what makes a good starting point for a cluster.

Many of these repositories depended on Terraform to provision the infrastructure, and on kubeadm to bootstrap the Kubernetes cluster on the otherwise bare Linux nodes.

Operating System

Fedora, Ubuntu, CentOS, and CoreOS are all popular choices for Kubernetes environments. Linode supports images for all of these distributions, but we ultimately chose CoreOS Container Linux for its readiness to launch Kubernetes out of the box and for its container-focused design.

Networking

Linode's private 192.168.128.0/17 network, shared within each zone, offers an unmetered backplane for node communication. This has obvious advantages for what will likely be noisy inter-node traffic.

Another advantage of the private network is that Linode NodeBalancers can deliver public traffic to Linodes over this private IP range. One of the features of a Kubernetes CCM is to configure NodeBalancers to deliver traffic to the appropriate Kubernetes node ports for services. The Linode Kubernetes Terraform module would definitely want to include support for this.
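Concretely, that support means a plain Service of type LoadBalancer (a hypothetical manifest) is all it takes for the CCM to provision a NodeBalancer targeting the cluster's node ports:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-lb   # hypothetical name
spec:
  type: LoadBalancer   # the CCM provisions a Linode NodeBalancer for this
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
EOF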

There are many more networking decisions that were made, revisited, and will be revisited again. So much so that networking choices (including CNI choices and kube-proxy options) will need to be the topic of another article.

Launch Branch

At KubeCon 2018, we finally unveiled an enhancement to the Linode CLI that deploys the Linode Kubernetes Terraform module and all the pieces we set out to introduce to customers. In the few months since its release, we've taken in feedback that will improve this Linode Kubernetes experience and others to come.

Usage

These instructions are condensed from the project README.md. Be sure to install the recommended tools, namely Terraform and kubectl, and configure your SSH agent to use ~/.ssh/id_rsa.
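Configuring the agent typically looks like this (assuming the key lives at the default path):

eval "$(ssh-agent -s)"   # start an agent if one isn't already running
ssh-add ~/.ssh/id_rsa    # load the key the module uses to SSH into the nodes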

Using the Linode CLI

The linode-cli command simplifies the Kubernetes cluster install. When using linode-cli k8s-alpha create mycluster, a Terraform configuration file is managed behind the scenes. Skip ahead to the demonstration section.

Using the Terraform Module directly

To provision Kubernetes on Linode using the module directly, create a main.tf file in a new directory with the following contents, including a Linode API Token.

module "k8s" {
  source  = "linode/k8s/linode"
  linode_token = "YOUR TOKEN HERE"
}

That's all it takes to get started!

Choose a Terraform workspace name (the default is default). In this example we'll use linode. The workspace name will be used as a prefix for Linode resources created in this cluster, for example: linode-master-1, linode-node-1. Alternate workspaces can be created and selected to change clusters.

terraform workspace new linode
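Workspaces can be listed and re-selected at any time, which is how you switch between clusters:

terraform workspace list            # the current workspace is marked with *
terraform workspace select linode   # switch back to the linode cluster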

Create a Linode Kubernetes cluster with one master and a node:

terraform init
terraform apply \
  -var region=eu-west \
  -var server_type_master=g6-standard-2 \
  -var nodes=1 \
  -var server_type_node=g6-standard-2

This will do the following:

  • provisions Linode Instances in parallel with CoreOS Container Linux
  • connects to the Linode Instances via SSH and installs kubeadm, kubectl, and other Kubernetes binaries to /opt/bin
  • installs the Calico CNI between Linode Instances
  • runs kubeadm init on the master server and configures kubectl
  • joins the nodes in the cluster using the kubeadm token obtained from the master
  • installs Linode add-ons: CSI (Linode Block Storage Volumes), CCM (Linode NodeBalancers), External-DNS (Linode Domains)
  • installs cluster add-ons: the Kubernetes dashboard, metrics server, and Heapster
  • copies the kubectl admin config file for local kubectl use via the public IP of the API server

A full list of the supported variables is available in the Terraform Module Registry.
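Rather than repeating -var flags on every apply, the same settings (only variables already shown above) can live in a terraform.tfvars file, which Terraform loads automatically:

# terraform.tfvars
region             = "eu-west"
server_type_master = "g6-standard-2"
server_type_node   = "g6-standard-2"
nodes              = 1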

After applying the Terraform plan, run terraform output --module=k8s to see output variables like the master and node public IP addresses, the kubeadm join command, and the current workspace admin config file (for use with kubectl).

$ ls
linode.conf          main.tf             terraform.tfstate.d

To use this certificate config file, point the KUBECONFIG environment variable at it (assuming the workspace name was linode):

$ export KUBECONFIG=$(pwd)/linode.conf

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
linode-master-1   Ready    master   14m   v1.13.0
linode-node-1     Ready    <none>   12m   v1.13.0

This cluster is now ready for the demonstration section.

Scaling and Destroying

The cluster node count can be scaled up by increasing the number of Linode Instances acting as nodes:

terraform apply -var nodes=3

Tear down the whole infrastructure with:

terraform destroy -force

When destroying the cluster, be sure to clean up any Block Storage Volumes created by the CSI and any NodeBalancers created by the CCM that are no longer required.
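One way to audit for leftovers is with the Linode CLI (read-only list commands; delete only what you recognize as cluster remnants):

linode-cli volumes list         # Block Storage Volumes created by the CSI
linode-cli nodebalancers list   # NodeBalancers created by the CCM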

Demonstration, Please

To do something interesting with this cluster, start by installing helm locally, and then configuring the cluster to work with Helm and Tiller.

kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

Now the cluster is ready to consume Helm charts.

Jupyter Notebooks are a convenient way to collaborate and remix Python code and data. JupyterHub provides a multi-user experience for Jupyter Notebooks.

Install JupyterHub in the cluster using the following commands:

echo -e "proxy:
  secretToken: $(openssl rand -hex 32)
hub:
  db:
    pvc:
      storage: 10737418240" | tee config.yaml

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update

helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub  \
  --version 0.7.0 \
  --values config.yaml

Wait for the service to become available:

kubectl get service --namespace jhub

Load the EXTERNAL-IP in your browser and authenticate with any username and password (guest/guest for example).
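Adding --watch saves polling by hand; the service is ready once EXTERNAL-IP leaves <pending>:

kubectl get service --namespace jhub --watch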

When finished with the demo, remove JupyterHub with:

helm delete --purge jhub
kubectl delete namespace jhub

Limitations and Future Enhancements

  • Upgrading - Future enhancements will include a null-resource provisioner that will attempt to upsert a specified Kubernetes version into the configuration in such a way that an upgrade will be attempted if a cluster has already been created.

  • Firewall - Restricting access to the Kubernetes nodes via iptables is an area of interest, especially because users may not be aware of the openness of this network. There are questions to resolve about what to restrict and when. Should Terraform manage node-to-node address rules on each node with a default DROP policy, revising the rules on every node as nodes are added and removed from the cluster? Should the CCM regulate all ports that are not defined by known Kubernetes services? A sketch of the kind of rules in question follows this list.

  • Backups - Etcd data, which is the datastore of the Kubernetes API, is the lifeblood of the cluster. This data must be snapshotted regularly and securely for restoration to be possible.

  • Updates - While CoreOS will attempt to apply updates on a regular interval, it is possible that all of the nodes will attempt to upgrade at the same time. For this reason, the alpha version of the module has disabled this feature. A CoreOS Update Operator can schedule updates and reboots to prevent cluster downtime.
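To make the firewall question above concrete, here is an illustrative sketch of the kind of per-node rules under discussion. Nothing like this ships in the module today; the ports and the blanket trust of the private range are exactly the assumptions being debated:

iptables -P INPUT DROP                                    # default-deny inbound
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s 192.168.128.0/17 -j ACCEPT           # trust the shared private backplane (too broad; the open question)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT             # SSH for Terraform provisioning
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT           # kube-apiserver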

Credits

  • Ciaran Liedeman for work on the Terraform Linode K8s module and External-DNS support
  • Stefan Prodan for previous work on the Terraform K8s Module

1 Reply

I have created a k8s cluster on Linode using the following command.

linode-cli k8s-alpha create example-cluster --nodes=2

And then scaled it up using the following steps:

cd .k8s-alpha-linode/k8s-cluster
terraform apply \
  -var region=ap-south \
  -var server_type_master=g6-standard-2 \
  -var nodes=3 \
  -var server_type_node=g6-standard-2

Thanks
