Deploy and Manage K3s (a Lightweight Kubernetes Distribution)


K3s is a lightweight, easy-to-install Kubernetes distribution. Built for the edge, K3s includes an embedded SQLite database as the default datastore and supports external datastores such as PostgreSQL, MySQL, and etcd. K3s includes a command line cluster controller, a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller. It also automates and manages complex cluster operations such as distributing certificates. With K3s, you can run a highly available, certified Kubernetes distribution designed for production workloads on resource-light machines like 1GB Linodes (Nanodes).

Note
  • While you can deploy a K3s cluster on just about any flavor of Linux, K3s is officially tested on Ubuntu 16.04 and Ubuntu 18.04. If you are deploying K3s on CentOS where SELinux is enabled by default, then you must ensure that proper SELinux policies are installed. For more information, see Rancher’s documentation on SELinux support.
  • 1GB Linode (Nanode) instances are suitable for low-duty workloads where performance isn’t critical. Depending on your requirements, you can choose to use Linodes with greater resources for your K3s cluster.

Before You Begin

  1. Familiarize yourself with our Getting Started guide.

  2. Create two Linodes in the same region that are running Ubuntu 18.04.

  3. Complete the steps for setting the hostname and timezone for both Linodes. When setting hostnames, it may be helpful to identify one Linode as a server and the other as an agent.

  4. Follow our Securing Your Server guide to create a standard user account, harden SSH access, and create firewall rules to allow all outgoing traffic and deny all incoming traffic except SSH traffic on both Linodes.

    Note

    This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide.

    All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.

  5. Ensure that your Linodes are up to date:

    sudo apt update && sudo apt upgrade

Install K3s Server

First, you will install the K3s server on a Linode, from which you will manage your K3s cluster.

  1. Connect to the Linode where you want to install the K3s server.

  2. Open port 6443/tcp on your firewall so that other nodes in your cluster can reach the Kubernetes API server:

    sudo ufw allow 6443/tcp
  3. Open port 8472/udp on your firewall to enable Flannel VXLAN:

    Note

    Replace 192.0.2.1 with the IP address of your K3s Agent Linode.

    As detailed in Rancher’s Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.

    sudo ufw allow from 192.0.2.1 to any port 8472 proto udp
  4. (Optional) Open port 10250/tcp on your firewall to utilize the metrics server:

    sudo ufw allow 10250/tcp
  5. Set environment variables used for installing the K3s server:

    export K3S_KUBECONFIG_MODE="644"
    export K3S_NODE_NAME="k3s-server-1"
  6. Execute the following command to install K3s server:

    curl -sfL https://get.k3s.io | sh -
  7. Verify the status of the K3s server:

    sudo systemctl status k3s
  8. Retrieve the access token to connect a K3s Agent Linode to your K3s Server Linode:

    sudo cat /var/lib/rancher/k3s/server/node-token

    The expected output is similar to:

    abcdefABCDEF0123456789::server:abcdefABCDEF0123456789
  9. Copy the access token and save it in a secure location.
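Before using the token on an agent, it can help to confirm it was copied intact. The check below is a hypothetical sanity test, not part of the official install steps; it relies only on the fact, visible in the expected output above, that server node tokens contain the literal `::server:` separator. The token value shown is the placeholder from this guide.

```shell
# Hypothetical sanity check: K3s server node tokens contain "::server:".
# Replace the placeholder below with the token you copied.
TOKEN="abcdefABCDEF0123456789::server:abcdefABCDEF0123456789"
case "$TOKEN" in
  *::server:*) echo "token format looks valid" ;;
  *)           echo "unexpected token format" >&2 ;;
esac
```

If the check prints a warning, re-run the `cat` command above and copy the token again.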

Install K3s Agent

Next, you will install the K3s agent on your second Linode.

  1. Connect to the Linode where you want to install the K3s agent.

  2. Open port 8472/udp on your firewall to enable Flannel VXLAN:

    Note

    Replace 192.0.2.0 with the IP address of your K3s Server Linode.

    As detailed in Rancher’s Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.

    sudo ufw allow from 192.0.2.0 to any port 8472 proto udp
  3. (Optional) Open port 10250/tcp on your firewall to utilize the metrics server:

    sudo ufw allow 10250/tcp
  4. Set environment variables used for installing the K3s agent:

    Note
    Replace 192.0.2.0 with the IP address of your K3s Server Linode and abcdefABCDEF0123456789::server:abcdefABCDEF0123456789 with its access token.
    export K3S_KUBECONFIG_MODE="644"
    export K3S_NODE_NAME="k3s-agent-1"
    export K3S_URL="https://192.0.2.0:6443"
    export K3S_TOKEN="abcdefABCDEF0123456789::server:abcdefABCDEF0123456789"
  5. Execute the following command to install the K3s agent:

    curl -sfL https://get.k3s.io | sh -
  6. Verify the status of the K3s agent:

    sudo systemctl status k3s-agent

Manage K3s

Your K3s installation includes kubectl, a command-line interface for managing Kubernetes clusters.

From your K3s Server Linode, use kubectl to get the details of the nodes in your K3s cluster.

kubectl get nodes

The expected output is similar to:

NAME           STATUS   ROLES    AGE   VERSION
k3s-server-1   Ready    master   95s   v1.18.2+k3s1
k3s-agent-1    Ready    <none>   21s   v1.18.2+k3s1
Note
To manage K3s from outside the cluster, copy the contents of /etc/rancher/k3s/k3s.yaml from your K3s Server Linode to ~/.kube/config on an external machine where you have installed kubectl, replacing 127.0.0.1 with the IP address of your K3s Server Linode.
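The rewrite described in the note above can be sketched as follows. This is a minimal illustration of the address substitution only; `192.0.2.0` is a placeholder for your K3s Server Linode's IP, and in practice you would first copy the file itself (for example with `scp`) before editing it.

```shell
# Placeholder for your K3s Server Linode's IP address.
SERVER_IP="192.0.2.0"
# The copied k3s.yaml points kubectl at the local API server, e.g.:
LINE="    server: https://127.0.0.1:6443"
# Rewrite the loopback address to the remote server's IP.
# (On a real file, `sed -i` performs the same substitution in place.)
echo "$LINE" | sed "s/127\.0\.0\.1/${SERVER_IP}/"
```

After the substitution, `kubectl` on the external machine sends requests to your K3s Server Linode instead of localhost.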

Test K3s

Here, you will test your K3s cluster with a simple NGINX website deployment.

  1. On your K3s Server Linode, create a manifest file named nginx.yaml and add the following text, which describes a single-instance NGINX deployment exposed to the public through a K3s service load balancer:

    File: nginx.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      ports:
        - protocol: TCP
          port: 8081
          targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
  2. Save and close the nginx.yaml file.

  3. Deploy the NGINX website on your K3s cluster:

    kubectl apply -f ./nginx.yaml

    The expected output is similar to:

    deployment.apps/nginx created
    service/nginx created
  4. Verify that the pods are running:

    kubectl get pods

    The expected output is similar to:

    NAME                    READY   STATUS    RESTARTS   AGE
    svclb-nginx-c6rvg       1/1     Running   0          21s
    svclb-nginx-742gb       1/1     Running   0          21s
    nginx-cc7df4f8f-2q7vf   1/1     Running   0          22s
  5. Verify that your deployment is ready:

    kubectl get deployments

    The expected output is similar to:

    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/1     1            1           57s
  6. Verify that the load balancer service is running:

    kubectl get services nginx

    The expected output is similar to:

    NAME       TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)          AGE
    nginx      LoadBalancer   10.0.0.89     192.0.2.1         8081:31809/TCP   33m
  7. In a web browser, navigate to the IP address listed under EXTERNAL-IP in your output, appending the port number :8081, to reach the default NGINX welcome page.

  8. Delete your test NGINX deployment:

    kubectl delete -f ./nginx.yaml

Tear Down K3s

To uninstall your K3s cluster:

  1. Connect to your K3s Agent Linode and run the following commands:

    sudo /usr/local/bin/k3s-agent-uninstall.sh
    sudo rm -rf /var/lib/rancher
  2. Connect to your K3s Server Linode and run the following commands:

    sudo /usr/local/bin/k3s-uninstall.sh
    sudo rm -rf /var/lib/rancher
