Building a CD Pipeline Using LKE (Part 10): Installing metrics-server


Watch the Presentation: Register to watch this workshop, free of charge.

Slide deck: Cloud Native Continuous Deployment with GitLab, Helm, and Linode Kubernetes Engine: Installing metrics-server (Slide #152)

Note
Linode has updated various Helm commands in this guide since the above slide deck's original publication. As a result, the commands in this guide may differ from those shown in the slide deck.

Installing metrics-server

Now that there is an application running on our Kubernetes cluster, the next step is to collect metrics on the resources being used. This part covers installing and using metrics-server as a basic data collection tool.

Presentation Text

Here’s a copy of the text contained within this section of the presentation. A link to the source file can be found within each slide of the presentation. Some formatting may have been changed.

Installing metrics-server

  • We’ve installed a few things on our cluster so far
  • How many resources (CPU, RAM) are we using?
  • We need metrics!
  • If metrics-server is installed, we can get Node metrics like this: kubectl top nodes
  • At the moment, this should show us error: Metrics API not available (see the example after this list)
  • How do we fix this?
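Before metrics-server is installed, the command above fails with the error quoted in the list. The output below is illustrative; exact wording can vary slightly between Kubernetes versions:

    kubectl top nodes
    error: Metrics API not available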

Many ways to get metrics

  • We could use a SaaS like Datadog, New Relic…
  • We could use a self-hosted solution like Prometheus
  • Or we could use metrics-server
  • What’s special about metrics-server?

Pros/cons

  • Cons:
    • no data retention (no historical data, just instant numbers)
    • only CPU and RAM of nodes and pods (no disk or network usage or I/O…)
  • Pros:
    • very lightweight
    • doesn’t require storage
    • used by Kubernetes autoscaling

Why metrics-server

  • We may install something fancier later (think: Prometheus with Grafana)
  • But metrics-server will work in minutes
  • It will barely use resources on our cluster
  • It’s required for autoscaling anyway

How metrics-server works

  • It runs a single Pod
  • That Pod will fetch metrics from all our Nodes
  • It will expose them through the Kubernetes API aggregation layer (we won’t say much more about that aggregation layer; that’s fairly advanced stuff!)

Installing metrics-server

  • In a lot of places, this is done with a little bit of custom YAML (derived from the official installation instructions)

  • We’re going to use Helm one more time:

    helm upgrade --install metrics-server metrics-server/metrics-server \
      --create-namespace --namespace metrics-server \
      --set apiService.create=true \
      --set "args={--kubelet-insecure-tls=true,--kubelet-preferred-address-types=InternalIP}"
    
  • What are these options for?

Note

Per the instructions provided by ArtifactHub, you may need to add the metrics-server repository to Helm prior to installation:

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
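If the repository was added earlier, refreshing the local chart index ensures the latest chart version is installed:

helm repo update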

Installation options

  • apiService.create=true

    register metrics-server with the Kubernetes aggregation layer

    (create an entry that will show up in kubectl get apiservices; an example check follows this list)

  • kubelet-insecure-tls=true

    when connecting to nodes to collect their metrics, don’t check kubelet TLS certs

    (because most kubelet certs include the node name, but not its IP address)

  • kubelet-preferred-address-types=InternalIP

    when connecting to nodes, use their internal IP address instead of node name

    (because the latter requires an internal DNS, which is rarely configured)
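To confirm that the registration worked, we can look for the metrics API group among the cluster's API services. The service name and age shown below are illustrative (current chart versions register v1beta1.metrics.k8s.io):

    kubectl get apiservices | grep metrics
    # expected output, roughly:
    # v1beta1.metrics.k8s.io   metrics-server/metrics-server   True   2m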

Testing metrics-server

  • After a minute or two, metrics-server should be up
  • We should now be able to check Nodes resource usage: kubectl top nodes
  • And Pods resource usage, too: kubectl top pods --all-namespaces
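Once metrics are flowing, both commands return a short table per node and per pod. The names and numbers below are placeholders, shown only to illustrate the output format:

    kubectl top nodes
    # NAME                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    # lke12345-67890-abcdef123456   120m         6%     1600Mi          40%

    kubectl top pods --all-namespaces
    # NAMESPACE        NAME                              CPU(cores)   MEMORY(bytes)
    # metrics-server   metrics-server-6d94bc8694-xyz12   4m           17Mi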

Keep some padding

  • The RAM usage that we see should correspond more or less to the Resident Set Size
  • Our pods also need some extra space for buffers, caches…
  • Do not aim for 100% memory usage!
  • Some more realistic targets:
    • 50% (for workloads with disk I/O and leveraging caching)
    • 90% (on very big nodes with mostly CPU-bound workloads)
    • 75% (anywhere in between!)
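As a rough worked example (the deployment name and figures here are hypothetical): if kubectl top pods reports about 190Mi for a pod whose memory request is 256Mi, that pod sits near the 75% target above. Requests can be adjusted in place, for instance:

    # hypothetical deployment name; tune requests so observed usage lands near the targets above
    kubectl set resources deployment/my-app --requests=memory=256Mi,cpu=100m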
