Docs as Code at Linode

Hi! My name is Nathan, and I’m a member of the technical writing team at Linode. One of our primary jobs is to contribute to and maintain the library of guides and tutorials at https://www.linode.com/docs, where over 1,400 articles are published. In order to uphold the quality of the content, and to encourage collaboration in the writing process, we have adopted a docs as code methodology for our authoring workflow.

Docs as code is a methodology where the tools you use for writing documentation are the same as the tools used for writing software. This includes:

  • Using version control software
  • Writing plain-text documentation files
  • Running automated tests
  • Practicing continuous integration and continuous delivery

Linode’s technical writing team has also extended this methodology by taking responsibility for the cloud infrastructure that hosts our docs website.

Why Docs as Code?

Following these practices offers a range of benefits:

  • Working with these technologies helps form a tighter bond between the technical writing team and Linode’s development and engineering teams.
  • Other teams at Linode can contribute to various aspects of our process. For example: the front-end team can help update the theming/presentation for the docs website using the tools they already work with, and a member of the engineering team might write new automated tests for the docs library.
  • Our team frequently writes guides and tutorials about the technologies used in our process. Implementing these for ourselves gives us a better understanding of them, which improves our ability to explain them.
  • We also need to document Linode’s products accurately, which can involve reviewing Linode’s codebase. Fluency in the languages and tools used for these projects can help us better understand them.

Our Implementation

Because implementations of this methodology can vary between different organizations, I’d like to offer a detailed outline of our process:

  1. Authoring: Our technical writers author new guides in Markdown. Markdown is used to represent rich text in a plain-text file, and it can be compiled into HTML by a range of tools. Markdown has near-universal adoption in software development; for example, GitHub README files are written in Markdown. Writing in Markdown also means that you can use any plain-text editor you prefer, from modern desktop editors like Visual Studio Code, to Emacs and Vim. We know that people have strong opinions about their preferred text editors, and this flexibility helps more people contribute to our library.
  2. Local Site Previews: The Markdown files in our library are compiled into their final HTML representation with a static site generator. Static sites are collections of pre-built pages that do not rely on a database to be rendered when they are requested (as is the case with a CMS like WordPress). Because of this, a static site is very quick to load. A static site generator renders a static site by combining your content files (e.g. Markdown) with a theme that you specify.

    Linode’s docs website uses Hugo, which is one of the most popular static site generators:
    • Hugo offers well-documented installation methods for many operating systems, which makes onboarding new team members straightforward.
    • Hugo includes a local development web server, so authors can render the site on their computers while writing new guides. This server will also live-reload whenever the author saves their file.
    • Hugo is extremely fast, which matters for a library of our size: my MacBook Pro compiles all 1,400 guides in approximately 3 seconds, and the live-reloading function is nearly instantaneous.
    • Hugo’s shortcodes also help us enhance our guides with features that Markdown doesn’t afford, including highlighted notes and line-numbered code snippets.
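As a toy illustration of what any static site generator does at its core (this sketch is hypothetical and is not how Hugo itself works internally), the job is to merge content with a theme template ahead of time, so nothing needs to be assembled when a page is requested:

```python
# Toy sketch of the core idea behind a static site generator: combine a
# content file with a theme template to produce a finished HTML page.
# The template and function below are hypothetical, not Hugo internals.
from string import Template

THEME = Template(
    "<html><head><title>$title</title></head>"
    "<body><main>$body</main></body></html>"
)

def render_page(title: str, body_html: str) -> str:
    """Merge content into the theme, yielding a pre-built static page."""
    return THEME.substitute(title=title, body=body_html)

print(render_page("Docs as Code", "<p>No database needed at request time.</p>"))
```

Because every page is pre-built like this, the web server only has to hand back static files, which is what makes the site so quick to load.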

  3. Collaboration: The docs library is stored in a public Git repository hosted on GitHub, and each author maintains a fork of this repository. When an author is finished drafting a guide, they commit their changes, push the changes up to their fork, and then open a pull request against the main repository.

    Our writing process has two separate team members perform a technical edit and then a copy edit on any new guides or guide updates. These team members will download the pull request’s branch, make and commit their changes, and then push them back up to the branch on GitHub. Git is another near-universal tool for development, and using it allows us to leverage some standard best practices for collaborating, like the gitflow workflow.

    GitHub’s popularity also means that it has a large number of useful integrations. In particular, we use Netlify to generate automatic, publicly-accessible previews for each pull request. We often need to ask for feedback from Linodians in different departments, and they can view these shareable Netlify links without having to clone our docs repository and install Hugo on their laptops.

    Lastly, hosting in a public repository opens our library to outside contributors, which we welcome.
  4. Testing: Whenever a pull request is submitted or updated, a series of tests runs on the PR’s content. The tests run on Travis CI, a continuous integration service that also integrates with GitHub. If any test fails, the pull request is temporarily blocked from being merged:
    • Guide content is checked for spelling and for style (e.g. proper capitalization of technical terms). We use Vale to perform these tasks, which is an open-source linter that’s designed to work on prose. When we first integrated Vale, it reported over 500,000 spelling errors in our library. While a bit embarrassing, knowing this number and then being able to act on it gave us a big boost in confidence in our content and in our new publishing system.
    • We check for potential broken links between guides by using Scrapy, an open-source Python framework that scrapes content from websites. This test was written in collaboration with a member of Linode’s engineering team. When first implementing Scrapy, we similarly found a number of broken links that we could correct.
    • Another Python script checks that the front matter metadata for guides is valid and free of syntax errors. A broken front-matter section can cause issues when building the site, so having this validated means that we can be sure the site will render when updating the production web servers.
  5. Publishing and Hosting: Updating the docs website is handled automatically by a collection of scripts that are triggered from certain events on GitHub:
    • Whenever content is merged into the master branch of the main docs repository, a webhook notification is sent to a staging web server, which runs on a Linode instance. This staging server then pulls the master branch from GitHub and builds the site with Hugo, with the web server’s document root as the target for the rendered site. We view this staging site and confirm that the content appears as expected.

      The staging server wasn’t initially a part of our workflow; it was built out after an incident temporarily broke our CSS/styling during a production site update. In short, Netlify had correctly rendered a preview of the new site release, but it failed to catch a styling issue. This was because Netlify used our ‘development’ build pipeline instead of our ‘production’ build pipeline (which minifies our CSS and other assets). The new staging server is set up with the same configuration as our production server, so it also uses the production build pipeline, and it will catch errors like this.
    • To update the production site, we create a new release tag on GitHub. This triggers another webhook notification that’s sent to the production web server. This server runs a script similar to the staging server, but it pulls down the content from the new release tag instead.
    • Publishing automatically minimizes the human error that could creep in if we performed this process by hand. The staging and production servers are both under configuration management through Salt formulas, which also minimizes human error when they need software maintenance or updates. Salt is used by other infrastructure projects at Linode too, so our docs web servers can be managed alongside other parts of the fleet.
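One detail worth sketching from the steps above is how a deploy script can authenticate an incoming webhook before it pulls anything and rebuilds the site. GitHub signs each webhook payload with HMAC-SHA256 and sends the digest in the X-Hub-Signature-256 header; the secret and payload below are hypothetical, and this is an illustration rather than our actual deploy code:

```python
# Sketch of authenticating a GitHub webhook before triggering a rebuild.
# GitHub signs each payload with HMAC-SHA256 and reports the digest in the
# X-Hub-Signature-256 header; the secret and payload here are hypothetical.
import hashlib
import hmac

def signature_is_valid(secret: bytes, payload: bytes, header_value: str) -> bool:
    """Compare GitHub's reported signature against one we compute ourselves."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, header_value)

secret = b"hypothetical-shared-secret"
payload = b'{"ref": "refs/heads/master"}'
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(signature_is_valid(secret, payload, header))
```

Only requests whose signature checks out would go on to pull the branch (or release tag) and run the Hugo build.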

Adopting this methodology has helped us greatly streamline our workflow, but we are always working to iterate and improve on it. If you have any suggestions for updates we can make, let us know! I’m on the Write the Docs Slack as nmelehan, as are several other team members. If you’d like to read more about docs as code, I’d recommend Eric Holscher’s guide on the Write the Docs website.

Comments (3)

  1. Ken

    What engine do you use for allowing users to search through your documents from the https://www.linode.com/docs/ page?

    • Nathan Melehan

      Hi Ken –

      We use Algolia. We update our index in Algolia from the scripts that we run in step 5 (publishing and hosting). To update that index, we first have Hugo output a list of our guides to a JSON file (see Hugo – Custom Output Formats), and then we ship that data to Algolia’s API.

  2. James S

    Excellent write-up, Nathan. We’re using a similar Docs as Code approach for our documentation for Tugboat at https://docs.tugboat.qa. Our GitHub repository is at https://github.com/TugboatQA/docs. Instead of Netlify, we’re using Tugboat itself to build the previews of the site. That way, anyone can create a pull request with a fix and preview that change right away.

    We’re huge fans of Linode (our hosting provider since day 1!), so it’s nice to see you all validating our approach as well!

    Thanks for sharing all the detail into your approach. Very helpful.
