A sailing ship and an airborne whale, evocative of the branding of Kubernetes and Docker.

As software engineers we like to make computers do things by writing code. If we encounter a vexing situation, our first instinct is to automate it, ideally through the typing of esoteric sequences of text. Kubernetes is this impulse taken to its logical conclusion in the domain of application infrastructure.

If you're using a Kubernetes cluster, you'll never need to SSH into a Linux server, install dependencies, reconcile differences between servers, etc. Everything is done by writing configuration files and sending them to the Kubernetes cluster.

The "how" is the subject of the rest of this document, but first let's address the "why". If you've always managed your applications manually and it's worked for you, skepticism about learning a whole new way of doing things is understandable.

Think about all the things you would need to do to set up an application manually. Let's say it's a single-page app that sits on top of some microservices, which are collected under an API gateway. To stand this up you'd need to:

  • Get all the code and/or compiled artifacts
  • Have configuration files for each service and web server
  • Make sure all those things are using the correct hosts and ports so as to communicate with each other
  • Perform sanity checks and other tests to make sure it works
  • Then there are nice-to-haves that become less nice and more mandatory as your application gets bigger: service discovery, orchestration, self-healing, and log aggregation

There are a lot of things you have to know, a lot of things you have to do manually, and each of them is an opportunity for human error to creep in.

The magic of Kubernetes is that it reduces all of this to code, specifically YAML config files. You write configs that declare what your application should look like, and the Kubernetes cluster makes it so. The implementation details are handled by the Kubernetes software, as set up by the cluster admin.
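As a preview of what such a config looks like, here is a minimal sketch of a Deployment manifest. The field names are real Kubernetes ones, but the `my-web` name is made up for illustration; don't worry about the details yet, as later parts of this document explain them properly:

```yaml
# Hypothetical Deployment manifest; "my-web" is an illustrative name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web
spec:
  replicas: 2                 # ask the cluster to run two identical copies
  selector:
    matchLabels:
      app: my-web
  template:
    metadata:
      labels:
        app: my-web
    spec:
      containers:
        - name: nginx
          image: nginx        # the stock nginx image
          ports:
            - containerPort: 80
```

Sending this file to the cluster (typically with `kubectl apply -f <file>`) is all it takes; the cluster figures out where to run the containers and keeps them running.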

When shouldn't Kubernetes be used?

If you're

  • in a small organization
  • without access to an existing Kubernetes cluster
  • and unable to use Kubernetes through a public cloud (e.g. AWS, Azure, or GCP) for budgetary or policy reasons

then Kubernetes is inadvisable. Setting up your own Kubernetes cluster (as opposed to using an existing one, which this document focuses on) is an intensive project, and the effort of a small engineering team is probably better directed elsewhere.

If affordability is your concern, there are managed platform options (such as AWS Amplify or ECS) that can get you some of the benefits of Kubernetes at a lower price. Note that there are ways to use Kubernetes that will result in less resource usage (and so lower billing), principally through horizontal autoscaling. But this is an intermediate concept, and is not covered in this document so as to maintain a beginner-level scope.

What this document is, and isn't

This document is geared towards application developers who already know the basics of REST, but don't necessarily have any knowledge of containerization, Docker, or Kubernetes.

This document's goal is not to provide a comprehensive reference of Docker and Kubernetes capabilities. It doesn't seek to convey all there is to know about any topic. Rather, it provides a general walkthrough that will empower beginners to do useful things with Kubernetes, including, by the end, deploying and maintaining whole applications. It seeks to be concise, and omits rabbit holes of detail that would not be helpful to the target audience at this stage in their journey. It still gives details when these are needed to create a foundation of best practices and enable readers to immediately contribute to their projects.

The examples used in this document refer to technologies like nginx and Redis. You don't need to be fluent in these, but a brief overview of them may be helpful to your understanding:

  • nginx is a web server that can serve static content (e.g. HTML pages) and can also proxy to other HTTP services. By default, if you make a request to an nginx server's port 80, it'll return a "Welcome to nginx" page, which the examples here will make frequent use of.
  • Redis is an in-memory database that stores data in key-value pairs. No knowledge of how to insert or retrieve data in Redis is necessary, as the example applications are already set up to do this.