What is Kubernetes? An Introductory Overview
In part 1 of my webinar series on Kubernetes, I introduced Kubernetes at a high level with hands-on demos aiming to answer the question, “What is Kubernetes?” After polling our audience, we found that most of the webinar attendees had never used Kubernetes before, or had only been exposed to it through demos. This post is meant to complement the session with more introductory-level information about Kubernetes.
Containers have been helping teams of all sizes solve issues with consistency, scalability, and security. Container technologies such as Docker let you separate the application from the underlying infrastructure. Getting the most value out of that separation requires some new tools, and one of the most popular tools for container management and orchestration is Kubernetes: an open-source container orchestration tool designed to automate deploying, scaling, and operating containerised applications.
Kubernetes was born from Google’s 15 years of experience running production workloads. It is designed to scale from tens of containers to thousands, or even millions. Kubernetes is also container-runtime agnostic: you can actually use Kubernetes to manage rkt (Rocket) containers today. This article covers the basics of Kubernetes, but you can deep dive into the tool with QA’s Intro to Kubernetes Learning Path.
What can Kubernetes do?
Kubernetes’ features provide everything you need to deploy containerised applications. Here are the highlights:
- Container Deployments and Rollout Control: Describe your containers and how many you want with a “Deployment.” Kubernetes will keep those containers running and handle deploying changes (such as updating the image or changing environment variables) with a “rollout.” You can pause, resume, and roll back changes as you like (see the example manifest after this list).
- Resource Bin Packing: You can declare minimum and maximum compute resources (CPU and memory) for your containers. Kubernetes will slot your containers into wherever they fit. This increases your compute efficiency and ultimately lowers costs.
- Built-in Service Discovery and Autoscaling: Kubernetes can automatically expose your containers to the internet or to other containers in the cluster. It automatically load balances traffic across matching containers. Kubernetes supports service discovery via environment variables and DNS, out of the box. You can also configure CPU-based autoscaling for containers for increased resource utilisation.
- Heterogeneous Clusters: Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-premises servers, or bare metal in your datacentre. Simply choose the composition according to your requirements.
- Persistent Storage: Kubernetes includes support for attaching persistent storage to otherwise stateless application containers. There is support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and many, many more.
- High Availability Features: Kubernetes is designed for planet scale, which requires special attention to high-availability features such as multi-master setups and cluster federation. Cluster federation links clusters together so that if one cluster goes down, containers can automatically move to another cluster.
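To make the first two features concrete, here is a minimal sketch of a Deployment manifest. It is illustrative only: the names, image, and resource values are placeholders, not the webinar demo’s manifests.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # placeholder name
spec:
  replicas: 3                 # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # changing this image triggers a rollout
          resources:
            requests:         # the scheduler bin-packs pods using these values
              cpu: 100m
              memory: 128Mi
            limits:           # hard caps enforced at runtime
              cpu: 250m
              memory: 256Mi
```

Applying a change to the pod template (a new image tag, for example) starts a rolling update that you can pause, resume, or undo.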
These key features make Kubernetes well suited to running different application architectures, from monolithic web applications, to highly distributed microservice applications, to batch-driven applications.
How does Kubernetes compare to other tools?
Container orchestration is a deservedly popular trend in cloud computing. The industry first focused on driving container adoption, then turned to deploying containers in production at scale. There are many useful tools in this area, so let’s explore a few of them by comparing their features to Kubernetes.
The key players here are Apache Mesos/DCOS, Amazon’s ECS, and Docker’s Swarm Mode. Each has its own niche and unique strengths.
- DCOS (or DataCenter OS) is similar to Kubernetes in many ways. DCOS pools compute resources into a uniform task pool. The big difference is that DCOS targets many different types of workloads, including, but not limited to, containerised applications. This makes DCOS attractive to organisations that are not using containers for all of their applications. DCOS also includes a kind of package manager to easily deploy systems like Kafka or Spark. You can even run Kubernetes on DCOS, given its flexibility for different types of workloads.
- ECS is AWS’s entry in container orchestration. ECS allows you to create pools of EC2 instances and uses API calls to orchestrate containers across them. It’s only available inside AWS and is less feature-complete than open-source solutions. It may be useful for those already deep into the AWS ecosystem.
- Docker’s Swarm Mode is the official orchestration tool from Docker Inc. Swarm Mode builds a cluster from multiple Docker hosts. It offers similar features to Kubernetes or DCOS, with one notable exception: Swarm Mode is the only tool that works natively with the docker command. This means that associated tools like docker-compose can target Swarm Mode clusters without any changes.
Here are my general recommendations:
- Use Kubernetes if you’re working exclusively with containerised applications, whether or not they’re Docker containers.
- If you have a mix of container and non-containerised applications, use DCOS.
- Use ECS if you enjoy AWS products and first-party integrations.
- If you want a first-party solution or direct integration with the Docker toolchain, use Docker Swarm.
Now you have some context and understanding of what Kubernetes can do for you. The demo in the webinar covered the key features. Here, I’ll cover some of the details we didn’t have time for in the webinar session. We will start by introducing some Kubernetes vocabulary and architecture.
What is Kubernetes?
Kubernetes is a distributed system, and it introduces its own vernacular to the orchestration space, so understanding that vernacular and the architecture is crucial.
Terminology
Kubernetes “clusters” are composed of “nodes.” The term “cluster” refers to the nodes in aggregate: the entire running system. A node is a worker machine within Kubernetes (previously known as a “minion”). A node may be a VM or a physical machine. Each node has software configured to run containers managed by Kubernetes’ control plane. The control plane is the set of APIs and software (such as kubectl) that Kubernetes users interact with. The control plane services run on master nodes. Clusters may have multiple masters for high-availability scenarios.
The control plane schedules containers onto nodes. In this context, the term “scheduling” does not refer to time. Think of it from a kernel perspective: The kernel “schedules” processes onto the CPU according to many factors. Certain processes need more or less compute, or have different quality-of-service rules. The scheduler does its best to ensure that every process gets CPU time. In this case, scheduling means deciding where to run containers according to factors like per-node hardware constraints and the requested CPU/Memory.
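As a sketch of what the scheduler works with, the relevant inputs live in the pod spec: the requested CPU and memory, plus optional placement constraints such as a nodeSelector. The name, image, and label below are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo     # illustrative name
spec:
  nodeSelector:
    disktype: ssd           # only consider nodes labelled disktype=ssd
  containers:
    - name: app
      image: nginx:1.25     # placeholder image
      resources:
        requests:
          cpu: 500m         # the scheduler only picks nodes with this much spare CPU
          memory: 256Mi
```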
Architecture
Containers are grouped into “pods.” Pods may include one or more containers. All containers in a pod run on the same node. The “pod” is the lowest building block in Kubernetes. More complex (and useful) abstractions come on top of “pods.”
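For illustration, here is a pod manifest with two containers; both are always co-scheduled onto the same node and can reach each other over localhost. The names and images are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod        # hypothetical name
spec:
  containers:
    - name: server               # main application container
      image: example/server:1.0  # hypothetical image
    - name: log-shipper          # sidecar sharing the pod's network and lifecycle
      image: example/shipper:1.0
```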
“Services” define networking rules for exposing pods to other pods or exposing pods to the public internet. Kubernetes uses “deployments” to manage deploying configuration changes to running pods and horizontal scaling. A deployment is a template for creating pods. Deployments are scaled horizontally by creating more “replica” pods from the template. Changes to the deployment template trigger a rollout. Kubernetes uses rolling deploys to apply changes to all running pods in a deployment.
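A minimal sketch of a Service, assuming pods labelled app: web as in the earlier example; the ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative name
spec:
  selector:
    app: web               # routes traffic to pods carrying this label
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # port the container listens on
  type: ClusterIP          # internal only; LoadBalancer would expose it publicly
```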
Kubernetes provides two ways to interact with the control plane. The kubectl command is the primary way to do anything with Kubernetes. There is also a web UI with basic functionality.
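A few everyday kubectl commands, assuming a configured cluster (my-pod and my-app are placeholder names):

```sh
kubectl get nodes                              # list cluster nodes
kubectl get pods                               # list pods in the current namespace
kubectl apply -f deployment.yaml               # create or update resources from a manifest
kubectl logs my-pod                            # print a container's logs
kubectl scale deployment/my-app --replicas=3   # scale a deployment horizontally
kubectl rollout status deployment/my-app       # watch a rollout progress
```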
Most of these terms were introduced in some way during the webinar. I suggest reading the Kubernetes glossary for more information.
What is Kubernetes? Demo walkthrough
In the webinar demo, we showed how to deploy a sample application. The sample application is a stripped-down microservice. It includes just enough to demonstrate features that real applications require.
There wasn’t enough time during the session to include everything I planned, so here is an outline of what did make it into the demo:
- Interacting with Kubernetes with kubectl
- Creating namespaces
- Creating deployments
- Connecting pods with services
- Service discovery via environment variables
- Horizontally scaling with replicas
- Triggering a new rollout
- Pausing and resuming a rollout
- Accessing container logs
- Configuring probes
I suggest keeping this post handy while watching the webinar for greater insight into the demo.
The “server” container is a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the counter. The counter is stored in redis. The “poller” container continually makes the GET request to the server to print the counter’s value. The “counter” container starts a loop and makes a POST request to the server to increment the counter with random values.
I used Google Container Engine for the demo. You can follow along with Minikube if you like. All you need is a running Kubernetes cluster and access to the kubectl command.
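If you go the Minikube route, something like this gets you a local cluster, assuming Minikube and kubectl are installed:

```sh
minikube start        # create a single-node local cluster
kubectl cluster-info  # confirm kubectl can reach the cluster
kubectl get nodes     # should list one node in the Ready state
```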
How do I deploy to Kubernetes?
First, I created a Kubernetes namespace to hold all the different Kubernetes resources for the demo. While not strictly required in this case, I opted for this because it demonstrates how to create a namespace, and using namespaces is a general best practice.
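Creating and using a namespace looks like this; the name demo is illustrative:

```sh
kubectl create namespace demo   # create the namespace
kubectl get pods -n demo        # target it with -n on subsequent commands
```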
I created a deployment for redis with one replica. There should only be one redis container running; running multiple replicas, and thus multiple databases, would create multiple sources of truth. This is a stateful data tier, and it does not scale horizontally. Then a data tier service was created. It matches the containers in the data pod and exposes them via an internal IP and port.
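Here is a hedged sketch of what those two resources might look like. The names, labels, namespace, and image tag are illustrative rather than the exact demo manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-tier
  namespace: demo
spec:
  replicas: 1               # a single redis replica: one source of truth
  selector:
    matchLabels:
      tier: data
  template:
    metadata:
      labels:
        tier: data
    spec:
      containers:
        - name: redis
          image: redis:6    # placeholder tag
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: data-tier
  namespace: demo
spec:
  selector:
    tier: data              # matches the pods created above
  ports:
    - name: redis
      port: 6379
```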
The same process repeats for the app tier. A Kubernetes deployment describes the server container. The redis location is specified via an environment variable. Kubernetes sets environment variables for each service on all containers in the same namespace. The server uses REDIS_URL to specify the host, port, and other information. Kubernetes supports environment variable interpolation with $() syntax, and the demo shows how to compose application-specific environment variables from the ones Kubernetes provides. An app tier service is created as well.
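Assuming the data tier service is named data-tier with a port named redis, as in the sketch above, Kubernetes injects DATA_TIER_SERVICE_HOST and DATA_TIER_SERVICE_PORT_REDIS into containers in the namespace, so the composition inside the server container’s spec might look like:

```yaml
env:
  - name: REDIS_URL
    # Composed from the service discovery variables Kubernetes injects
    value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
```

One caveat worth knowing: service environment variables are only present in containers started after the service exists, so create services before the pods that depend on them.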
Next comes the support tier. The support tier includes the counter and poller. Another deployment is created for this tier. Both containers find the server container via the API_URL environment variable. This value is composed of the app tier service host and port.
At this point, we have a running application. We can access logs via the kubectl logs command, and we can scale the application up and down. The demo configures both types of Kubernetes probes (aka “health checks”). The liveness probe tests that the server accepts HTTP requests. The readiness probe tests that the server is up and has a connection to redis and is thus “ready” to serve API requests.
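Probe configuration lives on the container spec. A hedged sketch, with illustrative paths, port, and image rather than the demo’s exact values:

```yaml
containers:
  - name: server
    image: example/server:1.0   # placeholder image
    ports:
      - containerPort: 8080
    livenessProbe:              # is the process accepting HTTP requests at all?
      httpGet:
        path: /healthz          # illustrative path
        port: 8080
      initialDelaySeconds: 5
    readinessProbe:             # is it connected to redis and ready for traffic?
      httpGet:
        path: /ready            # illustrative path
        port: 8080
      initialDelaySeconds: 5
```

A failing liveness probe causes Kubernetes to restart the container; a failing readiness probe simply removes the pod from service endpoints until it recovers.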
Security: It’s never too early to start
Even if you’re new to all of this, it’s a good idea to lay a foundation of robust security when digging into an important service like Kubernetes.
You can start with some high-level best practices such as:
- Authenticate securely: use OpenID Connect tokens, which are based on OAuth 2.0, an open and modern form of authentication.
- Role-Based Access Control (RBAC): use RBAC in Kubernetes, and everywhere in your cloud deployments, and always consider who actually needs access, especially at the admin level.
- Secure all the traffic: whether with network policies or, at the more basic pod level, via pod security contexts, it’s important to control access within the cluster and across the network (see the minimal network policy sketch after this list).
- Stay up to date: an easy habit to adopt immediately is updating at a regular cadence to help maintain your security posture.
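As a starting point for network policies, a default-deny ingress rule blocks all incoming pod traffic in a namespace until you explicitly allow it. A minimal sketch follows; the namespace name is illustrative, and note that enforcement requires a network plugin that supports NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo            # illustrative namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all ingress is denied
```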
What is Kubernetes? Part 2 – stay tuned
Part 1 of the series focused on answering the question “What is Kubernetes?” and introducing core concepts in a hands-on demo. Part 2 covers using Kubernetes in production operations. You’ll gain insight into how to use Kubernetes in a live ecosystem with real-world complications. I hope to see you there!