Kubernetes is an open-source, extensible container orchestration system that can run across clusters of machines. Initially developed by Google, with version 1.0 released in 2015, it is now maintained by the Cloud Native Computing Foundation. Originally, Kubernetes depended on Docker for containerization, but it now uses a runtime-agnostic interface, the Container Runtime Interface (CRI), supporting runtimes such as containerd and CRI-O and making it compatible with commonly used cloud platforms such as AWS and Azure. Today, it is the leading container orchestration system globally, accounting for 56% of container platforms on the public cloud.
What is server virtualization?
Server virtualization is a service that presents a complete computing environment to users regardless of the underlying hardware. It allows more secure, flexible and extensible provisioning of resources for applications and services. Virtualization allows users to access a virtual machine (VM) or service without concern for how it is actually hosted. For example, many virtual machines may be hosted on a single physical host. Conversely, a pool of virtual machines may span multiple hosts in order to make better use of resources.
How server virtualization works
Server virtualization abstracts the logical form of a computer away from a single physical host. A virtual machine presents all the aspects one would expect of a physical machine – storage, memory and CPU all serving the user through an operating system. But the user has no need to know what physical infrastructure underlies it. This allows for greater flexibility and extensibility.
From traditional to virtual deployment
Application hosting has come a long way since the days when a server was simply a piece of hardware sitting in a rack somewhere. Virtualization dates as far back as the 1960s. However, it really gained traction for web services in the early 2000s with software like VMware allowing users to create and deploy virtual machines on different host operating systems. Virtual servers soon became an invaluable resource for businesses, allowing them to extend resources with ease and migrate services at will without the headaches of hardware management. It is on the back of virtualization that the concept of the ‘cloud’ really took hold.
From virtual to container deployment
The concept of containerization builds on virtualization technologies. It represents a further refinement of the logical functions of server infrastructure provision by separating resources and services into ‘containers’. These containers act as building blocks for service provision and can be swapped in and out or extended as required.
A virtual machine may be separate from its physical infrastructure, but it retains the functionality of a full operating system and uses a corresponding bulk of resources. Containers, by contrast, represent much smaller and more streamlined units of operation. For comparison, a modern home computer can probably run three VMs concurrently, but it would happily accommodate fifty or sixty containers.
Container deployment can more accurately match the modular functionality of an app. That means that when releasing software, container deployment allows the minimum required resources to be used. The rule of thumb with containers is to start small and build up as required. A container holds its hosted application in isolation but shares its host’s OS kernel with other containers on the same infrastructure.
When to use Kubernetes
Container engines like Docker and containerd do a good job of packaging and running containers but can be difficult to manage at scale. Kubernetes is what is known as a container orchestrator. That is, it coordinates and manages defined groups of containers in what are known as Kubernetes pods. Let’s take a look at some key concepts:
- Kubernetes pods. A pod is a colocated set of containers, sharing basic features, such as IP address, namespace and storage. Containers within a pod communicate as if within a single machine.
- Replica set. This is a means of managing the lifecycle and deployment of pods. A replica set allows duplicate instances of pods to be deployed at will, providing rapid failover should one fail.
- Service. Pod IP addresses are ephemeral: a pod can be rescheduled on different infrastructure and receive a new address at any time. The service layer provides a consistent, stable interface for a set of pods, with built-in load-balancing to route requests appropriately.
- Label. A service needs to know how to find its pods. Pods advertise themselves by means of labels: key-value pairs attached to their metadata. The service’s selector matches these labels to identify which pods to route traffic to.
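The pod–service–label relationship above can be pictured in plain Python. This is a minimal sketch: the dictionaries mirror the shape of real Kubernetes manifests, but the names (`web-pod`, `web-service`) and the `matches` helper are illustrative, not part of any Kubernetes API.

```python
# A pod manifest: metadata carries the labels the pod advertises.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-pod", "labels": {"app": "web", "tier": "frontend"}},
    "spec": {"containers": [{"name": "nginx", "image": "nginx:1.25"}]},
}

# A service manifest: the selector says which pod labels to match.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-service"},
    "spec": {"selector": {"app": "web"}, "ports": [{"port": 80}]},
}

def matches(selector, labels):
    """True if every key/value pair in the selector appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# The service selects this pod because the pod carries app=web.
selected = matches(service["spec"]["selector"], pod["metadata"]["labels"])
```

Note that selection is a subset match: the pod may carry extra labels (here, `tier: frontend`) and still be selected, which is what lets one pod belong to several services.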
Benefits of Kubernetes
Kubernetes has been called a ‘platform for building platforms’. Its real benefits become apparent when hosting large applications that use a number of different services and containers.
When early container-based platforms like Heroku appeared on the scene, they marked a step-change in how apps were deployed, allowing modular and extensible service provision. The modular trend continued with AWS, allowing development and deployment teams to use only the resources they needed with payment terms to match.
However, scaling proved harder than was first imagined. Furthermore, moving between different platforms like Google Cloud or Azure turned out to be problematic as each has its own APIs and specific requirements. The idea of a full set of functionality that could be moved and extended at will was realised with Kubernetes. Kubernetes allows large-scale, complex applications to be scaled and dynamically load-balanced, avoiding bottlenecks.
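The dynamic load-balancing mentioned above can be sketched as a simple round-robin dispatcher over pod replicas. This is an assumption-laden illustration, not how kube-proxy is implemented: the endpoint addresses are made up, and a real service discovers its endpoints via label selectors rather than a hard-coded list.

```python
from itertools import cycle

# Hypothetical pod endpoints; in a real cluster these come from the
# service's label selector and change as pods are created and destroyed.
endpoints = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
backend = cycle(endpoints)

def route(_request):
    """Send each incoming request to the next pod endpoint in turn."""
    return next(backend)

# Six requests are spread evenly: each pod handles exactly two of them.
assignments = [route(f"req-{i}") for i in range(6)]
```

Even this toy version shows why the service layer matters: callers talk to one stable interface while the set of pods behind it can grow, shrink or move.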
How to use Kubernetes correctly
As an application expands, the number of containers it uses can increase massively. Kubernetes offers many advantages for streamlined container deployment, but some care is needed to get the best out of it.
As with any extensible cloud-based server architecture, it pays to use the best development and deployment methodologies. Using test-driven development (TDD) will help to ensure that deployed code is properly release-ready and facilitate continuous integration and deployment (CI/CD). This removes a lot of the back-and-forth between QA and developers and allows deployment to be largely automated, which is essential for effective Kubernetes use.
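The test-first discipline described above can be reduced to a tiny sketch: the test exists before the code it checks, and a CI pipeline would run it on every commit before any container image is built. The health-check function and its threshold here are hypothetical examples, not a real deployment gate.

```python
def healthy(status_code):
    """Release gate: a service counts as deployable only if its
    health-check endpoint answers with HTTP 200."""
    return status_code == 200

# Written first, under TDD: the implementation above exists to make this pass.
def test_healthy():
    assert healthy(200)
    assert not healthy(503)

test_healthy()
```

In a CI/CD pipeline the same idea scales up: a failing test stops the build, so only release-ready code ever reaches the cluster.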
It’s also important to keep application architecture as modular as possible to get the best out of container deployment. A modular approach allows you to identify bottlenecks for more efficient and targeted resource scaling.
Knowing when to introduce Kubernetes is also valuable. Kubernetes does not simplify container deployment by magic, so you’ll need more human resources to configure and manage your Kubernetes set-up. Of course, the benefits can, and should, easily outweigh the costs, but bear in mind that it might not be worthwhile for simpler setups.
Democratising containers: Kubernetes and PaaS
So how does Kubernetes relate to PaaS architectures? PaaS is typically used in scenarios where large and complex applications can benefit from the bundled provision of services – much like Kubernetes. So is Kubernetes PaaS?
Is Kubernetes PaaS, SaaS or IaaS?
Platform as a service (PaaS), software as a service (SaaS) and infrastructure as a service (IaaS) – these days it seems that everything is offered as a service. PaaS was perhaps the most obviously useful to developers as it emerged around 2010. PaaS freed developers from the troublesome admin of deployment by automatically handling configuration, security and scaling of hosting architecture. However, it also led to a certain amount of inflexibility at the platform level.
Kubernetes can reasonably be called a variant of PaaS, but it also hands back much more control to engineers over the configuration and scaling of the platform. This ‘democratisation of containers’ was originally led by Docker and is now being advanced by Kubernetes. It puts container config back in the hands of coders rather than administrators, tying deployment and development more closely together.
What will come next for Kubernetes?
This adoption of Kubernetes by PaaS has led to what can be called CaaS (containers as a service). Developers can no longer deploy and forget, since the very architecture of their apps is now mirrored in their deployment architecture. Wrapping containers in Kubernetes pods allows them to be deployed and exposed as services.
The future, then, most likely lies in further standardization of deployment best practices. Many now talk of ‘platform as a product’, meaning that internally used deployment platforms are built to a standard that could be marketed externally. It also means documentation, testing, interaction protocols and more are refined with a focus on quality.
It is possible to identify certain cyclical trends in deployment practices, as technologies are abstracted away from development concerns, only to be later reintegrated with them. With Kubernetes, development and deployment are close concerns again, with the architecture of applications moving further towards a microservice architecture.
The loose coupling of microservices does not remove complexity from applications, but it does allow for greater flexibility and scalability, as well as better alignment with functional requirements. And with these systems abstractions in place, software deployment can become even more technology-agnostic and thus future-proof.