
My Little Adventure into the World of Container Runtimes

Kubernetes (K8s) currently comes in many flavors. Out of sheer professional curiosity, I did a comparison of K3s and K8s. K3s, if you didn't know, is a fully compliant K8s distribution billed as a lightweight K8s alternative with several enhancements, such as halving the memory footprint. As part of this study, I followed the Kubeadm installation instructions to set up a K8s cluster, and I chose containerd as my container runtime. In case you're wondering, a container runtime is responsible for running containers: it abstracts away the syscalls needed to create a containerized environment. Confusing, right? That's why I didn't fully grasp the whole container runtime story at the time. Let's come back to container runtimes a little later.
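Before we do, here's a minimal sketch of roughly what that Kubeadm setup looks like, assuming containerd is already installed and running on the node. The CRI socket path and pod CIDR below are common defaults and may differ in your environment:

    # Initialize a single-node control plane with containerd as the CRI runtime.
    # The socket path is containerd's default; adjust it if yours differs.
    sudo kubeadm init \
      --cri-socket unix:///run/containerd/containerd.sock \
      --pod-network-cidr=10.244.0.0/16

    # For a single-node cluster, allow regular workloads on the control-plane
    # node (on older releases the taint key is node-role.kubernetes.io/master).
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-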

Setting up and installing Kubeadm went smoothly, and I had a single-node K8s cluster up and running. Since this was a comparison, I followed the K3s instructions to set up a single-node cluster as well, which was also straightforward. At this point, I had both the K8s and K3s single-node clusters going. K3s, however, ships with an embedded SQLite database (it has a shim layer that allows different databases to be used), while K8s uses etcd as its data store. So my goal became to change the K8s cluster's data store to SQLite. To do this, I used a project called Kine, which allows the use of databases other than etcd with Kubeadm. The Kine usage documentation used Docker, and as I mentioned earlier, I had chosen containerd as the container runtime while installing Kubeadm. At this juncture, I needed to figure out how to replicate the Docker CLI commands with containerd. While doing this, I became very intrigued by container runtimes and wanted to find out more about containerd and its relationship with Docker.
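Before moving on, here's a rough sketch of the two setup paths described above. The K3s one-liner is the standard install script; the Kine portion is illustrative only, assuming Kine's default embedded SQLite backend and a hypothetical kubeadm configuration that treats Kine as an external etcd endpoint (the listen address and config values are placeholders):

    # Single-node K3s cluster: one command installs and starts the server.
    curl -sfL https://get.k3s.io | sh -

    # Kine serves an etcd-compatible API (port 2379 by default) and, with no
    # --endpoint given, stores data in an embedded SQLite database.
    kine &

    # Illustrative kubeadm config pointing the API server at Kine as if it
    # were an external etcd cluster.
    cat <<'EOF' > kubeadm-kine.yaml
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    etcd:
      external:
        endpoints:
          - http://127.0.0.1:2379
    EOF
    sudo kubeadm init --config kubeadm-kine.yaml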

Docker, K8s and containerd

Docker is a container engine, meaning that it's a collection of tools used for building and containerizing applications. Part of the Docker toolkit is a container runtime. K8s automates the deployment, scaling and management of containerized applications, and to do this, K8s uses a container runtime. containerd is a popular container runtime that was created by Docker. K8s started off using Docker to manage containers because it needed the container runtime that Docker provided. Until recently, K8s used a shim layer called Dockershim in order to support the use of Docker as its container runtime. As K8s evolved, the need to support more than one container runtime arose, and this gave rise to the Container Runtime Interface (CRI): a high-level spec for a container runtime. This development made it possible for K8s to work with any container runtime that implements the CRI. Docker split its container runtime out into containerd and made containerd CRI compliant. This allowed K8s to use Docker's container runtime, containerd, without going through Docker itself, and it removed the need for Dockershim in K8s. As a result, Dockershim has been deprecated and is removed as of release 1.24. If you're wondering whether there are other container runtimes you can use with K8s, the answer is yes. The container runtime CRI-O also implements the CRI, and CRI-O is not provided by Docker.
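In practice, "implementing the CRI" means exposing a gRPC endpoint that the kubelet can be pointed at, so swapping runtimes is largely a matter of changing that endpoint. Here's a small sketch using the common default socket paths (yours may differ):

    # The kubelet speaks CRI over a Unix socket; the runtime behind it can be
    # containerd, CRI-O, or any other CRI-compliant runtime.
    #   containerd (default): unix:///run/containerd/containerd.sock
    #   CRI-O (default):      unix:///var/run/crio/crio.sock

    # Example: pointing the kubelet at containerd (usually set by kubeadm or
    # in a kubelet drop-in file rather than typed by hand).
    kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock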

K8s is able to manage container images with either containerd or CRI-O thanks to the Open Container Initiative (OCI). OCI is a spec that defines the standards for container images and for running containers. This means that the images managed by containerd and CRI-O are OCI-compliant images. Below container runtimes like containerd and CRI-O sit lower-level container runtimes like runc. runc spawns and runs containers according to the OCI spec.
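To make the layering concrete, here's a rough sketch of driving runc directly. It assumes you've already extracted an image's root filesystem into the bundle (for example with "docker export" or containerd's tooling), which is the part the higher-level runtimes normally handle for you:

    # An OCI bundle is just a directory containing a rootfs/ and a config.json.
    mkdir -p mybundle/rootfs
    # ...populate mybundle/rootfs with an extracted image filesystem (omitted)...

    cd mybundle
    # Generate a default OCI runtime spec (config.json)...
    runc spec
    # ...and spawn a container from the bundle according to that spec.
    sudo runc run my-test-container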

In Conclusion

Earlier K8s releases needed Docker as a container runtime in order to manage containers, while newer K8s releases use CRI-compliant container runtimes such as containerd. If you find yourself using containerd as your container runtime like I did, you can drive it with the CLI called crictl, which works with any CRI-compliant container runtime. By using crictl with containerd, I was able to set up Kine with Kubeadm.
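For anyone making the same switch, here's a rough mapping of a few everyday Docker CLI commands to their crictl equivalents, assuming crictl is pointed at containerd's default socket (for example via /etc/crictl.yaml):

    # Point crictl at containerd's CRI endpoint (containerd's default path).
    cat <<'EOF' | sudo tee /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    EOF

    sudo crictl ps       # roughly: docker ps      (list running containers)
    sudo crictl images   # roughly: docker images  (list images)
    sudo crictl pull docker.io/library/busybox:latest   # roughly: docker pull
    sudo crictl logs <container-id>                      # roughly: docker logs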

For the K3s and K8s comparison, I found that choosing one over the other really depends on the use case. K3s makes setting up a cluster easy, while K8s with Kubeadm gives the user more opportunities to configure the cluster during setup. Setup times for both the K3s and K8s clusters were relatively fast, but the K3s cluster was quicker to set up. Overall, K3s had a smaller resource usage footprint. I'll share K3s and K8s resource usage and Kbench performance results in a subsequent blog post. Stay tuned!

Stay tuned to the Open Source Blog and follow us on Twitter for more deep dives into the world of open source contributing.
