Kubernetes Interfaces
Any good platform project needs to be extensible. Kubernetes accomplishes this through APIs.
What follows is a brief overview of the major interfaces Kubernetes exposes for extensibility.
Container runtime interface (CRI) – The container runtime is responsible for running and managing containers. Before the CRI, nodes just ran Docker (via the dockershim built into the kubelet), although different implementations existed (e.g., rkt from CoreOS). The implementation was eventually split out, and Docker helped create a blessed open container runtime, containerd, which Docker proper now uses. Some other use cases:
Run VMs instead of containers (Kata Containers, firecracker-containerd)
Virtualize the entire kubelet (Virtual Kubelet)
Not Docker (cri-o)
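To make the CRI concrete, here is a minimal Go sketch that talks to a CRI runtime directly over its gRPC socket using the published CRI API definitions. The containerd socket path is an assumption – cri-o and other runtimes listen elsewhere – and in practice you would reach for crictl rather than writing this by hand.

// Hedged sketch: querying a CRI runtime (assumed to be containerd) over its gRPC socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed containerd CRI endpoint; cri-o, for example, listens at /var/run/crio/crio.sock.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Version is the simplest CRI call: it reports the runtime name and version.
	version, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", version.RuntimeName, version.RuntimeVersion)

	// ListContainers shows roughly what the kubelet sees on this node.
	containers, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}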
Container network interface (CNI) – The interface responsible for container networking: providing network connectivity to running containers and releasing allocated resources when containers are deleted. CNI is used within the CRI but also at the cluster level – Calico, Weave, Cilium, and other network overlays implement it. Other container orchestrators have also adopted CNI: AWS ECS, Nomad, OpenShift, Singularity, Apache Mesos, and Cloud Foundry.
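To make the plugin contract concrete, here is a stdlib-only Go sketch of what a CNI plugin is: an executable that receives its parameters in CNI_* environment variables and the network configuration as JSON on stdin, then prints a JSON result on stdout. The IP address and version numbers below are placeholders; a real plugin would use the helper packages in github.com/containernetworking/cni and actually configure interfaces inside the container's network namespace.

// Sketch of the CNI plugin contract; no real networking is performed.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// NetConf mirrors the common fields of a CNI network configuration.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	var conf NetConf
	if err := json.NewDecoder(os.Stdin).Decode(&conf); err != nil {
		fmt.Fprintln(os.Stderr, "bad network config:", err)
		os.Exit(1)
	}

	containerID := os.Getenv("CNI_CONTAINERID")
	netns := os.Getenv("CNI_NETNS")
	ifname := os.Getenv("CNI_IFNAME")

	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plugin would create a veth pair in netns for containerID,
		// assign an address, and set up routes. Here we only echo a placeholder result.
		_ = containerID
		_ = netns
		result := map[string]interface{}{
			"cniVersion": conf.CNIVersion,
			"interfaces": []map[string]string{{"name": ifname}},
			"ips":        []map[string]string{{"address": "10.22.0.2/24"}}, // made-up address
		}
		json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// Tear down whatever ADD created; DEL must be idempotent.
	case "VERSION":
		json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
			"cniVersion":        "1.0.0",
			"supportedVersions": []string{"0.4.0", "1.0.0"},
		})
	}
}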
Container storage interface (CSI) – Manages container volumes – persistent or ephemeral storage attached to running containers. Most cloud providers implement CSI for their storage solutions: AWS EFS, Google Cloud Filestore, or plain NFS.
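CSI is likewise a gRPC specification. The hedged sketch below asks a driver to identify itself over its Unix socket; the socket path and driver name are hypothetical, and a real driver also implements the Controller and Node services that handle provisioning and mounting.

// Hedged sketch: calling a CSI driver's Identity service over an assumed socket path.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Hypothetical socket path; each driver documents where it listens,
	// typically under /var/lib/kubelet/plugins/<driver-name>/.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/plugins/example.csi.driver/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Every CSI driver implements the Identity service.
	identity := csi.NewIdentityClient(conn)
	info, err := identity.GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("CSI driver: %s (version %s)\n", info.Name, info.VendorVersion)
}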
Custom resource definitions (CRDs) – An interface used to extend the Kubernetes API server with custom API objects. This lets you manage your own resources alongside the built-in ones like pods, deployments, and nodes. Combined with a custom controller, even complex applications can be managed natively through regular Kubernetes tooling, without a modified cluster.
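As a rough sketch of what registering a CRD involves, the following Go program builds a CustomResourceDefinition for a hypothetical CronTab resource (the group, kind, and schema are made up) and creates it through the apiextensions client. Applying an equivalent YAML manifest with kubectl is the more common workflow.

// Hedged sketch: registering a made-up CronTab custom resource with the API server.
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	crd := &apiextensionsv1.CustomResourceDefinition{
		// The object name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "crontabs.stable.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "stable.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "crontabs",
				Singular: "crontab",
				Kind:     "CronTab",
				ListKind: "CronTabList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"cronSpec": {Type: "string"},
									"replicas": {Type: "integer"},
								},
							},
						},
					},
				},
			}},
		},
	}

	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// After this call, `kubectl get crontabs` works like any built-in resource.
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
		context.Background(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}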
API aggregation is another way to extend the API server: requests for certain API groups are proxied to a separate extension API server.
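Aggregation is configured with an APIService object, which tells the main API server which group/version to delegate and which in-cluster Service to proxy it to. The sketch below constructs a hypothetical one and prints it; the group and service names are invented, and metrics-server is the best-known real user of this mechanism.

// Hedged sketch: a hypothetical APIService object for API aggregation, printed as JSON.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiregistration.k8s.io/v1", Kind: "APIService"},
		ObjectMeta: metav1.ObjectMeta{Name: "v1beta1.metrics.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "metrics.example.com",
			Version: "v1beta1",
			// Requests under /apis/metrics.example.com/v1beta1 get proxied to this Service.
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "example-system",
				Name:      "example-metrics-apiserver",
				Port:      &port,
			},
			InsecureSkipTLSVerify: true, // for illustration only; use a CABundle in practice
			GroupPriorityMinimum:  100,
			VersionPriority:       100,
		},
	}

	out, _ := json.MarshalIndent(apiService, "", "  ")
	fmt.Println(string(out))
}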