CKAD

Cards (204)

  • Health Check
    • kubectl cluster-info
  • View all Pods
    • kubectl get pods --all-namespaces
  • View all Nodes
    • kubectl get nodes
  • View Local Containers
    • crictl ps
  • SSH to Second node
    • ssh node01
  • API Server The primary interaction point for all Kubernetes components and users. This is where we get, add, delete, and mutate objects. The API server delegates state to a backend, which is most commonly etcd.
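    • Example: interacting with the API server directly from kubectl (a minimal sketch; output depends on the cluster):
    • kubectl get --raw /healthz
    • kubectl api-resources | head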
  • kubelet The on-host agent that communicates with the API server to report the status of a node and understand what workloads should be scheduled on it. It communicates with the host’s container runtime, such as Docker, to ensure workloads scheduled for the node are started and healthy.
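    • Example: inspecting the kubelet on a node (a minimal sketch; assumes a systemd-managed kubelet, as on typical kubeadm-built nodes):
    • ssh node01
    • systemctl status kubelet
    • journalctl -u kubelet --since "10 min ago"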
  • Controller Manager A set of controllers bundled in a single binary that handle reconciliation of many core objects in Kubernetes. When desired state is declared, e.g. three replicas in a Deployment, a controller within handles the creation of new Pods to satisfy this state.
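    • Example: declaring desired state and watching a controller reconcile it (a minimal sketch; the name "web" and the nginx image are illustrative):
    • kubectl create deployment web --image=nginx --replicas=3
    • kubectl delete pod -l app=web --wait=false   # the ReplicaSet controller recreates the deleted Pods
    • kubectl get pods -l app=web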
  • Scheduler Determines where workloads should run based on what it thinks is the optimal node. It uses filtering and scoring to make this decision.
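    • Example: observing the scheduler's placement decisions (a minimal sketch; assumes some Pods are already running, e.g. the "web" Deployment above):
    • kubectl get pods -o wide   # the NODE column shows where each Pod was placed
    • kubectl get events --field-selector reason=Scheduled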
  • Kube Proxy Implements Kubernetes Services, providing virtual IPs that can route to backend Pods. This is accomplished using a packet filtering mechanism on a host, such as iptables or IPVS.
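    • Example: creating a Service and observing the virtual IP kube-proxy programs for it (a minimal sketch; assumes the "web" Deployment from the earlier example exists):
    • kubectl expose deployment web --port=80
    • kubectl get service web   # the CLUSTER-IP column shows the VIP routing to the backend Pods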
  • Architecture Kubernetes architectures have many variations. For example, many clusters run kube-apiserver, kube-scheduler, and kube-controller-manager as containers. This means the control plane may also run a container runtime, kubelet, and kube-proxy.
  • Kube Architecture The primary components that make up a Kubernetes cluster. (In the source diagram, dashed borders represent components that are not part of core Kubernetes.)
  • Kube-Proxy Kube-proxy programs hosts to provide a virtual IP (VIP) experience for workloads. As a result, internal IP addresses are established and route to one or many underlying Pods. This concern certainly goes beyond running and scheduling containerized workloads. In theory, rather than implementing this as part of core Kubernetes, the project could have defined a Service API and required a plug-in to implement the Service abstraction. This approach would have required users to choose from a variety of plug-ins in the ecosystem rather than including it as core functionality.
  • Service API This is the model many Kubernetes APIs, such as Ingress and NetworkPolicy, take. For example, creation of an Ingress object in a Kubernetes cluster does not guarantee action is taken. In other words, while the API exists, it is not core functionality. Teams must consider what technology they’d like to plug in to implement this API.
  • Ingress For Ingress, many use a controller such as ingress-nginx, which runs in the cluster. It implements the API by reading Ingress objects and creating NGINX configurations for NGINX instances pointed at Pods. However, ingress-nginx is one of many options. Project Contour implements the same Ingress API but instead programs instances of Envoy, the proxy that underlies Contour. Thanks to this pluggable model, there are a variety of options available to teams.
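    • Example: creating an Ingress object for an installed controller (such as ingress-nginx or Contour) to implement (a minimal sketch; the host, class, and backend names are illustrative):
    • kubectl create ingress web --class=nginx --rule="demo.example.com/*=web:80"
    • kubectl get ingress web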
  • Kubernetes Interfaces A specific example of this interface/plug-in relationship is the Container Runtime Interface (CRI). In the early days of Kubernetes there was a single supported container runtime: Docker. While Docker is still present in many clusters today, there is growing interest in using alternatives such as containerd or CRI-O.
  • Interfaces In many interfaces, commands such as CreateContainerRequest or PortForwardRequest are issued as remote procedure calls (RPCs). In the case of CRI, the communication happens over gRPC and the kubelet expects responses such as CreateContainerResponse and PortForwardResponse.
  • Kube Interfaces containerd supports a plug-in that acts as a shim between the kubelet and its own interfaces. Regardless of the exact architecture, the key is getting the container runtime to execute workloads without the kubelet needing to have operational knowledge of how this occurs for every possible runtime. This concept is what makes interfaces so powerful in how we architect, build, and deploy Kubernetes clusters.
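    • Example: talking to the container runtime over CRI with crictl (a minimal sketch; the socket path assumes containerd's default location on the node):
    • crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    • crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps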
  • Cloud Provider Integrations Cloud providers originally had to maintain their own complete versions of Kubernetes. As a better solution, support was moved into its own interface model, e.g. kubernetes/cloud-provider, that can be implemented by multiple projects or vendors. Along with minimizing sprawl in the Kubernetes code base, this enables cloud provider integration (CPI) functionality to be managed out of band of the core Kubernetes clusters. This includes common procedures such as upgrades or patching vulnerabilities.
  • Interfaces
    • The Container Networking Interface (CNI) enables networking providers to define how they do things, from IPAM to actual packet routing.
    • The Container Storage Interface (CSI) enables storage providers to satisfy intra-cluster workload requests. Commonly implemented for technologies such as Ceph, vSAN, and EBS.
    • The Container Runtime Interface (CRI) enables a variety of runtimes; common ones include Docker, containerd, and CRI-O. It has also enabled a proliferation of less traditional runtimes, such as Firecracker, which leverages KVM to provision a minimal VM.
    • The Service Mesh Interface (SMI) is one of the newer interfaces to hit the Kubernetes ecosystem. It hopes to drive consistency when defining things such as traffic policy, telemetry, and management.
    • The Cloud Provider Interface (CPI) enables providers such as VMware, AWS, Azure, and more to write integration points for their cloud services with Kubernetes clusters.
  • Open Container Initiative The Open Container Initiative (OCI) specifications standardize container image and runtime formats, ensuring that a container image built from one tool, when compliant, can be run in any OCI-compliant container runtime. This is not directly tied to Kubernetes but has been an ancillary help in driving the desire to have pluggable container runtimes (CRI).
  • Kubernetes Platforms A major trade-off with many prebuilt application platforms is the need to conform to their view of the world. Delegating ownership of the underlying system takes a significant operational weight off your shoulders. At the same time, if how the platform approaches concerns like service discovery or secret management does not satisfy your organizational requirements, you may not have the control required to work around that issue.
  • Production-Ready Engineering Efforts
  • K8 Junctures Once Kubernetes clusters exist, we next need to look at a conceptual flow to determine what we should build on top. From that point, you can expect to quickly receive questions such as: “How do I ensure workload-to-workload traffic is fully encrypted?” “How do I ensure egress traffic goes through a gateway, guaranteeing a consistent source CIDR?” “How do I provide self-service tracing and dashboards to applications?” “How do I let developers onboard without being concerned about them becoming Kubernetes experts?”
  • Dev Needs In order to be successful, we must constantly be in tune with our development groups to understand where there are issues or potential missing features that could increase development velocity. A good place to start is considering what level of interaction with Kubernetes we should expect from our developers. This is the question of how much or how little we should abstract.
  • K8 Abstraction A pitfall we’ve seen is an obsession with abstraction causing an inability to expose key features of Kubernetes. Over time, this can become a cat-and-mouse game of trying to keep up with the project, potentially making your abstraction as complicated as the system it’s abstracting.
  • Service Mesh Pros & Cons Pros of introducing a service mesh:
    • Java apps no longer need to bundle libraries to facilitate mTLS.
    • Non-Java applications can take part in the same mTLS/encryption system.
    • Lessened complexity for application teams to solve for.
    Cons:
    • Running a service mesh is not a trivial task; it is another distributed system with operational complexity.
    • Service meshes often introduce features far beyond identity and encryption.
    • The mesh’s identity API might not integrate with the same backend system used by the existing applications.
  • Building Blocks Let’s wrap up this chapter by concretely identifying key building blocks you will have available as you build a platform. This includes everything from the foundational components to optional platform services you may wish to implement.
  • Container runtime The container runtime will facilitate the life cycle management of our workloads on each host. This is commonly implemented using a technology that can manage containers, such as CRI-O, containerd, or Docker. The ability to choose between these different implementations is thanks to the Container Runtime Interface (CRI). Along with these common examples, there are specialized runtimes that support unique requirements, such as the desire to run a workload in a micro-VM.
  • Container networking Our choice of container networking will commonly address IP address management (IPAM) of workloads and routing protocols to facilitate communication. Common technology choices include Calico and Cilium; the ability to choose between them is thanks to the Container Networking Interface (CNI). By plugging a container networking technology into the cluster, the kubelet can request IP addresses for the workloads it starts. Some plug-ins go as far as implementing service abstractions on top of the Pod network.
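    • Example: observing the Pod IPs handed out via the CNI plug-in's IPAM (a minimal sketch; addresses depend on the plug-in's configured Pod CIDR):
    • kubectl get pods -o wide --all-namespaces   # the IP column shows addresses allocated on the Pod network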
  • Storage integration Storage integration covers what we do when the on-host disk storage just won’t cut it. In modern Kubernetes, more and more organizations are shipping stateful workloads to their clusters. These workloads require some degree of certainty that the state will be resilient to application failure or rescheduling events. Storage can be supplied by common systems such as vSAN, EBS, Ceph, and many more. The ability to choose between various backends is facilitated by the Container Storage Interface (CSI). Similar to CNI and CRI, we are able to deploy a plug-in to our cluster that understands how to satisfy the storage needs requested by the application.
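    • Example: inspecting the storage plumbing a CSI plug-in exposes (a minimal sketch; output depends on which provisioner and classes are installed):
    • kubectl get storageclass   # classes map PersistentVolumeClaims to a CSI provisioner
    • kubectl get persistentvolumeclaims --all-namespaces
    • kubectl get persistentvolumes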
  • Service routing Service routing is the facilitation of traffic to and from the workloads we run in Kubernetes. Kubernetes offers a Service API, but this is typically a stepping stone for support of more feature-rich routing capabilities. Service routing builds on container networking and creates higher-level features such as layer 7 routing, traffic patterns, and much more. Many times these are implemented using a technology called an Ingress controller. At the deeper end of service routing comes a variety of service meshes. This technology is fully featured with mechanisms such as service-to-service mTLS, observability, and support for application mechanisms such as circuit breaking.
  • Secret management Secret management covers the management and distribution of sensitive data needed by workloads. Kubernetes offers a Secrets API where sensitive data can be interacted with. However, out of the box many clusters don’t have the robust secret management and encryption capabilities demanded by many enterprises. This is largely a conversation around defense in depth. At a simple level, we can ensure data is encrypted before it is stored (encryption at rest). At a more advanced level, we can provide integration with technologies focused on secret management, such as Vault or CyberArk.
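    • Example: creating and reading a Secret through the Secrets API (a minimal sketch; the name and literal value are illustrative):
    • kubectl create secret generic db-creds --from-literal=password=s3cr3t
    • kubectl get secret db-creds -o yaml   # values are only base64-encoded; they are not encrypted unless encryption at rest is configured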
  • Identity Identity covers the authentication of humans and workloads. A common initial ask of cluster administrators is how to authenticate users against a system such as LDAP or a cloud provider’s IAM system. Beyond humans, workloads may wish to identify themselves to support zero-trust networking models, where impersonation of workloads is far more challenging. This can be facilitated by integrating an identity provider and using mechanisms such as mTLS to verify a workload.
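    • Example: checking your own identity and minting a workload identity (a minimal sketch; kubectl auth whoami and kubectl create token require relatively recent Kubernetes versions, and the ServiceAccount name is illustrative):
    • kubectl auth whoami
    • kubectl create serviceaccount build-bot
    • kubectl create token build-bot   # short-lived token a workload or external system could present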
  • Authorization/admission control Authorization is the next step after we can verify the identity of a human or workload. When users or workloads interact with the API server, how do we grant or deny their access to resources? Kubernetes offers an RBAC feature with resource/verb-level controls, but what about custom authorization logic specific to our organization? Admission control is where we can take this a step further by building out validation logic that can range from looking over a static list of rules to dynamically calling other systems to determine the correct response.
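    • Example: granting and verifying resource/verb-level access with RBAC (a minimal sketch; the role and user names are illustrative):
    • kubectl create role pod-reader --verb=get,list,watch --resource=pods
    • kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane
    • kubectl auth can-i list pods --as=jane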
  • Software supply chain The software supply chain covers the entire life cycle of getting software from source code to runtime. This involves the common concerns around continuous integration (CI) and continuous delivery (CD). Many times, developers’ primary interaction point is the pipelines they establish in these systems. Getting the CI/CD systems working well with Kubernetes can be paramount to your platform’s success. Beyond CI/CD are concerns around the storage of artifacts, their safety from a vulnerability standpoint, and ensuring the integrity of images that will be run in your cluster.
  • Observability Observability is the umbrella term for all things that help us understand what’s happening with our clusters, at both the system and application layers. Typically we think of observability as covering three key areas: logs, metrics, and tracing. Logging typically involves forwarding log data from workloads on the host to a target backend system, where we can aggregate and analyze logs in a consumable way. Metrics involves capturing data that represents some state at a point in time; we often aggregate or scrape this data into a system for analysis. Tracing has largely grown in popularity out of the need to understand the interactions between the various services that make up our application stack. As trace data is collected, it can be brought up to an aggregate system where the life of a request or response is shown via some form of context or correlation ID.
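    • Example: the built-in starting points for logs and metrics (a minimal sketch; kubectl top requires metrics-server or an equivalent to be installed, and the "web" Deployment name is illustrative):
    • kubectl logs deploy/web --tail=20
    • kubectl top pods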
  • Developer abstractions Developer abstractions are the tools and platform services we put in place to make developers successful in our platform. As discussed earlier, abstraction approaches live on a spectrum. Some organizations will choose to make the usage of Kubernetes completely transparent to the development teams. Other shops will choose to expose many of the powerful knobs Kubernetes offers and give significant flexibility to every developer. Solutions also tend to focus on the developer onboarding experience, ensuring developers can be given access to and secure control of an environment they can utilize in the platform.
  • Managed Service Versus Roll Your Own Before we get further into the topic of deployment models for Kubernetes, we should address whether you should even have a full deployment model at all. Cloud providers offer managed Kubernetes services that mostly alleviate the deployment concerns. You should still develop reliable, declarative systems for provisioning these managed Kubernetes clusters, but it may be advantageous to abstract away most of the details of how the cluster is brought up.
  • Managed Services In essence, with a managed service you get a Kubernetes control plane that you can attach worker nodes to at will. The obligation to scale, ensure availability of, and manage the control plane is alleviated. These are each significant concerns. Furthermore, if you already use a cloud provider’s existing services, you get a leg up. For example, if you are in Amazon Web Services (AWS) and already use Fargate for serverless compute, Identity and Access Management (IAM) for role-based access control, and CloudWatch for observability, you can leverage these with Elastic Kubernetes Service (EKS) and solve for several concerns in your app platform.