Kubernetes

Subdecks (2)

Cards (51)

  • Switching From Monolithic To Microservices
    • A monolithic application is built as a single unit: a single code base and, usually, a single executable deployed as a single component. This worked, and still works, for many companies; however, some have switched to a microservices architecture, where the application is broken down into separate components, each usually serving a different business function. This means high-demand business functions can be scaled without having to scale the entire application.
  • Why Kubernetes?
    • Imagine you have a Docker container that is running an app that can be accessed externally. Suddenly, this application starts receiving a lot of traffic, so a new container with this application needs to be spun up so the traffic can be distributed between the two. This is where Kubernetes comes in as a container orchestration system. 
    • The term "orchestration" conjures up images of a literal orchestra. This metaphor does have some traction, with the containers being like instruments and Kubernetes being the conductor in control of the flow.
  • Kubernetes:
    • Kubernetes makes an application highly available and scalable. It does this by, for example, duplicating an application (and its database component) and load balancing external requests across the available replicas, therefore removing a single point of failure from the architecture and removing bottlenecks that can slow down application response times.
    • Kubernetes allows workloads to be scaled up or down to match demand.
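    • As a sketch, scaling can be done with a single kubectl command (this assumes a deployment named test-nginx-deployment, the example name used in the Deployments card below, already exists):

      # hypothetical example — the deployment name is illustrative
      kubectl scale deployment test-nginx-deployment --replicas=5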
  • Kubernetes:
    • Kubernetes is also highly portable: it can be run anywhere, on nearly any type of infrastructure, and can be used in a single-cloud or multi-cloud setup, making it a very flexible tool.
    • Another benefit of Kubernetes is its popularity; because of this, many other technologies are compatible with it and can be combined with it, making each tool more powerful.
  • Kubernetes Pod
    • Pods are the smallest deployable unit of computing you can create and manage in Kubernetes.
    • You can think of a pod as a group of one or more containers. These containers share storage and network resources. Because of this, containers in the same pod can communicate easily as if they were on the same machine, whilst maintaining a degree of isolation. Pods are treated as a unit of replication in Kubernetes; if a workload needs to be scaled up, you will increase the number of pods running.
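    • As a sketch, here is a hypothetical pod definition with two containers sharing a volume (all names and images are illustrative): the helper container writes a file that the web container then serves, which is only possible because both mount the same pod-level volume.

      # hypothetical example — names and images are illustrative
      apiVersion: v1
      kind: Pod
      metadata:
        name: shared-pod
      spec:
        volumes:
          - name: shared-data
            emptyDir: {}          # scratch volume shared by both containers
        containers:
          - name: web
            image: nginx:latest
            volumeMounts:
              - name: shared-data
                mountPath: /usr/share/nginx/html
          - name: helper
            image: busybox:latest
            command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
            volumeMounts:
              - name: shared-data
                mountPath: /data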
  • Kubernetes Nodes
    • Kubernetes workloads (applications) are run inside containers, which are placed in a pod. These pods run on nodes. When talking about node architecture, there are two types to consider: the control plane (also known as the "master node") and worker nodes. Both of these have their own architecture/components.
    • Nodes can either be a virtual or physical machine. Think of it this way: if applications run in containers which are placed in a pod, nodes contain all the services necessary to run pods.
  • Kubernetes Cluster 
    • At the highest level, we have our Kubernetes Cluster; put simply, a Cluster is just a set of nodes. 
  • Control Plane:
    • Kube-apiserver: The API server is the front end of the control plane and is responsible for exposing the Kubernetes API. The kube-apiserver component is scalable, meaning multiple instances can be created so traffic can be load-balanced.
  • Control Plane:
    • Etcd: Etcd is a key/value store containing cluster data / the current state of the cluster. It is highly available and consistent. If a change is made in the cluster (for example, another pod is spun up), this will be reflected in the key/value store, etcd. The other control plane components rely on etcd as their information store and query it for cluster state.
  • Control Plane:
    • Kube-scheduler: The kube-scheduler component actively monitors the cluster. Its job is to catch any newly created pods that have yet to be assigned to a node and make sure they get assigned to one. It makes this decision based on specific criteria, such as the resources used by the running application or the resources available on all worker nodes.
  • Control Plane:
    • Kube-controller-manager: This component is responsible for running the controller processes. There are many different types of controller process; one example is the node controller, which is responsible for noticing when nodes go down. The control plane then responds, for example by having the scheduler reschedule the affected pods onto healthy nodes.
  • Control Plane:
    • Cloud-controller-manager: This component enables communication between a Kubernetes cluster and a cloud provider API. The purpose of this component is to allow the separation of components that communicate internally within the cluster and those that communicate externally by interacting with a cloud provider. This also allows cloud providers to release features at their own pace.
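    • On a kubeadm-style cluster (an assumption; managed clouds often hide these), the control plane components themselves run as pods, so you can list them:

      kubectl get pods -n kube-system
      # typically includes kube-apiserver-<node>, etcd-<node>,
      # kube-scheduler-<node> and kube-controller-manager-<node>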
  • Control Plane (diagram: the control plane components working together)
  • Kubernetes Worker Node 
    • Worker nodes are responsible for maintaining running pods.
  • Kubelet 
    • Kubelet is an agent that runs on every node in the cluster and is responsible for ensuring containers are running in a pod. Kubelet is provided with pod specifications and ensures the containers detailed in those specifications are running and healthy! It carries out instructions delivered via the kube-apiserver (for example, originating from the controller manager), such as starting a pod with a container inside.
  • Kube-proxy 
    • Kube-proxy is responsible for network communication within the cluster. It maintains network rules so traffic can flow and be directed to a pod (from inside or outside of the cluster). Traffic won't hit a pod directly; instead, it hits something called a Service (which is associated with a group of pods) and is then directed to one of the associated pods.
  • Container runtime
    • Pods have containers running inside them. For this to happen, a container runtime must be installed on each node, e.g. Docker, rkt or runC.
  • Communicating Between Components:
    • A Kubernetes cluster contains nodes, and Kubernetes runs a workload by placing containers into pods that run on those nodes. (Diagram: how all of these components come together.)
  • Namespaces
    • In Kubernetes, namespaces are used to isolate groups of resources in a single cluster. For example, say you want to group resources associated with a particular component, or if you are using a cluster to host multiple tenants, to group resources by tenant. Resources must be uniquely named within a namespace, but the same resource name can be used across various namespaces.
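    • A namespace is itself just another Kubernetes object; as a minimal sketch (the name is illustrative):

      # hypothetical example — the namespace name is illustrative
      apiVersion: v1
      kind: Namespace
      metadata:
        name: tenant-a

      Other objects are then placed in it via 'metadata.namespace' (or with 'kubectl -n tenant-a'), so two pods can both be named 'web' as long as they live in different namespaces.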
  • ReplicaSet
    • As the name suggests, a ReplicaSet in Kubernetes maintains a set of replica pods and can guarantee the availability of a given number of identical pods (identical pods are helpful when a workload needs to be distributed between multiple pods). ReplicaSets usually aren't defined directly (neither are pods, for that matter) but are instead managed by a deployment.
  • Deployments 
    • Deployments in Kubernetes are used to define a desired state. Once this desired state is defined, the deployment controller (one of the controller processes) changes the actual state to the desired state. Deployments provide declarative updates for pods and ReplicaSets. In other words, as a user, you can define a deployment, for example "test-nginx-deployment".
    • In the definition, you can specify that you want this deployment to have a ReplicaSet comprising three nginx pods. Once the deployment is defined, the ReplicaSet will create the pods in the background.
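    • You can watch this hierarchy with kubectl once such a deployment has been applied (the names follow Kubernetes' generated-name convention; the hash and suffix vary per cluster):

      kubectl get deployments   # test-nginx-deployment
      kubectl get replicasets   # test-nginx-deployment-<hash>, owned by the deployment
      kubectl get pods          # test-nginx-deployment-<hash>-<id>, created by the ReplicaSet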
  • Stateful Applications:
    • Stateful apps store and record user data, allowing them to return to a particular state. For example, suppose you have an open session in an email application and read 3 emails, but your session is interrupted. You can reload the application, and the state will have been saved, ensuring those 3 emails are still marked as read.
  • Stateless Applications:
    • Stateless applications have no knowledge of previous user interactions, as they do not store user session data. For example, think of using a search engine to ask a question. If that session were interrupted, you would start the process again by searching the question, not relying on any previous session data.
    • For these stateless applications, deployments can be used to define and manage pod replicas. Because of the application's stateless nature, replicas can be created with random pod names, and when removed, a pod can be deleted at random.
  • StatefulSets:
    • Enable stateful applications to run on Kubernetes; unlike pods in a deployment, these pods are created in a strict order, and each has a unique ID (which is persistent, meaning if a pod fails, it will be brought back up keeping this ID).
    • In other words, these pods are created from the same specification but are not interchangeable.
  • StatefulSets:
    • StatefulSets will have one pod that can read/write to the database (because there would be absolute carnage and all sorts of data inconsistency if the other pods could), referred to as the master pod. The other pods, referred to as slave pods, can only read; each has its own replica of the storage, which is continuously synchronised to ensure any changes made by the master pod are reflected.
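    • A minimal StatefulSet sketch (the name, image and storage size are assumptions; 'serviceName' must point to a headless service that gives each pod a stable network identity). Note the stable, ordered pod IDs and the per-replica storage:

      # hypothetical example — names, image and sizes are illustrative
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: example-db
      spec:
        serviceName: example-db        # headless service providing stable per-pod DNS
        replicas: 3                    # pods get persistent IDs: example-db-0, -1, -2
        selector:
          matchLabels:
            app: example-db
        template:
          metadata:
            labels:
              app: example-db
          spec:
            containers:
              - name: db
                image: mysql:8.0
                env:
                  - name: MYSQL_ROOT_PASSWORD
                    value: example     # demo only; use a Secret in practice
                volumeMounts:
                  - name: data
                    mountPath: /var/lib/mysql
        volumeClaimTemplates:          # each replica gets its own persistent volume
          - metadata:
              name: data
            spec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi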
  • Services: Why Do We Need Them?
    • Kubernetes pods are ephemeral, meaning they have a short lifespan and are spun up and destroyed regularly. Imagine a connection needs to be made to these pods from the web.
    • For this connection to happen, an IP address is required. If IP addresses were tied to pods, they would change frequently, causing all kinds of issues; services are used so that a single static IP address can be associated with a pod and its replicas.
  • Services:
    • A service is placed in front of these pods and exposes them, acting as an access point. Having this single access point allows for requests to be load-balanced between the pod replicas. There are different types of services you can define: ClusterIP, LoadBalancer, NodePort and ExternalName. 
  • Ingress 
    • Imagine we have a web application with two different features: one exposed through service A and a new feature exposed through service B.
    • If a user requests this new feature, some kind of traffic routing needs to be in place to ensure the request gets directed to service B. That is where Ingress comes in.
    • Ingress acts as a single access point to the cluster and means that all routing rules are in a single resource. 
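    • A sketch of an Ingress routing the new feature's path to service B and everything else to service A (the path, service names and port numbers are all assumptions, and an Ingress controller must be installed in the cluster for the rules to take effect):

      # hypothetical example — paths, service names and ports are illustrative
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example-ingress
      spec:
        rules:
          - http:
              paths:
                - path: /new-feature       # requests for the new feature...
                  pathType: Prefix
                  backend:
                    service:
                      name: service-b      # ...are routed to service B
                      port:
                        number: 80
                - path: /                  # everything else...
                  pathType: Prefix
                  backend:
                    service:
                      name: service-a      # ...goes to service A
                      port:
                        number: 80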
  • K8s Configuration:
    • To configure this setup, we would require two configuration files: one for the deployment and one for the service.
    • Kubernetes config files are typically written in YAML. They can also be written in JSON, but as per the Kubernetes documentation, it is generally considered best practice to use YAML given its human-readable nature.
  • K8s Config Required Fields:
    • apiVersion: The version of the Kubernetes API you are going to use to create this object. The API version you use will depend on the object being defined.
    • kind: What kind of object you are going to create (e.g. Deployment, Service, StatefulSet).
    • metadata: This will contain data that can be used to uniquely identify the object (including name and an optional namespace).
    • spec: The desired state of the object (for deployment, this might be 3 nginx pods).
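    • Annotated on a minimal (hypothetical) object, the four required fields look like this:

      # hypothetical example — the pod name is illustrative
      apiVersion: v1              # API version; depends on the kind being defined
      kind: Pod                   # the kind of object to create
      metadata:                   # data uniquely identifying the object
        name: nginx-example
        namespace: default        # optional
      spec:                       # the desired state of the object
        containers:
          - name: nginx
            image: nginx:latest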
  • Service Configurations:
    • When defining a deployment and a service, it is generally best practice to define the service before the back-end deployment/ReplicaSet it points to (this is because, when Kubernetes starts a container, it creates an environment variable for each service that was running at that time). Here is what our example-service.yaml file looks like:
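    • (A minimal sketch consistent with the next card's description; the service name and the 8080 value for 'port' are assumptions.)

      # hypothetical reconstruction — name and 'port' value are illustrative
      apiVersion: v1
      kind: Service
      metadata:
        name: example-nginx-service
      spec:
        selector:
          app: nginx              # targets pods labelled 'app: nginx'
        ports:
          - protocol: TCP
            port: 8080            # the port the service is exposed on
            targetPort: 80        # the port the pods are listening on
        type: ClusterIP           # could also be NodePort, LoadBalancer or ExternalName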
  • Service Configuration
    • Under 'selector', we have 'app: nginx': this service will look for pods labelled 'app: nginx' and will target port 80.
    • An important distinction to make here is between the 'port' and 'targetPort' fields. The 'targetPort' is the port to which the service will send requests, i.e., the port the pods are listening on. The 'port' is the port the service itself is exposed on.
  • Deployment Configuration
    • This defines a deployment which controls a ReplicaSet (a sketch of the file follows this card). In the outer 'spec' field, we tell Kubernetes we want 3 replicas (identical pods) in this ReplicaSet.
    • This template field is the template that Kubernetes will use to create those pods and so requires its own metadata field (so the pod can be identified) and spec field (so Kubernetes knows what image to run and which port to listen on). 
    • Note that the port defined here is the same as the one in the service YAML. This is because the service's 'targetPort' is 80, and the two need to match.
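    • A sketch of what this example-deployment.yaml might look like (the deployment name and image are assumptions consistent with the cards above):

      # hypothetical reconstruction — name and image are illustrative
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-nginx-deployment
      spec:                           # outer spec: describes the deployment/ReplicaSet
        replicas: 3                   # 3 identical pods in the ReplicaSet
        selector:
          matchLabels:
            app: nginx
        template:                     # template Kubernetes uses to create the pods
          metadata:
            labels:
              app: nginx              # label the service's selector looks for
          spec:                       # inner spec: what each pod runs
            containers:
              - name: nginx
                image: nginx:latest
                ports:
                  - containerPort: 80 # must match the service's targetPort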
  • Kubectl:
    • We interact with the cluster by communicating with the kube-apiserver.
    • kubectl is a command line tool provided by Kubernetes that allows us to communicate with a Kubernetes cluster's control plane
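    • A few common commands as a sketch (the file names reuse the examples from the cards above):

      kubectl apply -f example-service.yaml      # create/update objects from a config file
      kubectl apply -f example-deployment.yaml
      kubectl get pods                           # list pods in the current namespace
      kubectl describe pod <pod-name>            # detailed state and events for one pod
      kubectl logs <pod-name>                    # container logs from a pod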