Name must be unique within a namespace. It is required when creating resources, although some resources may let a client request automatic generation of an appropriate name. Name is primarily intended for creation idempotence and configuration definition, and cannot be updated.
Logs-based metrics are good for counting the number of log entries and tracking the distribution of a value in your logs. In this case, you will use the logs-based metric to count the number of errors in your frontend service. You can then use the metric in both dashboards and alerting.
Example: a query that finds ERROR entries in logs from recommendationservice pods:
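A Logs Explorer query along these lines could do it (the resource type and label names are assumptions based on typical GKE workloads; check the actual labels on your cluster):

```
resource.type="k8s_container"
resource.labels.pod_name:"recommendationservice"
severity=ERROR
```

The `:` operator is a substring match, so this catches pods with generated name suffixes; `severity=ERROR` restricts results to error-level entries, which is what the logs-based metric would count.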
Locust is an open-source load generator that lets you load-test a web app. It can simulate many users simultaneously hitting your application endpoints at a configurable rate.
What is Kubernetes? Kubernetes is an orchestration framework for software containers. Containers are a way to package and run code that's more efficient than virtual machines. Kubernetes provides the tools you need to run containerized applications in production and at scale.
What is Google Kubernetes Engine? Google Kubernetes Engine (GKE) is a managed service for running Kubernetes on Google infrastructure.
Billing in Google Cloud:
Billing in Google Cloud is established at the project level.
When creating a Google Cloud project, a billing account must be linked to it.
The billing account contains all billing information, including payment options.
A billing account can be associated with one or more projects.
Project Billing and Free Services:
Projects without a linked billing account can only use free Google Cloud services.
This prevents unintended charges for projects not explicitly linked to billing.
Billing Options:
Billing can be configured to charge automatically, generating invoices monthly or when predefined threshold limits are reached.
Billing subaccounts allow billing charges to be separated; they are commonly used by customers who resell Google Cloud services to manage billing for their clients.
Tools for Cost Management:
Budgets and Alerts:
Budgets can be set at the billing account or project level.
Alerts can be configured to notify when costs approach the budget limit.
Webhooks can be used to automate actions in response to billing alerts, like shutting down resources or filing trouble tickets.
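The webhook idea above can be sketched as a small handler for budget notifications delivered via Pub/Sub. This is a minimal sketch, not Google's reference implementation; the JSON field names (`costAmount`, `budgetAmount`, `budgetDisplayName`) follow the documented budget-notification format, but verify them against the current Cloud Billing docs before relying on them:

```python
import base64
import json

def over_budget(pubsub_event):
    """Return True when spend has reached the configured budget.

    pubsub_event is assumed to look like a Pub/Sub message dict whose
    "data" field is base64-encoded JSON from a budget notification.
    """
    data = json.loads(base64.b64decode(pubsub_event["data"]).decode("utf-8"))
    return data["costAmount"] >= data["budgetAmount"]

# Hypothetical sample payload for illustration only.
sample = {
    "data": base64.b64encode(json.dumps(
        {"budgetDisplayName": "dev-budget",
         "costAmount": 102.5,
         "budgetAmount": 100.0}
    ).encode("utf-8"))
}
print(over_budget(sample))  # True: spend has exceeded the budget
```

A real deployment would react inside the handler, for example by disabling billing on the project or filing a trouble ticket, instead of returning a boolean.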
Billing Export to BigQuery:
Automatically exports detailed billing data to a BigQuery dataset.
Enables detailed analysis or visualization using tools like Google Data Studio.
Exporting billing data to a file is deprecated and is available only to existing customers already using that feature.
Reports:
A visual tool in the Console for monitoring expenditure based on project or services.
Quotas in Google Cloud:
Quotas are implemented to prevent unexpected billing charges due to errors or malicious attacks.
Quotas apply at the Google Cloud project level.
Two types of quotas: rate quotas (reset after a specific time) and allocation quotas (do not reset; resources need to be freed up).
Example Quotas:
GKE service has a rate quota of 3,000 API calls per minute for administrative configuration, not affecting application calls.
Allocation quotas govern the number of resources, e.g., a project having a quota of 15 Virtual Private Cloud networks.
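Current quota limits and usage for a project can be inspected from the command line. A sketch assuming the gcloud SDK is installed and `my-project` is a placeholder project ID (flag syntax is from memory; check `gcloud compute project-info describe --help`):

```shell
# List each quota metric with its limit and current usage.
gcloud compute project-info describe --project=my-project \
    --flatten="quotas[]" \
    --format="table(quotas.metric, quotas.limit, quotas.usage)"
```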
Managing Quotas:
Some quotas can be increased by requesting an increase from Google Cloud Support.
Quotas may increase automatically based on product usage.
Google Cloud Console allows users to explicitly lower quotas for specific projects.
Introduction to Containers:
Containers are a modern approach to application deployment with key features and advantages over traditional methods.
Compared to deploying applications directly to virtual machines, containers offer more efficiency and portability.
Historical Deployment Methods:
In the past, deploying applications involved setting up physical computers with dedicated purposes.
This process required physical space, power, cooling, and network connectivity.
Each computer typically served a single purpose, leading to resource wastage and complex maintenance.
Virtualization Era:
Virtualization emerged as a solution, allowing multiple virtual servers and operating systems on the same physical computer.
A hypervisor, like KVM, facilitated running virtual machines (VMs) efficiently.
Virtualization improved deployment speed, resource utilization, and portability, as VMs could be imaged and moved.
Challenges with Virtualization:
VMs had drawbacks, such as slow boot times and challenges in moving between hypervisor products.
Running multiple applications in a single VM created issues of resource sharing and dependency conflicts.
Dedicated VMs for Each Application:
To solve dependency issues, some adopted a VM-centric approach with a dedicated VM for each application.
Each application maintained its dependencies, ensuring isolation.
This method, while effective, became impractical at scale, resulting in redundancy and inefficiency.
Containerization as a Solution:
Containers offer a more efficient solution by abstracting at the level of the application and its dependencies, without virtualizing the entire machine.
Containers isolate user spaces, containing only the necessary code and dependencies for the application to run.
Key Characteristics of Containers:
Lightweight: Containers are lightweight because they don't carry a full operating system.
Quick Startup and Shutdown: Containers can be created and shut down rapidly, as they involve starting and stopping processes, not booting entire VMs.
Resource Efficiency: Containers can be scheduled tightly onto the underlying system, making them resource-efficient.
Abstraction at Application Level: Containers abstract at the level of the application and its dependencies, providing a more practical solution than VMs.
Developer Benefits of Containers:
Developers appreciate the abstraction level, as they can focus solely on the application code.
Containers allow for efficient development and deployment without worrying about underlying system details.
Containerization is the next step in code management evolution, providing a delivery vehicle for lightweight, portable, and resource-efficient applications.
Advantages of Containers:
High Performance: Containers deliver high performance and scalability.
Portability: Containers are portable, running consistently across different environments with the same Linux kernel.
Incremental Changes: Developers can make incremental changes to containers based on production images, facilitating quick deployment.
Microservices Design Pattern: Containers make it easier to adopt the microservices design pattern, enabling loosely coupled, fine-grained components.
Containers and Images:
An image encompasses an application and its dependencies, serving as a package for easy deployment.
A container is a running instance of an image, providing a lightweight, isolated environment.
Role of Docker:
Docker is a tool that facilitates both the creation and running of applications in containers.
Docker is open-source technology, but unlike Kubernetes it lacks the ability to orchestrate containers at scale.
Container Foundation:
Containers are not an intrinsic feature of Linux but derive their power from various technologies.
The foundation includes:
Linux Processes: Each process has its own virtual memory address space.
Linux Namespaces: Control what an application can see, such as process IDs, directory trees, and IP addresses.
cgroups: Control resource consumption, including CPU time and memory.
Union File Systems: Efficiently encapsulate applications and dependencies into clean, minimal layers.
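These building blocks can be poked at directly on a Linux machine. For example, util-linux's `unshare` command launches a shell in fresh PID and mount namespaces (requires root; this is a demonstration, not how container runtimes are invoked in practice):

```shell
# Start a shell in its own PID namespace with a private /proc mount.
sudo unshare --fork --pid --mount-proc bash
# Inside that shell, the process tree starts over: bash sees itself as PID 1.
ps aux
```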
Container Image Structure:
A container image is structured in layers, specified by instructions in a container manifest (e.g., Dockerfile).
Each layer is read-only, with a writable, ephemeral topmost layer during container runtime.
Layers are organized from least likely to change to most likely to change.
Dockerfile Example:
FROM: Creates a base layer pulled from a public repository (e.g., Ubuntu Linux runtime environment).
COPY: Adds a new layer containing files copied from the build tool's current directory.
RUN: Builds the application using a specified command, creating a third layer.
CMD: Specifies the command to run within the container upon launch.
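Put together, a minimal Dockerfile matching those four instructions might look like the following (the base image, paths, and build command are illustrative, not from the course):

```dockerfile
# Base layer pulled from a public repository (Ubuntu runtime environment).
FROM ubuntu:22.04
# New layer: files copied from the build tool's current directory.
COPY . /app
# Third layer: build the application with a specified command.
RUN make /app
# Command to run within the container upon launch.
CMD ["python3", "/app/app.py"]
```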
Multi-Stage Build Process:
Modern best practice is to separate the build and runtime environments.
A multi-stage build uses one container to build the final executable, while the runtime container receives only what's needed to run the application.
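A hedged sketch of such a multi-stage Dockerfile, assuming a Go application for concreteness (any compiled language follows the same pattern; image names are illustrative):

```dockerfile
# Stage 1: build environment with the full toolchain.
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Stage 2: minimal runtime image; receives only the compiled executable,
# so none of the build toolchain ends up in the final image.
FROM gcr.io/distroless/static
COPY --from=builder /bin/app /app
CMD ["/app"]
```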
Container Layers and Data Persistence:
Containers have a writable layer for changes during runtime, but it's ephemeral.
Permanent data storage must be external to a running container image.
Multiple containers can share access to the same underlying image but maintain their own data state.
Advantages of Container Layers:
Smaller images: shared base layers are stored once, so each image adds only its differences.
Faster updates as only differences need to be copied.
Efficient storage and resource utilization.
Popular Container Images:
Examples include "ubuntu," "Alpine" (noted for its small size), and the "nginx" web server.
Artifact Registry is a centralized place for storing container images, language, and OS packages.
Building Containers with Docker and Cloud Build:
Docker is a widely used command-line tool for building containers.
Google's managed service, Cloud Build, integrates with IAM and retrieves source code from various repositories.
Steps in Cloud Build can include fetching dependencies, compiling source code, running tests, and using tools like Docker, Gradle, and Maven.
Cloud Build delivers newly built images to various execution environments, including GKE, App Engine, and Cloud Functions.
Containers are structured in layers:
The tool you use to build the image reads instructions from a file called the “container manifest.” In the case of Docker-formatted container images, that’s called a Dockerfile. Each instruction in the Dockerfile specifies a layer inside the container image. Each layer is read-only. (When a container runs from this image, it will also have a writable, ephemeral topmost layer.)
You can use Artifact Registry with other Google Cloud services: IAM for access control, KMS for customer-managed encryption keys, Cloud Build for CI/CD, and Container Analysis for scanning container images for vulnerabilities.
Building containers with a Dockerfile and Cloud Build:
You can write build configuration files to tell Cloud Build which tasks to perform when building a container. These build files can fetch dependencies, run unit tests, perform analyses, and more.
Cloud Build also supports custom build configuration files.
Example cloudbuild.yaml:
The true power of custom build configuration files is their ability to perform other actions, in parallel or in sequence, in addition to simply building containers: running tests on your newly built containers, pushing them to various destinations, and even deploying them to Kubernetes Engine.
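For instance, a hypothetical cloudbuild.yaml might build an image, push it to Artifact Registry, and roll it out to GKE. The repository, cluster, region, and deployment names below are placeholders; the builder images are the standard `gcr.io/cloud-builders` ones:

```yaml
steps:
# Build the container image, tagged with the commit's short SHA.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
# Push the image to Artifact Registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
# Deploy the new image to an existing GKE Deployment.
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app',
         'my-app=us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
- 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA'
```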
Introduction to Kubernetes:
Kubernetes is a popular container management and orchestration solution designed to handle containerized applications efficiently.
Challenges Addressed by Kubernetes:
As organizations embrace containers, managing an increasing number of containers becomes challenging.
Containers lack a built-in network fabric for communication.
Kubernetes addresses these challenges by providing a robust solution for orchestrating and managing container infrastructure.
Key Characteristics of Kubernetes:
Container-Centric Management: Kubernetes is a container-centric management environment, automating various aspects of containerized applications.
Origin: Initially originated by Google and later donated to the open-source community.
Cloud Native Computing Foundation (CNCF): Currently a project of CNCF, emphasizing vendor-neutral collaboration.
Management Features: Kubernetes automates deployment, scaling, load balancing, logging, monitoring, and other management features.
Platform-as-a-Service (PaaS) Features: Encompasses PaaS features, providing an abstraction for application developers.
Infrastructure-as-a-Service (IaaS) Features: Offers IaaS features, allowing user preferences and configuration flexibility.
Declarative Configuration:
Kubernetes supports declarative configurations, where the desired system state is described, and Kubernetes ensures the system conforms to this state despite failures.
Declarative configuration reduces work and the risk of errors, providing a documented desired state.
Imperative Configuration:
While Kubernetes allows imperative configuration, it is best reserved for quick temporary fixes and as a starting point for building a declarative configuration.
The strength of Kubernetes lies in automatically maintaining a declared system state.
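As an illustration of the declarative model, a minimal Deployment manifest describes a desired state that Kubernetes then maintains (names, labels, and image are illustrative):

```yaml
# Declarative: describe three nginx replicas; Kubernetes continuously
# reconciles the cluster toward this state, replacing failed Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Applying it with `kubectl apply -f deployment.yaml` is declarative; by contrast, an imperative command such as `kubectl scale deployment nginx-deployment --replicas=5` changes the live state directly without updating the recorded desired state.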
Kubernetes Features:
Workload Types: Supports stateless applications (e.g., Nginx or Apache), stateful applications with persistent storage, batched jobs, and daemon tasks.
Auto-scaling: Automatically scales containerized applications based on resource utilization, adhering to specified resource request levels and limits.
Ecosystem of Plugins: Developers can extend Kubernetes using a rich ecosystem of plugins and add-ons.
Custom Resource Definitions (CRDs): Kubernetes supports CRDs, allowing the declarative management model to be extended to various other resources.
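The auto-scaling feature mentioned above is also typically expressed declaratively. A sketch of a HorizontalPodAutoscaler targeting a hypothetical Deployment, using the `autoscaling/v2` API:

```yaml
# Scale nginx-deployment between 2 and 10 replicas, targeting
# 60% average CPU utilization relative to the Pods' CPU requests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```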
Key Features of GKE:
Managed Kubernetes Service: GKE is a fully managed Kubernetes service on Google infrastructure.
Deployment, Management, and Scaling: Helps deploy, manage, and scale Kubernetes environments for containerized applications on Google Cloud.
Component of Google Cloud Compute Offerings: GKE is a component of Google Cloud's compute offerings, facilitating easy migration of Kubernetes workloads to the cloud.
Fully Managed and Container-Optimized OS:
GKE is fully managed, eliminating the need to provision underlying resources.
Utilizes a container-optimized operating system maintained by Google, optimized for quick scaling with minimal resource footprint.
GKE Autopilot:
Autopilot Mode: A mode in GKE where Google manages the cluster configuration, including nodes, scaling, security, and preconfigured settings.
Container Hosting Nodes: Nodes are the virtual machines hosting containers in a GKE cluster.
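Creating an Autopilot cluster is a single command; a sketch assuming the gcloud SDK is installed, with placeholder cluster name and region:

```shell
# Create an Autopilot cluster; Google manages nodes, scaling, and upgrades.
gcloud container clusters create-auto my-cluster --region=us-central1
# Fetch kubectl credentials so you can deploy workloads to it.
gcloud container clusters get-credentials my-cluster --region=us-central1
```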
Auto-Repair and Auto-Upgrade:
GKE's auto-repair feature periodically runs health checks and repairs unhealthy nodes.
The auto-upgrade feature keeps clusters automatically upgraded to the latest stable version of Kubernetes.
Scaling and Integration:
GKE supports scaling of both workloads and clusters.
Integrates with Google Identity and Access Management (IAM) for access control.
Integrates with Google's Operations Suite for monitoring and managing services, containers, applications, and infrastructure.
Integrates with Cloud Monitoring for understanding application performance.
Seamlessly integrates with Google's Virtual Private Clouds (VPCs) and utilizes networking features.
Integrates with Google’s Cloud Build and Artifact Registry, allowing automation of deployment using securely stored private container images.