Applications

Cards (22)

  • Choosing a Deployment Platform:
    • First, ask yourself whether you have specific machine and OS requirements. If you do, then Compute Engine is the platform of choice.
    • If you have no specific machine or operating system requirements then the next question to ask is whether you are using containers. If you are, then you should consider Google Kubernetes Engine or Cloud Run, depending on whether you want to configure your own Kubernetes cluster.
  • Choosing a Deployment Platform:
    • If you are not using containers, then you want to consider Cloud Functions if your service is event-driven, and App Engine if it’s not.
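  • To make the decision flow above concrete, here is a minimal Python sketch that encodes it as a function. The enum, parameter names, and helper are illustrative only; they are not part of any Google Cloud API.

    ```python
    from enum import Enum


    class Platform(Enum):
        COMPUTE_ENGINE = "Compute Engine"
        GKE = "Google Kubernetes Engine"
        CLOUD_RUN = "Cloud Run"
        CLOUD_FUNCTIONS = "Cloud Functions"
        APP_ENGINE = "App Engine"


    def choose_platform(specific_machine_or_os: bool,
                        uses_containers: bool,
                        manage_own_cluster: bool,
                        event_driven: bool) -> Platform:
        """Encodes the decision flow from the cards above."""
        if specific_machine_or_os:
            return Platform.COMPUTE_ENGINE  # full control over machine and OS
        if uses_containers:
            # Run your own Kubernetes cluster, or let Google manage it.
            return Platform.GKE if manage_own_cluster else Platform.CLOUD_RUN
        # Not containerized: event-driven work fits Cloud Functions,
        # everything else fits App Engine.
        return Platform.CLOUD_FUNCTIONS if event_driven else Platform.APP_ENGINE


    # Example: a containerized service where you don't want to run the cluster.
    print(choose_platform(False, True, False, False))  # Platform.CLOUD_RUN
    ```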
  • Compute Engine:
    • Is a great solution when you need complete control over your operating systems, or when you have an application that is not containerized, is built on a microservices architecture, or is a database.
    • Instance groups and autoscaling, as shown on this slide, allow you to meet variations in demand on your application.
  • Managed instance groups create VMs based on instance templates. Instance templates are just a resource used to define VMs and managed instance groups. The templates define the boot disk image or container image to be used, the machine type, labels, and other instance properties like a startup script to install software from a Git repository.
  • The virtual machines in a managed instance group are created by an instance group manager. Using a managed instance group offers many advantages, such as autohealing to re-create instances that don’t respond and creating instances in multiple zones for high availability.
  • I recommend using one or more instance groups as the backend for load balancers. If you need instance groups in multiple regions, use a global load balancer, and if you have static content, simply enable Cloud CDN as shown on the right.
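  • As a rough illustration of how the template and managed instance group fit together, here is a minimal sketch using the google-cloud-compute Python client (compute_v1). The project ID, zone, image, and resource names are placeholders, and the autoscaler and load balancer would be configured as separate resources on top of this group.

    ```python
    from google.cloud import compute_v1

    PROJECT = "my-project"   # placeholder project ID
    ZONE = "us-central1-a"   # placeholder zone

    # Instance template: defines the boot disk image, machine type, and other
    # instance properties that the managed instance group will stamp out.
    template = compute_v1.InstanceTemplate(
        name="web-template",
        properties=compute_v1.InstanceProperties(
            machine_type="e2-medium",
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-12"
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        ),
    )
    compute_v1.InstanceTemplatesClient().insert(
        project=PROJECT, instance_template_resource=template
    ).result()

    # Managed instance group: the instance group manager creates the VMs from
    # the template and can later serve as a load balancer backend.
    mig = compute_v1.InstanceGroupManager(
        name="web-mig",
        base_instance_name="web",
        instance_template=f"projects/{PROJECT}/global/instanceTemplates/web-template",
        target_size=2,
    )
    compute_v1.InstanceGroupManagersClient().insert(
        project=PROJECT, zone=ZONE, instance_group_manager_resource=mig
    ).result()
    ```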
  • Google Kubernetes Engine, or GKE, provides a managed environment for deploying, managing, and scaling containerized applications using Google infrastructure. The GKE environment consists of multiple Compute Engine virtual machines grouped together to form a cluster. GKE clusters are powered by the Kubernetes open source cluster management system.
  • GKE:
    • Kubernetes provides the mechanisms with which to interact with the cluster. Kubernetes commands and resources are used to deploy and manage applications, perform administration tasks and set policies, and monitor the health of deployed workloads.
    • A cluster consists of at least one cluster control plane and multiple worker machines that are called nodes. These control plane and node machines run the Kubernetes cluster orchestration system. Pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in a cluster.
  • GKE:
    • A pod represents a single instance of a running process in a cluster. Pods contain one or more containers, such as Docker containers, that run the services being deployed. You can optimize resource use by deploying multiple services to the same cluster.
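  • On GKE the usual way to run those pods is through a Deployment. Below is a minimal sketch using the official kubernetes Python client; it assumes credentials for the cluster have already been fetched locally (for example with gcloud container clusters get-credentials), and the image and names are placeholders.

    ```python
    from kubernetes import client, config

    # Assumes a kubeconfig entry for the GKE cluster already exists locally.
    config.load_kube_config()

    # A Deployment manages a ReplicaSet, which runs the pods. Each pod here
    # runs a single container built from a placeholder image.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="hello-web",
                            image="gcr.io/my-project/hello-web:v1",  # placeholder
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )
    ```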
  • Cloud Run, on the other hand, allows you to deploy containers to a Google-managed Kubernetes cluster. A big advantage is that you don’t need to manage or configure the cluster. The services that you deploy must be stateless, and the images you use must be in Container Registry. Cloud Run can be used to automate deployment to Anthos GKE clusters. You should do this if you need more control over your services, because it will allow you to access your VPC network, tune the size of compute instances, and run your services in all GKE regions.
  • The screenshot on the right shows a Cloud Run configuration where the container image URL is specified along with the deployment platform, which can be fully managed Cloud Run or Cloud Run for Anthos.
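  • Besides the console flow in the screenshot, a fully managed Cloud Run service can also be created programmatically. The sketch below uses the google-cloud-run Python client (run_v2); the project, region, service name, and image are placeholders.

    ```python
    from google.cloud import run_v2

    PROJECT = "my-project"   # placeholder
    REGION = "us-central1"   # placeholder

    # The service definition points at a container image; Cloud Run takes care
    # of the cluster, load balancing, autoscaling, and health checking.
    request = run_v2.CreateServiceRequest(
        parent=f"projects/{PROJECT}/locations/{REGION}",
        service_id="hello-service",
        service=run_v2.Service(
            template=run_v2.RevisionTemplate(
                containers=[run_v2.Container(image="gcr.io/my-project/hello-web:v1")]
            )
        ),
    )
    operation = run_v2.ServicesClient().create_service(request=request)
    operation.result()  # wait for the deployment to finish
    ```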
  • App Engine:
    • Is a fully managed, serverless application platform for building and deploying applications. Applications can be scaled seamlessly from zero upward without you having to worry about managing the underlying infrastructure. App Engine was designed for microservices. For configuration, each Google Cloud project can contain one App Engine application, and an application has one or more services.
  • App Engine:
    • Each service can have one or more versions, and each version has one or more instances. App Engine supports traffic splitting, which makes switching between versions and implementing strategies such as canary testing or A/B testing simple.
    • The diagram on the right shows the high-level organization of a Google Cloud project with two services, and each service has two versions. These services are independently deployable and versioned.
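  • Concretely, an App Engine standard service in Python is typically just a web application exposed as an `app` object, deployed together with an app.yaml that names the runtime; versions are created on each deploy, and traffic splitting is configured between them. The Flask handler below is a minimal sketch of such a service.

    ```python
    # main.py: a minimal App Engine standard (Python runtime) service.
    # It is deployed with an app.yaml that specifies the runtime; App Engine
    # provisions and scales the instances that serve it.
    from flask import Flask

    app = Flask(__name__)


    @app.route("/")
    def index():
        return "Hello from App Engine!"


    if __name__ == "__main__":
        # Local development only; on App Engine the `app` object is served
        # by the platform's own web server.
        app.run(host="127.0.0.1", port=8080, debug=True)
    ```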
  • Let me show you a typical App Engine microservice architecture.
    • This could be an example of a retailer that sells online. Here App Engine serves as the frontend for both web and mobile clients. The backend of this application is a variety of Google Cloud storage solutions with static content such as images stored in Cloud Storage, Cloud SQL used for structured relational data such as customer data and sales data, and Firestore used for NoSQL storage such as product data. Firestore has the benefit of being able to synchronize with client applications.
  • Typical App Engine Microservice Architecture:
    • Memcache is used to reduce the load on the datastores by caching queries, and Cloud Tasks is used to perform work asynchronously outside a user request (or service-to-service request). There’s also a batch application that generates data reports for management.
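  • To illustrate the asynchronous work mentioned above, here is a minimal sketch that enqueues a task with the google-cloud-tasks Python client. The project, location, queue name, and handler URI are placeholders; the task targets an App Engine handler that runs later, outside the user-facing request.

    ```python
    import json

    from google.cloud import tasks_v2

    client = tasks_v2.CloudTasksClient()
    # Placeholders: project ID, queue location, and queue name.
    parent = client.queue_path("my-project", "us-central1", "report-queue")

    # The task calls back into an App Engine handler asynchronously.
    task = tasks_v2.Task(
        app_engine_http_request=tasks_v2.AppEngineHttpRequest(
            http_method=tasks_v2.HttpMethod.POST,
            relative_uri="/tasks/generate-report",
            body=json.dumps({"report": "daily-sales"}).encode(),
        )
    )
    client.create_task(parent=parent, task=task)
    ```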
  • Cloud Functions are a great way to deploy loosely coupled, event-driven microservices. They have been designed for processing events that occur in Google Cloud. The functions can be triggered by changes in a Cloud Storage bucket, a Pub/Sub message, or HTTP requests. The platform is completely managed, scalable, and inexpensive. You do not pay if there are no requests, and processing is paid for by execution time in 100ms increments.
  • When an image is uploaded to a Cloud Storage bucket, it triggers an OCR Cloud Function that identifies the text in the image using Google’s Cloud Vision API. Once the text has been identified, this service then publishes a message to a Pub/Sub topic for translation, which triggers another Cloud Function that will translate the identified text in the image using the Cloud Translation API. After that, the translator Cloud Function will publish a message to a file write topic in Pub/Sub, which triggers a Cloud Function that will write the translated text to a file.
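  • The first step of that pipeline might look like the sketch below: a Cloud Function (written here with the first-generation background-function signature) triggered by a Cloud Storage upload that runs text detection with the Vision API and publishes the extracted text to a Pub/Sub topic. The project and topic names are placeholders, and the translate and file-write functions would follow the same pattern.

    ```python
    import json

    from google.cloud import pubsub_v1, vision

    PROJECT = "my-project"                  # placeholder
    TRANSLATE_TOPIC = "translate-requests"  # placeholder topic name

    vision_client = vision.ImageAnnotatorClient()
    publisher = pubsub_v1.PublisherClient()


    def process_image(event, context):
        """Triggered by a new object in the Cloud Storage bucket."""
        gcs_uri = f"gs://{event['bucket']}/{event['name']}"

        # Run OCR on the uploaded image with the Cloud Vision API.
        image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))
        response = vision_client.text_detection(image=image)
        text = response.full_text_annotation.text

        # Hand the extracted text to the translation function via Pub/Sub.
        topic_path = publisher.topic_path(PROJECT, TRANSLATE_TOPIC)
        message = json.dumps({"text": text, "filename": event["name"]}).encode()
        publisher.publish(topic_path, data=message)
    ```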
  • App Engine is great for those who just want to focus on their code and not worry at all about the underlying infrastructure like networks, load balancers, and autoscalers, which are completely managed by App Engine.
  • Now, sometimes developers want more freedom to customize their environments. Google Kubernetes Engine, or GKE, provides a balance where you have a lot of customization over your environment similar to Compute Engine. However, GKE also helps you optimize your spend by allowing you to deploy multiple services into the same cluster of virtual machines. This provides an excellent balance between flexibility, portability, and cost optimization.
  • Kubernetes can get pretty complicated though. That’s where Cloud Run comes in. Cloud Run allows you to deploy your own stateless, Dockerized services onto Kubernetes clusters that are managed by Google. Google Cloud takes care of the hard parts of managing the cluster and configuring the load balancers, autoscalers, and health checkers, so that you just focus on the code.
    • Deploying a single application on all of these compute platforms might help you choose the right platform for your own services.
    • Compute Engine if you need complete control over your deployment environment; Google Kubernetes Engine if you want the flexibility, portability, and automation provided by Kubernetes; and App Engine and Cloud Run if you want a completely managed platform as a service.