Security

  • When you move an application to Google Cloud, Google handles many of the lower layers of the overall security stack. Because of its scale, Google can deliver a higher level of security at these layers than most of its customers could afford to do on their own. This does not mean, however, that Google is responsible for all aspects of security.
  • Google Cloud security is a shared responsibility between you and Google, so it is important that there is a clear separation of duties and no ambiguity about what the platform provides and what you are responsible for. This requires transparency: certain actions are your responsibility as a customer, and others are Google's. Google Cloud provides the controls and features required to leverage the platform securely, together with the tools to monitor your services.
  • Google implements security in layers
    • At the base is custom-built hardware and servers that are loaded using a verified boot loading system, and security is at the forefront all the way through the stack. When you do your part, for example establishing firewall rules or configuring IAM, and they are configured correctly, you have a safe environment. Google Cloud provides tools that can be used for monitoring and auditing your networks, which we will discuss shortly, or you can install your own tools.
  • Principle of Least Privilege
    • The principle of least privilege is the practice of granting a user only the minimal set of permissions required to perform a duty. This should apply to machine instances and processes, as well as users. Google Cloud provides IAM to help apply this principle. You can use it to identify users with their login or identify machines using service accounts. Roles should be assigned to users and service accounts to restrict what they can do, always following the principle of least privilege
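    • As a minimal sketch of least privilege in practice (the project ID and user below are hypothetical), grant a narrowly scoped predefined role with gcloud instead of a broad role such as Editor:
      # Grant only the BigQuery Data Viewer role to a single user on one project
      gcloud projects add-iam-policy-binding example-project \
          --member="user:dev@example.com" \
          --role="roles/bigquery.dataViewer"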
  • Separation of duties is another best practice and it has two primary objectives:
    1. Prevention of conflict of interest
    2. Detection of control failures, for example, security breaches or information theft
    • From a practical perspective, this means that no one person can change or delete data without being detected, no one person can steal sensitive data, and no single person is in charge of designing, implementing, and reporting on sensitive systems.
  • Separation of Duties
    • For example, a developer who writes the code should not be responsible for deploying that code, and anybody that has the permission to deploy should not be able to change the code. One approach to achieve this separation of duties in Google Cloud is to use multiple projects to separate duties. Different people can be given suitable rights to different projects, with these permissions following the principle of separation of duties. Folders are especially useful for organizing multiple projects.
  • Logs:
    • It is also vital to audit Google Cloud logs to discover attacks and potential security breaches. All Google Cloud services write to audit logs, so there is a rich source of information available. These logs include admin, data access, VPC flow, firewall, and system logs, so an in-depth view of activity is provided for audit.
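    • For example, recent Admin Activity audit log entries can be read with gcloud (the project ID is hypothetical; the log name is the standard Cloud Audit Logs activity log):
      # List the ten most recent Admin Activity audit log entries
      gcloud logging read \
          'logName="projects/example-project/logs/cloudaudit.googleapis.com%2Factivity"' \
          --project=example-project --limit=10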
  • Now, moving to the cloud often requires maintaining compliance with regulatory requirements or guidelines. Google Cloud meets many third-party and government compliance standards worldwide. While Google Cloud has been certified as secure, for example to ISO/IEC 27001, HIPAA, and SOC 1, that does not mean your application running on Google Cloud is certified. Your focus should always be on securing what you build on top of the platform.
  • Security Command Center provides access to organizational and project security configuration
    • Google Cloud also offers the Security Command Center, which provides access to organizational and project security configuration. As you can see in this screenshot, the Security Command Center provides a dashboard that reports security health analysis, threat detections, anomaly detection, and a summary report. Once a threat is detected, a set of actionable recommendations is provided.
  • Granting Access:
    • When granting people access to your projects, you should add them as members and assign them one or more roles. Roles are simply a list of permissions. To see what permissions are granted to roles, use the Cloud Console as shown on the right. Here you can see the BigQuery User role (roles/bigquery.user) and the 15 permissions assigned to it. You can assign these predefined roles to members or customize your own roles.
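    • The same permission list can also be inspected from the command line; for example, describing the BigQuery User role prints the permissions it contains:
      # Show the permissions bundled into a predefined role
      gcloud iam roles describe roles/bigquery.user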
  • Granting Access
    • Now, any member added to your project will be identified by their login. To simplify management of members and their permissions, I recommend that you create groups. That way you just need to add members to a group, and new members automatically acquire the permissions of the group. The same applies when removing members from a group, which also removes the permissions of that group.
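    • A small sketch of this pattern (the group address and project ID are hypothetical): bind the role to the group once, and membership changes then adjust individual permissions automatically:
      # Grant a role to a group rather than to individual users
      gcloud projects add-iam-policy-binding example-project \
          --member="group:data-analysts@example.com" \
          --role="roles/bigquery.user"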
  • IAM
    • I also recommend using organizational policies and folders to simplify securing your environments and managing your resources. Organizational policies apply to all resources underneath an organization, and IAM policies are also inherited top to bottom, as shown on the right. Folders inherit policies of the organization, projects inherit policies of the folders, and so on.
    • Roles should be granted to groups, not individuals, because it simplifies management. Make sure to define groups carefully and make them more granular than job roles. It’s better to use multiple groups for better control.
  • IAM
    • When it comes to roles, it is better to use predefined roles over custom roles. Google has defined the roles for a reason, and it should be an exceptional case that no predefined role fits your use case. When granting roles, remember the principle of least privilege: always grant the smallest scope required. Use of the Owner and Editor roles should be limited; the majority of users do not, and should not, need them.
  • Cloud Identity-Aware Proxy (Cloud IAP)
    • Cloud IAP provides managed access to applications running in App Engine standard environment, App Engine flexible environment, Compute Engine, and GKE. It allows employees to securely access web-based applications deployed on Google Cloud without requiring a VPN. Administrators control who has access, and users are required to log in to gain access to the applications. The screenshots on the right show Cloud IAP being enabled on an App Engine application and the dialog for adding new members or permissions.
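    • As a hedged sketch (the project and user are hypothetical), granting a user access to an IAP-secured web application can be done by assigning roles/iap.httpsResourceAccessor:
      # Allow this user through Cloud IAP for IAP-secured web resources in the project
      gcloud projects add-iam-policy-binding example-project \
          --member="user:employee@example.com" \
          --role="roles/iap.httpsResourceAccessor"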
  • Identity Platform
    • Google Cloud also offers Identity Platform as a customer identity and access management (CIAM) platform for adding identity and access management to applications. In other words, Identity Platform provides sign-up and sign-in for end-user applications. Now, you need to select an identity provider to use Identity Platform. A broad range of protocol support is available, including SAML, OpenID, email and password, phone, social, and Apple. This graphic shows part of the configuration with a list of potential providers.
  • Service Account
    • A service account is a special kind of account used by an application, a virtual machine instance, or a GKE node pool. Applications or services use service accounts to make authorized API calls. The service account is the identity of the service, and the roles granted to it define the permissions that control which resources the service can access.
  • Service Account
    • A service account is both an identity and a resource. As an identity, a service account is used by your application or service to authenticate, for example, a Compute Engine VM running as a service account. To give the VM access to the necessary resources, you grant the relevant IAM roles to the service account.
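    • For example (all names are hypothetical), a service account can be created, granted a narrowly scoped role, and attached to a VM at creation time:
      # Create the service account that the application will run as
      gcloud iam service-accounts create app-sa --display-name="App service account"
      # Give the service account read-only access to Cloud Storage objects
      gcloud projects add-iam-policy-binding example-project \
          --member="serviceAccount:app-sa@example-project.iam.gserviceaccount.com" \
          --role="roles/storage.objectViewer"
      # Launch a VM that runs with that service account as its identity
      gcloud compute instances create app-vm --zone=us-central1-a \
          --service-account=app-sa@example-project.iam.gserviceaccount.com \
          --scopes=cloud-platform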
  • Service Accounts
    • At the same time, you need to control who can create VMs with the service account so that arbitrary VMs cannot assume its identity. Here, the service account is the resource to be permissioned: you assign the Service Account User role (roles/iam.serviceAccountUser) to the users you trust to use the service account.
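    • A sketch of permissioning the service account itself as a resource (the identities are hypothetical):
      # Allow only a trusted user to attach or act as this service account
      gcloud iam service-accounts add-iam-policy-binding \
          app-sa@example-project.iam.gserviceaccount.com \
          --member="user:trusted-dev@example.com" \
          --role="roles/iam.serviceAccountUser"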
  • Service Accounts:
    • Each service account is associated with public/private RSA key-pairs that are used to authenticate to Google. These keys can be Google-managed or user-managed.
    • Google-managed keys, both the public and private keys, are stored by Google, and they are rotated regularly. The maximum usage period is two weeks. For user-managed keys, the developer owns both public and private keys. They can be used from outside Google Cloud. User-managed keys can be managed by the IAM API, gcloud command-line tool, or the service accounts page in the Cloud Console.
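    • For example (the service account email is hypothetical), a user-managed key can be created and the existing keys listed with gcloud:
      # Create a new user-managed key and download the private key as JSON
      gcloud iam service-accounts keys create key.json \
          --iam-account=app-sa@example-project.iam.gserviceaccount.com
      # List all keys for the service account, for example to find old keys to rotate out
      gcloud iam service-accounts keys list \
          --iam-account=app-sa@example-project.iam.gserviceaccount.com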
  • Service Accounts
    • User-managed keys are extremely powerful credentials, and they can represent a security risk if they are not managed correctly. You can limit their use by applying the constraints/iam.disableServiceAccountKeyCreation Organization Policy Constraint to projects, folders, or even your entire organization. After applying the constraint, you can enable user-managed keys in well-controlled locations to minimize the potential risk caused by unmanaged keys. Consider using Cloud Key Management Service (Cloud KMS) to help securely manage your keys.
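    • A hedged sketch of enforcing that constraint at the organization level (the organization ID is hypothetical):
      # Turn on enforcement of the constraint for the whole organization
      gcloud resource-manager org-policies enable-enforce \
          iam.disableServiceAccountKeyCreation \
          --organization=123456789012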
  • Service Accounts:
    • It is possible to create up to 10 key-pairs per service account to support key rotation.
    • For developers to gain controlled access to resources without acquiring access to the Cloud Console, it is possible to configure the gcloud command-line utility to use service account credentials to make requests. The command on this slide, gcloud auth activate-service-account, serves the same purpose as gcloud auth login but uses the service account instead of user credentials. The key file contains the private key in JSON format
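    • The command referenced looks roughly like this (the service account email and key file name are placeholders):
      # Authenticate gcloud as the service account using its private key file
      gcloud auth activate-service-account \
          app-sa@example-project.iam.gserviceaccount.com \
          --key-file=key.json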
  • Remove external IP addresses to prevent machines from being accessed from outside their network
    • Several options are available for securely communicating with VMs that do not have public IP addresses. These instances do not have a public IP address because they are deployed to be consumed by other instances in the project or reached through Dedicated Interconnect options. However, instances without an external IP address may still require external access, for instance so that updates or patches can be applied.
    • Options include a bastion host for external access to private machines, Identity-Aware Proxy to enable SSH access, and Cloud NAT to provide egress to the internet for internal machines. VM instances that only have internal IP addresses can also use Private Google Access. The diagram shows an external client accessing VM resources via a bastion host.
    • The VM is behind a firewall where access can be filtered. Whichever method you choose, all internet traffic should terminate at a load balancer, a third-party firewall, an API gateway, or IAP. That way internal services are never exposed directly with public IP addresses.
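    • As an illustration of one of these options (names, network, and region are hypothetical), Cloud NAT can provide internet egress for instances without external IP addresses:
      # Create a Cloud Router in the VPC network, then a NAT configuration on it
      gcloud compute routers create nat-router \
          --network=my-vpc --region=us-central1
      gcloud compute routers nats create nat-config \
          --router=nat-router --region=us-central1 \
          --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges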
  • Private Google Access
    • Now, VM instances that only have internal IP addresses can use Private Google Access to access Google services that have external IP addresses. The diagram on the right shows a Compute Engine instance accessing a Cloud Storage bucket using its internal IP address. Private Google Access is enabled on a per-subnet basis, either when the subnet is created or on an existing subnet. You can achieve this either with the gcloud command shown here or through the Cloud Console.
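    • The gcloud command is along these lines (the subnet name and region are hypothetical); --enable-private-ip-google-access is the relevant flag and also exists on the subnet create command:
      # Enable Private Google Access on an existing subnet
      gcloud compute networks subnets update my-subnet \
          --region=us-central1 \
          --enable-private-ip-google-access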
  • Firewalls:
    • Regardless of whether your VM instances have public IP addresses, you should always configure firewall rules to control access. By default, ingress on all ports is denied and all egress is allowed. It’s your responsibility to define separate rules to allow or deny access to specific instances for specific IP ranges, protocols, and ports.
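    • A minimal sketch of such a rule (the network name and source range are hypothetical): allow SSH only from a known address range, and rely on the implied rules to deny all other ingress:
      # Allow inbound SSH only from a trusted range
      gcloud compute firewall-rules create allow-ssh-from-office \
          --network=my-vpc --direction=INGRESS --action=ALLOW \
          --rules=tcp:22 --source-ranges=203.0.113.0/24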
  • Firewalls:
    • This graphic shows some scenarios where firewall rules should be configured. Egress from Compute Engine to external servers is the first scenario. For ingress, firewall rules should be configured whether direct access to an instance is being provided or access is via a load balancer. The right-hand graphic shows the scenario of VM instance-to-instance communication; firewall rules should also be configured here to control access. Remember, you're still responsible for application-level security.
  • Cloud Endpoints
    • If you need to manage APIs, you can use Cloud Endpoints. Endpoints is an API management gateway that helps you develop, deploy, and manage APIs on any Google Cloud backend. It provides functionality to protect and monitor your public APIs, control who has access (using, for example, Auth0), and validate every call with a JSON Web Token signed with a service account private key. Cloud Endpoints also integrates with Identity Platform for authentication.
  • TLS:
    • All Google Cloud service endpoints use HTTPS. I recommend that you use TLS for your service endpoints, and it is your responsibility to configure your service endpoints for TLS. When configuring load balancers, only ever create secure frontends. This dialog shows the configuration of a frontend and the protocol selected is HTTPS, with the certificate also being selected.
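    • A hedged command-line equivalent (the domain and resource names are hypothetical): create a Google-managed certificate and attach it to the HTTPS proxy that fronts the load balancer:
      # Provision a Google-managed SSL certificate for the domain
      gcloud compute ssl-certificates create web-cert --domains=www.example.com
      # Attach the certificate to the HTTPS frontend proxy (web-map is an existing URL map)
      gcloud compute target-https-proxies create web-https-proxy \
          --url-map=web-map --ssl-certificates=web-cert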
  • DDoS
    • Google provides infrastructure DDoS support through global load balancers for Layer 3 and Layer 4 traffic. If you have enabled Cloud CDN, this also protects backend resources, because a DDoS attack results in a cache hit instead of reaching your resources, as shown on the right.
  • Cloud Armor
    • For additional features over the built-in DDoS protection, you can use Google Cloud Armor to create network security policies. For example, you can create allow lists that let known/required addresses through and deny lists that block known attackers.
    • This dialog shows a typical security policy configuration, where you begin by specifying whether it is an allow list or a deny list and set allow or deny as the rule action. For a deny rule, the appropriate action in this example is to return a 403 error.
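    • For example (the policy name and address range are hypothetical), a deny rule that returns a 403 for a known bad range can be added to a security policy:
      # Create the policy, then add a deny rule for the unwanted source range
      gcloud compute security-policies create my-policy
      gcloud compute security-policies rules create 1000 \
          --security-policy=my-policy \
          --src-ip-ranges=198.51.100.0/24 \
          --action=deny-403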
  • Cloud Armor
    • In addition to Layer 3 and Layer 4 security, Google Cloud Armor supports Layer 7 application rules. For example, predefined rules are provided for cross-site scripting (XSS) and SQL injection attacks. Google Cloud Armor provides a rules language for filtering request traffic. As an example, consider the first expression on this slide: inIpRange(origin.ip, '9.9.9.0/24'). In this case, the expression returns true if the origin IP in a request is within the 9.9.9.0/24 range.
  • Cloud Armor
    • The second line, request.headers['cookie'].contains('80=BLAH'), returns true if the cookie 80 with value BLAH exists in the request header, and the third line is true if the origin region code is AU. The expressions can be combined with logical AND and OR operators. Each expression is assigned to an allow or deny rule that is then applied to incoming traffic.
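    • A sketch of how such an expression is attached to a rule (the policy name is hypothetical; the expression combines two of the conditions above with a logical AND):
      # Allow traffic only when it comes from 9.9.9.0/24 and originates in Australia
      gcloud compute security-policies rules create 2000 \
          --security-policy=my-policy \
          --expression="inIpRange(origin.ip, '9.9.9.0/24') && origin.region_code == 'AU'" \
          --action=allow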
  • Encryption
    • Google Cloud encrypts customer data stored at rest by default, with no additional action required from users. Data is encrypted with a data encryption key (DEK) that uses the AES-256 symmetric algorithm, and the DEK itself is encrypted by Google using a key encryption key (KEK). This allows the DEK to be stored local to the encrypted data for fast decryption with no visible performance impact to the user. To protect the KEKs, they are stored in Cloud KMS. The keys are rotated periodically and automatically for added security.
  • Encryption
    • This diagram shows a simple App Engine application that uses Cloud Storage. The data is encrypted with a DEK using AES-256 and decrypted transparently to the application when the data is read.
  • Now, for compliance reasons, you may need to manage your own encryption keys rather than the automatically generated keys as just discussed. In this scenario, you can use Cloud Key Management Service (or Cloud KMS) to generate what are known as Customer Managed Encryption Keys (CMEK). These keys are stored in Cloud KMS for direct use by Cloud services. You can manually create the key using a dialog similar to the one shown here and specify the rotation frequency, which defaults to 90 days. The keys you create can then be used when creating storage resources such as disks or buckets.
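    • A hedged sketch of creating such a key with gcloud (the key ring, key name, location, and rotation time are hypothetical):
      # Create a key ring and a symmetric encryption key with 90-day rotation
      gcloud kms keyrings create my-keyring --location=us-central1
      gcloud kms keys create my-cmek --keyring=my-keyring --location=us-central1 \
          --purpose=encryption --rotation-period=90d \
          --next-rotation-time=2025-01-01T00:00:00Z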
  • Customer-Supplied Keys
    • When you’re required to generate your own encryption key or manage it on-premises, Google Cloud supports Customer Supplied Encryption Keys (CSEK). Those keys are kept on-premises and not in Google Cloud. The keys are provided as part of API service calls, and Google only keeps the key in memory and uses it to decrypt a single payload or block of returned data. Currently, Customer Supplied Encryption Keys can be used with Cloud Storage and Compute Engine.
  • Data Loss Prevention API
    • You should also consider the Data Loss Prevention API to protect sensitive data by finding it and redacting it. Cloud DLP provides fast, scalable classification and redaction for sensitive data elements like credit card numbers, names, social security numbers, US and selected international identifier numbers, phone numbers, and Google Cloud credentials.
  • Data Loss Prevention API
    • Cloud DLP classifies this data using more than 90 predefined detectors to identify patterns, formats, and checksums, and even understands contextual clues. Some of these are shown on the right. You can optionally redact data as well, using techniques like masking, secure hashing, tokenization, bucketing, and format-preserving encryption.
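    • A hedged sketch of calling the DLP REST API to inspect a piece of text (the project ID and sample text are hypothetical; PHONE_NUMBER and CREDIT_CARD_NUMBER are built-in infoType detectors):
      # Ask the DLP API which sensitive infoTypes appear in the supplied text
      curl -s -X POST \
          -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          -H "Content-Type: application/json" \
          "https://dlp.googleapis.com/v2/projects/example-project/content:inspect" \
          -d '{
                "item": {"value": "Call me at (415) 555-0199"},
                "inspectConfig": {"infoTypes": [{"name": "PHONE_NUMBER"},
                                                {"name": "CREDIT_CARD_NUMBER"}]}
              }'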
  • This diagram illustrates a custom VPC network with two subnets in the US. Maybe us-central1 is our primary region and us-east1 is our backup region. The firewall rules allow HTTPS ingress from the internet and SSH from known sources. All other incoming traffic is blocked by the implied deny-all-ingress firewall rule that every VPC network has.