Network

Cards (44)

    • As part of the load balancer configuration, you can enable Cloud CDN to provide lower latency and decrease network egress, which ultimately decreases your networking costs.
  • Google runs a worldwide network that connects regions all over the world. You can use this high-bandwidth infrastructure to design your cloud networks to meet your requirements such as location, number of users, scalability, fault tolerance, and latency.
  • This map represents Google Cloud’s reach. At a high level, Google Cloud consists of regions (the icons in blue); points of presence, or PoPs (the dots in grey); a global private network (the blue lines); and services.
    • A region is a specific geographical location where you can run your resources. This map shows several regions that are currently operating, as well as future regions and their zones. As of this recording, there are 21 regions and 64 zones.
  • The PoPs are where Google’s network is connected to the rest of the internet. Google Cloud can bring its traffic closer to its peers because it operates an extensive global network of interconnection points. This reduces costs and provides users with a better experience.
    • The network connects regions and PoPs and is composed of a global network of fiber optic cables with several submarine cable investments.
  • In Google Cloud, VPC networks are global, and you can either create auto mode networks that have one subnet per region or create your own custom mode network where you get to specify which region to create a subnet in. Resources across regions can communicate using their internal IP addresses without any added interconnect.
    • For example, the diagram below shows two subnets in different regions with a server on each subnet. They can communicate with each other using their internal IP addresses because they are connected to the same VPC network.
  • Selecting which regions to create subnets in depends on your requirements:
    • For example, if you are a global company, you will most likely create subnetworks in regions across the world. If users are within a particular region, it may be suitable to select just one subnet in a region closest to these users and maybe a backup region close by. Also, you can have multiple networks per project. These networks are just a collection of regional subnetworks or subnets.
    • To create custom subnets you specify the region and the internal IP address range, as illustrated in the screenshots below. The IP ranges of these subnets don't need to be derived from a single CIDR block, but they cannot overlap with other subnets of the same VPC network. This applies to primary and secondary ranges. Secondary ranges allow you to define alias IP addresses.
    • Also, you can expand the primary IP address space of any subnet without any workload shutdown or downtime. Once you have defined your subnets, machines in the same VPC network can communicate with each other through their internal IP addresses regardless of the subnet they are connected to.
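    • As a sketch of the commands involved (all names and CIDR ranges here are hypothetical), a custom-mode network with one subnet and a secondary range could be created with gcloud:

```shell
# Create a custom-mode VPC network (hypothetical names and ranges throughout).
gcloud compute networks create my-vpc --subnet-mode=custom

# Add a subnet with a primary range and a secondary range for alias IPs.
gcloud compute networks subnets create subnet-us \
    --network=my-vpc --region=us-central1 \
    --range=10.0.0.0/24 --secondary-range=pods=10.1.0.0/20

# Expand the primary range in place -- no workload downtime required.
# The new prefix can only be shorter (a larger range) than the old one.
gcloud compute networks subnets expand-ip-range subnet-us \
    --region=us-central1 --prefix-length=20
```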
  • Now, a single VM can have multiple network interfaces connecting to different VPC networks. This graphic illustrates an example of a Compute Engine instance connected to four different networks covering production, test, infra, and an outbound network.
    • A VM must have at least one network interface but can have up to 8, depending on the instance type and the number of vCPUs. A general rule is that with more vCPUs, more network interfaces are possible. All of the network interfaces must be created when the instance is created, and each interface must be attached to a different network.
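    • A minimal sketch of creating a multi-NIC instance with gcloud (the network and subnet names are hypothetical, and each interface must reference a different network):

```shell
# Each --network-interface flag attaches the VM to a different VPC network;
# no-address omits the external IP on the second interface.
gcloud compute instances create multi-nic-vm --zone=us-central1-a \
    --network-interface=network=prod-net,subnet=prod-subnet \
    --network-interface=network=test-net,subnet=test-subnet,no-address
```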
  • Shared VPC:
    • Allows an organization to connect resources from multiple projects of a single organization to a common VPC network. This allows the resources to communicate with each other securely and efficiently using internal IPs from that network.
    • This graphic shows a scenario where a shared VPC is used by three other projects, namely service projects A, B, and C. Each of these projects has a VM instance that is attached to the Shared VPC.
  • Shared VPC
    • Is a centralized approach to multi-project networking, because security and network policy enforcement occur in a single designated VPC network. This allows network administrator rights to be removed from developers so they can focus on what they do best. Meanwhile, organization network administrators maintain control of resources such as subnets, firewall rules, and routes, while delegating the creation of resources such as instances to service project administrators or developers.
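    • Designating the host project and attaching a service project can be sketched with gcloud (the project IDs are hypothetical; this requires the Shared VPC Admin role granted at the organization or folder level):

```shell
# Enable the host project, then attach a service project to it.
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-a \
    --host-project=host-project-id
```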
  • Global load balancers
    • Provide access to services deployed in multiple regions. For example, the load balancer shown on this slide has a backend with two instance groups deployed in different regions. Cloud Load Balancing is used to distribute the load among these instance groups.
    • Global load balancing is supported by HTTP load balancers and TCP and SSL proxies in Google Cloud. For an HTTP load balancer, a global anycast IP address can be used, simplifying DNS lookup. By default, requests are routed to the region closest to the requestor.
    • For services deployed in a single region, use a regional load balancer. This graphic illustrates resources deployed within a single region and Cloud Load Balancing routing requests to those resources. Regional load balancers support HTTP(S) and any TCP or UDP port.
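    • The global HTTP load balancer described above can be sketched with gcloud as follows (all resource names are hypothetical, and the two regional instance groups are assumed to exist already):

```shell
# Health check and global backend service (hypothetical names throughout).
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-bes \
    --protocol=HTTP --health-checks=web-hc --global

# Attach instance groups in two regions as backends.
gcloud compute backend-services add-backend web-bes \
    --instance-group=web-ig-us --instance-group-region=us-central1 --global
gcloud compute backend-services add-backend web-bes \
    --instance-group=web-ig-eu --instance-group-region=europe-west1 --global

# URL map, proxy, and a global forwarding rule (the anycast IP).
gcloud compute url-maps create web-map --default-service=web-bes
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr \
    --global --target-http-proxy=web-proxy --ports=80
```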
  • If your load balancers have public IP addresses, traffic will likely be traversing the internet. I recommend securing this traffic with SSL, which is available for HTTP and TCP load balancers as shown in the screenshot on the right.
    • You can use either self-managed SSL certificates or Google-managed SSL certificates when using SSL.
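    • As a sketch, a Google-managed certificate can be created and attached to an HTTPS proxy like this (the domain and the URL map name are hypothetical, and the URL map is assumed to exist):

```shell
# Google-managed certificate; provisioning completes once DNS points
# the domain at the load balancer's IP address.
gcloud compute ssl-certificates create www-cert \
    --domains=www.example.com --global
gcloud compute target-https-proxies create web-https-proxy \
    --url-map=web-map --ssl-certificates=www-cert
```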
  • Now, if you are using HTTP(S) load balancing, you should leverage Cloud CDN to achieve lower latency and decreased egress costs. You can enable Cloud CDN by simply checking a box when configuring an HTTP(S) global load balancer. Cloud CDN caches content across the world using Google Cloud’s edge-caching locations. This means that content is cached closest to the users making the requests.
    • The data that is cached can be from a variety of sources, including Compute Engine instances, GKE pods, or Cloud Storage buckets.
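    • Outside the console checkbox, Cloud CDN can also be toggled on an existing backend service with gcloud (the service name is hypothetical):

```shell
# Enable Cloud CDN on an existing global backend service.
gcloud compute backend-services update web-bes --enable-cdn --global
```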
  • Load Balancers:
    • At a high level, load balancers can work with internal or external IP addresses. We refer to external IP addresses as internet-facing. The load balancers can be regional or multi-regional, and finally, they support different traffic types: HTTP, TCP, and UDP.
    • HTTP(S) load balancing is a layer 7 load balancer. Support is provided for HTTP and HTTPS including HTTP/2. The load balancing supports both internet-facing and internal load balancing, as well as regional or global.
    • TCP load balancing provides layer 4 balancing or proxy for applications that require the TCP/SSL protocol. You can configure a TCP load balancer or a TCP or SSL proxy. TCP load balancing supports both internet-facing and internal load balancing as well as regional and global.
    • UDP load balancing is for those applications that rely on UDP as a protocol. The UDP load balancer supports both internet-facing and internal load balancing but only regional traffic.
  • Network Intelligence Center
    • Network Intelligence Center is a Google Cloud service that can be used to visualize the topology of your VPC networks and to test network connectivity.
    • This facility is extremely valuable for confirming the network topology when configuring a network or when performing diagnostics. The right-hand graphic shows the configuration of a connectivity test between a source and destination along with a protocol and port.
  • Network Intelligence Center:
    • The following tests can be performed:
    • Between source and destination endpoints in your Virtual Private Cloud (VPC) network
    • From your VPC network to and from the internet
    • From your VPC network to and from your on-premises network
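    • A VM-to-VM connectivity test can be sketched with gcloud (the project, zone, and instance names are hypothetical):

```shell
# Create a connectivity test, then inspect its reachability result.
gcloud network-management connectivity-tests create vm-to-vm-test \
    --source-instance=projects/my-project/zones/us-central1-a/instances/vm-a \
    --destination-instance=projects/my-project/zones/europe-west1-b/instances/vm-b \
    --protocol=TCP --destination-port=80
gcloud network-management connectivity-tests describe vm-to-vm-test
```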
  • VPC Peering:
    • If you’re trying to connect two VPC networks, you might want to consider VPC peering. VPC Peering allows private RFC 1918 connectivity across two VPC networks, regardless of whether they belong to the same project or the same organization. Now, remember that each VPC network will have firewall rules that define what traffic is allowed or denied between the networks.
  • VPC Peering:
    • You might notice that the subnet ranges do not overlap. This is a requirement for a connection to be established. Speaking of the connection, network administrators for each VPC network must configure a VPC peering request for a connection to be established.
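    • Because each side must request the peering, the configuration amounts to two gcloud commands, one run per project (network and project names are hypothetical):

```shell
# Run in project-a: peer net-a with net-b.
gcloud compute networks peerings create peer-ab \
    --network=net-a --peer-project=project-b --peer-network=net-b

# Run in project-b: the peering becomes ACTIVE once both sides exist.
gcloud compute networks peerings create peer-ba \
    --network=net-b --peer-project=project-a --peer-network=net-a
```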
  • Cloud VPN
    • Securely connects your on-premises network to your Google Cloud VPC network through an IPsec VPN tunnel. Traffic traveling between the two networks is encrypted by one VPN gateway, then decrypted by the other VPN gateway. This protects your data as it travels over the public internet, and that’s why Cloud VPN is useful for low-volume data connections.
  • Cloud VPN:
    • As a managed service, Cloud VPN provides an SLA of 99.9% monthly uptime for the Classic VPN configuration, and 99.99% monthly uptime for the High-availability (HA) VPN configuration. Classic VPN gateways have a single interface and a single external IP address, whereas HA VPN gateways have two interfaces with two external IP addresses (one per interface).
    • The choice of VPN gateway comes down to your SLA requirement and routing options.
  • Cloud VPN
    • Supports site-to-site VPN, static routes, and dynamic routes using Cloud Router, as well as IKEv1 and IKEv2 ciphers. However, static routes are only supported by Classic VPN.
    • Also, Cloud VPN doesn't support use cases where client computers need to “dial in” to a VPN using client VPN software.
  • Cloud VPN Topology:
    • These resources are able to communicate using their internal IP addresses because routing within a network is automatically configured (assuming that firewall rules allow the communication).
    • Now, in order to connect to your on-premises network and its resources, you need to configure your Cloud VPN gateway, on-premises VPN gateway, and two VPN tunnels. The Cloud VPN gateway is a regional resource that uses a regional external IP address.
  • Cloud VPN Topology:
    • Your on-premises VPN gateway can be a physical device in your data center or a physical or software-based VPN offering in another cloud provider's network. This VPN gateway also has an external IP address.
    • A VPN tunnel then connects your VPN gateways and serves as the virtual medium through which encrypted traffic is passed. In order to create a connection between two VPN gateways, you must establish two VPN tunnels. Each tunnel defines the connection from the perspective of its gateway, and traffic can only pass when the pair of tunnels is established.
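    • A Classic VPN tunnel with a static route back to on-premises can be sketched with gcloud (all names, addresses, and ranges are hypothetical):

```shell
# Classic VPN gateway plus the three required forwarding rules (ESP, UDP 500/4500).
gcloud compute target-vpn-gateways create classic-gw \
    --network=my-vpc --region=us-central1
gcloud compute forwarding-rules create fr-esp --region=us-central1 \
    --address=203.0.113.10 --ip-protocol=ESP --target-vpn-gateway=classic-gw
gcloud compute forwarding-rules create fr-udp500 --region=us-central1 \
    --address=203.0.113.10 --ip-protocol=UDP --ports=500 \
    --target-vpn-gateway=classic-gw
gcloud compute forwarding-rules create fr-udp4500 --region=us-central1 \
    --address=203.0.113.10 --ip-protocol=UDP --ports=4500 \
    --target-vpn-gateway=classic-gw

# Tunnel to the on-premises gateway, then a static route through it.
gcloud compute vpn-tunnels create tunnel1 --region=us-central1 \
    --peer-address=198.51.100.1 --shared-secret=SECRET \
    --target-vpn-gateway=classic-gw --ike-version=2 \
    --local-traffic-selector=10.0.0.0/24 --remote-traffic-selector=192.168.0.0/24
gcloud compute routes create route-to-onprem --network=my-vpc \
    --destination-range=192.168.0.0/24 \
    --next-hop-vpn-tunnel=tunnel1 --next-hop-vpn-tunnel-region=us-central1
```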
  • Cloud VPN:
    • Now, one thing to remember when using Cloud VPN is that the maximum transmission unit (MTU) for your on-premises VPN gateway cannot be greater than 1460 bytes. This is because of the encryption and encapsulation of packets.
    • In addition to Classic VPN, Google Cloud also offers a second type of Cloud VPN gateway, HA VPN.
  • HA VPN is a high availability Cloud VPN solution that lets you securely connect your on-premises network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection in a single region. HA VPN provides an SLA of 99.99% service availability. To guarantee a 99.99% availability SLA for HA VPN connections, you must properly configure two or four tunnels from your HA VPN gateway to your peer VPN gateway or to another HA VPN gateway.
  • HA VPN:
    • When you create an HA VPN gateway, Google Cloud automatically chooses two external IP addresses, one for each of its fixed number of two interfaces. Each IP address is automatically chosen from a unique address pool to support high availability.
    • Each of the HA VPN gateway interfaces supports multiple tunnels. You can also create multiple HA VPN gateways. When you delete the HA VPN gateway, Google Cloud releases the IP addresses for reuse.
  • HA VPN:
    • You can configure an HA VPN gateway with only one active interface and one external IP address; however, this configuration does not provide a 99.99% service availability SLA. VPN tunnels connected to HA VPN gateways must use dynamic (BGP) routing. Depending on the way that you configure route priorities for HA VPN tunnels, you can create an active/active or active/passive routing configuration.
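    • An HA VPN gateway with two tunnels to an external peer can be sketched with gcloud (names, peer IPs, and the shared secret are hypothetical; the Cloud Router is required because HA VPN tunnels only support BGP routing):

```shell
# HA VPN gateway (its two interfaces are created automatically)
# and a resource describing the peer gateway's two interfaces.
gcloud compute vpn-gateways create ha-gw --network=my-vpc --region=us-central1
gcloud compute external-vpn-gateways create peer-gw \
    --interfaces=0=198.51.100.1,1=198.51.100.2

# Cloud Router for dynamic (BGP) routing.
gcloud compute routers create vpn-router \
    --network=my-vpc --region=us-central1 --asn=65001

# One tunnel per HA VPN interface, matched to a peer interface.
gcloud compute vpn-tunnels create tunnel0 --region=us-central1 \
    --vpn-gateway=ha-gw --interface=0 \
    --peer-external-gateway=peer-gw --peer-external-gateway-interface=0 \
    --router=vpn-router --ike-version=2 --shared-secret=SECRET
gcloud compute vpn-tunnels create tunnel1 --region=us-central1 \
    --vpn-gateway=ha-gw --interface=1 \
    --peer-external-gateway=peer-gw --peer-external-gateway-interface=1 \
    --router=vpn-router --ike-version=2 --shared-secret=SECRET
```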
  • HA VPN supports site-to-site VPN in one of the following recommended topologies or configuration scenarios:
    • An HA VPN gateway to peer VPN devices
    • An HA VPN gateway to an Amazon Web Services (AWS) virtual private gateway
    • Two HA VPN gateways connected to each other
    • There are three typical peer gateway configurations for HA VPN:
      • An HA VPN gateway to two separate peer VPN devices, each with its own IP address
      • An HA VPN gateway to one peer VPN device that uses two separate IP addresses
      • An HA VPN gateway to one peer VPN device that uses one IP address
    • In this topology, one HA VPN gateway connects to two peer devices. Each peer device has one interface and one external IP address. The HA VPN gateway uses two tunnels, one tunnel to each peer device. If your peer-side gateway is hardware-based, having a second peer-side gateway provides redundancy and failover on that side of the connection.
    • A second physical gateway lets you take one of the gateways offline for software upgrades or other scheduled maintenance. It also protects you if there is a failure in one of the devices.
    • When configuring an HA VPN external VPN gateway to Amazon Web Services (AWS), you can use either a transit gateway or a virtual private gateway. Only the transit gateway supports equal-cost multipath (ECMP) routing. When enabled, ECMP equally distributes traffic across active tunnels. Let’s walk through an example.
  • In this topology, there are three major gateway components to set up for this configuration. An HA VPN gateway in Google Cloud with two interfaces, two AWS virtual private gateways, which connect to your HA VPN gateway, and an external VPN gateway resource in Google Cloud that represents your AWS virtual private gateway. This resource provides information to Google Cloud about your AWS gateway.
    • The supported AWS configuration uses a total of four tunnels. Two tunnels from one AWS virtual private gateway to one interface of the HA VPN gateway, and two tunnels from the other AWS virtual private gateway to the other interface of the HA VPN gateway.
    • You can connect two Google Cloud VPC networks together by using an HA VPN gateway in each network. The configuration shown provides 99.99% availability. From the perspective of each HA VPN gateway you create two tunnels: you connect interface 0 on one HA VPN gateway to interface 0 on the other HA VPN gateway, and interface 1 on one HA VPN gateway to interface 1 on the other HA VPN gateway.
  • Cloud VPN supports both static and dynamic routes. In order to use dynamic routes, you need to configure Cloud Routers. A Cloud Router can manage routes for a Cloud VPN tunnel using Border Gateway Protocol, or BGP. This routing method allows for routes to be updated and exchanged without changing the tunnel configuration.
    • This allows for new subnets like staging in the VPC network and Rack 30 in the peer network to be seamlessly advertised between networks.
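    • The BGP sessions over the tunnels are configured on the Cloud Router; a sketch with hypothetical names, link-local addresses, and ASNs:

```shell
# Bind a router interface to the VPN tunnel, then add the BGP peer
# on the other end of the /30 link-local pair.
gcloud compute routers add-interface vpn-router --region=us-central1 \
    --interface-name=if-tunnel0 --vpn-tunnel=tunnel0 \
    --ip-address=169.254.0.1 --mask-length=30
gcloud compute routers add-bgp-peer vpn-router --region=us-central1 \
    --peer-name=peer0 --interface=if-tunnel0 \
    --peer-ip-address=169.254.0.2 --peer-asn=65002
```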
  • If you need a dedicated high speed connection between networks, consider using Cloud Interconnect. Cloud Interconnect has two options for extending on-premises networks: Dedicated Interconnect and Partner Interconnect.
    • Dedicated Interconnect provides a direct connection to a colocation facility. The colocation facility must support either 10 Gbps or 100 Gbps circuits, and a dedicated connection can bundle up to eight 10 Gbps connections (80 Gbps) or two 100 Gbps connections, for a maximum of 200 Gbps.
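    • After the physical circuit is provisioned, a VLAN attachment connects it to a VPC network; a rough gcloud sketch (all names are hypothetical, and the interconnect itself is assumed to already exist):

```shell
# Cloud Router for the attachment, then a dedicated VLAN attachment
# tying the interconnect to the VPC network in one region.
gcloud compute routers create ic-router \
    --network=my-vpc --region=us-central1 --asn=65001
gcloud compute interconnects attachments dedicated create my-attachment \
    --region=us-central1 --router=ic-router --interconnect=my-ic
```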