Cloud & Virtualization

Cards (41)

  • Virtualization
    • Virtualization usually refers to running software on a computer to create a virtual machine (VM), an environment that imitates a physical machine.
    • You can install and run an operating system in this environment as if it were its own physical computer. In a networking sense, a single physical machine could house multiple virtual machines, each running a different networking task, such as DHCP, DNS, firewall, VPN, and so on.
  • Virtualization
    • Hardware underutilization means that a server (or a desktop PC, for that matter) isn’t being used anywhere near its capability.
    • Well, we need the servers to do more with the hardware they have. Luckily for us, PC processors, using roughly the same set of functions that enables your CPU to multitask many applications at once, can instead multitask a bunch of virtual machines.
    • Virtualization enables one machine—called the host—to run multiple operating systems simultaneously. This is called hardware virtualization.
  • Virtualization
    • A virtual machine is a special program, running in protected space just like any other program, that enables all the features of the server (RAM, CPU, drive space, peripherals) to run as though they are each separate computers (ergo, virtual machines).
    • The program used to create, run, and maintain the VMs is called a hypervisor.
    • You can install and run an operating system in this guest virtual environment as if it were its own physical computer. The VM is stored as a set of files that the hypervisor can load and save with changes.
  • Virtualization: Hypervisors
    • There are two types of hypervisors.
    • A Type 1 hypervisor is installed on the system in lieu of an operating system. A Type 2 hypervisor is installed on top of the operating system.
    • Type 1 hypervisors are the most common for hardworking VMs. Type 2 hypervisors are popular for learning about virtualization and don’t require a dedicated system.
  • Abstraction
    • Modern networks separate form from function on many levels, a practice captured by the jargon word abstraction.
    • To abstract means to remove or separate. In networking terms, it means to take one aspect of a device or process and separate it from that device or process.
    • You’ve seen this in hardware virtualization, with an operating system separated from hardware, being placed in a virtual machine. This is abstraction at work. The OS is abstracted (removed a step) from the bare metal, the computing hardware.
  • Abstraction
    • The Internet Protocol is an abstraction that spares most applications from needing to know or care how their messages get from one place to another—and Ethernet is also an abstraction that spares IP from needing to know how to send those messages out on the wire.
    • Abstraction—and particularly the process of abstracting something complicated into layers that let us focus on a few issues at a time—plays an important role in how network people manage complexity.
  • Virtualizations
    • Virtual memory replaces direct access to physical RAM with a layer of software that enables the OS to move a running program’s memory to the swap file and back dynamically without the program’s awareness.
    • Hardware virtualization replaces all the physical devices an OS needs with software versions that the hypervisor controls. The hypervisor can use this control to redistribute the host’s hardware resources among running VMs on-the-fly.
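The virtual-memory bullet above can be sketched as a tiny pager. This is a toy illustration, not a real OS pager: a hypothetical page table indirects every access, so the "program" never knows whether a page currently lives in RAM or in the swap file.

```python
# Toy pager sketch: a page table maps virtual pages to RAM frames, and the
# "OS" moves pages to swap and back without the caller being aware of it.
class TinyPager:
    def __init__(self, ram_frames):
        self.ram = {}            # frame number -> page data
        self.swap = {}           # virtual page -> page data (swapped out)
        self.table = {}          # virtual page -> frame number (resident)
        self.free = list(range(ram_frames))

    def write(self, vpage, data):
        self._ensure_resident(vpage)
        self.ram[self.table[vpage]] = data

    def read(self, vpage):
        self._ensure_resident(vpage)     # transparent swap-in on a "page fault"
        return self.ram[self.table[vpage]]

    def _ensure_resident(self, vpage):
        if vpage in self.table:
            return
        if not self.free:                # no free frame: evict one page to swap
            victim, frame = next(iter(self.table.items()))
            self.swap[victim] = self.ram.pop(frame)
            del self.table[victim]
            self.free.append(frame)
        frame = self.free.pop()
        self.table[vpage] = frame
        self.ram[frame] = self.swap.pop(vpage, b"")

pager = TinyPager(ram_frames=2)
pager.write(0, b"alpha")
pager.write(1, b"beta")
pager.write(2, b"gamma")          # forces page 0 out to swap
assert pager.read(0) == b"alpha"  # swapped back in transparently
```

Hardware virtualization applies the same move one level down: the hypervisor sits between the guest OS and the physical devices the way this page table sits between the program and physical RAM.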
  • Virtualizations: Networks
    • Network virtualization creates software versions of networking functions—such as DNS, firewalls, and intrusion detection systems—that used to require dedicated network hardware boxes.
    • The network operating system interacts with network functions just as it always did. Those functions—now software—in turn work with a hypervisor that interfaces with real hardware.
    • A hypervisor can use a virtual network, in the form of a virtual switch, to enable multiple VMs it hosts to communicate without the frames ever leaving the host machine.
  • Virtualizations: Containers
    • With containerization, an operating system creates a self-contained environment for an application. This environment includes all the software the application needs to run as if it were the only application running on a clean operating system.
    • Containers use fewer resources than VMs, but they’re stuck with the same OS as their host and are less isolated from each other.
  • Virtualization: Flexibility
    Reroute an Input or Output
    • When you create a new virtual machine from scratch, you’ll need to install an OS in it. On boring old physical computers, you stick the installation media in a physical USB slot or optical drive.
    • A VM has no such physical ports or drives. It has virtual equivalents that you can route input to from either the host’s physical slot/drive or from a disk image file saved on the host.
  • Virtualization: Flexibility
    Relocate Components
    • Virtualization makes it possible to move systems and their parts around in all kinds of creative ways.
    • The VM is just a file. If it needs a faster processor or more RAM, just move the file to an upgraded machine running a compatible hypervisor.
    • Virtualization can also separate components that normally appear together. You can, for example, collect the hard drives that would normally live in each server case and relocate them to a small number of powerful centralized storage servers. This leads to the next point.
  • Virtualization: Flexibility
    Divide and Combine Resources
    • Virtualization enables you to pool resources and reallocate them as needed. Virtualizing a component such as RAM, a CPU, or a hard drive makes it possible to divide that resource up and give each VM its own fraction.
    • It also means the inverse. A virtualized hard drive backed by multiple physical drives, for example, can function as a single drive larger than anything you could buy off the shelf.
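The combine-resources direction can be sketched as a logical drive that concatenates several smaller backing stores into one larger address space. This is a toy illustration (plain byte arrays standing in for disks), not a real volume manager:

```python
# Toy illustration: one logical byte-addressable "drive" spanning several
# smaller physical backing stores (here, plain bytearrays standing in for disks).
class SpannedDrive:
    def __init__(self, backing_sizes):
        self.disks = [bytearray(n) for n in backing_sizes]

    @property
    def size(self):
        return sum(len(d) for d in self.disks)   # larger than any single disk

    def _locate(self, offset):
        for disk in self.disks:                  # map a logical offset to
            if offset < len(disk):               # (disk, local offset)
                return disk, offset
            offset -= len(disk)
        raise IndexError("offset past end of logical drive")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            disk, local = self._locate(offset + i)
            disk[local] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            disk, local = self._locate(offset + i)
            out.append(disk[local])
        return bytes(out)

drive = SpannedDrive([4, 4])        # two 4-byte "disks" -> one 8-byte drive
drive.write(2, b"spans")            # this write crosses the disk boundary
assert drive.read(2, 5) == b"spans"
assert drive.size == 8
```

The same indirection works in reverse for dividing a resource: hand each VM a slice of the logical address space instead of a whole physical device.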
  • Virtualization: Flexibility
    Scaling
    • The benefits of aggressive virtualization become readily apparent at scale.
    • Powerful physical servers can host many VMs of different sizes (dozens if they are all small), making it possible to run enormous fleets of VMs on a small fraction of the physical machines they would otherwise require.
    • A data center operator can analyze resource use and move VMs around to balance hardware use around the clock more efficiently or to power down idle systems. Even tiny tweaks can save millions of dollars a year at data center scale.
  • Cloud Computing
    • Cloud computing moves the specialized machines used in classic and virtualized networking “out there somewhere,” using the Internet to connect an organization to other organizations that manage aspects of its network.
    • The cloud is more like a cafeteria of computing and networking resources—an à la carte data center enhanced by layers of powerful services and software.
  • Infrastructure as Code
    • Infrastructure as code (IaC), in a nutshell, abstracts the infrastructure of an application or service into a set of configuration files or scripts.
    • First, we’ll look at the hurdles organizations face. Second, we’ll explore automation options. We’ll finish with a concept called orchestration that helps organizations apply software-driven flexibility to problems.
  • IaC: Automation
    • Over time, all-too-human mistakes and miscommunication tend to snowball until software that works fine on one server is a source of hard-to-troubleshoot bugs on another.
    • The broad solution to this problem is to replace tasks people do manually with automation—using code to set up (provision) and maintain systems (installing software updates, for example) in a consistent manner, with no mistyped commands.
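The provisioning idea can be sketched in a few lines. This is a hypothetical example, not any real tool's API: the desired state is plain data, and an "apply" step computes only the changes needed to converge on it, so re-running the same code never snowballs into drift.

```python
# Sketch of idempotent provisioning: desired state is data, and "apply"
# performs only the changes needed, so running it twice is harmless.
DESIRED_PACKAGES = {"nginx": "1.24", "openssl": "3.0"}   # hypothetical config

def apply(installed, desired):
    """Converge 'installed' on 'desired'; return the actions that were needed."""
    actions = []
    for name, version in desired.items():
        if installed.get(name) != version:
            actions.append(("install", name, version))
            installed[name] = version
    return actions

server = {"openssl": "1.1"}                   # a server that has drifted
assert apply(server, DESIRED_PACKAGES) == [
    ("install", "nginx", "1.24"), ("install", "openssl", "3.0")]
assert apply(server, DESIRED_PACKAGES) == []  # second run: already converged
```

Because the desired state is just data, it can be version-controlled and reviewed like any other code—which is exactly the benefit the next card describes.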
  • IaC: Benefits
    • The ability to create identical copies of the necessary infrastructure makes it easy for people working on the application to create a development environment or test environment
    • You can save the code/scripts for creating and configuring the infrastructure in a source/version control system alongside the rest of the application’s code, making it easier to ensure the infrastructure and application are compatible.
    • You can also carefully review changes before rolling them out, helping catch configuration mistakes before they are applied to real, working systems.
  • Orchestration
    • Orchestration combines automated processes into bigger, multifaceted processes called pipelines or workflows. Orchestration streamlines development, testing, deploying, or maintaining, depending on the specific workflow.
    • continuous integration/continuous deployment (CI/CD). When developers check in changes to the application’s code, they trigger an automatic multistep pipeline that can build the application, set up a temporary copy of the infrastructure the app needs, and run a bunch of automated checks to ensure it works.
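The CI/CD pipeline described above can be sketched as an ordered list of automated steps that runs until one fails. The step names here are hypothetical placeholders:

```python
# Sketch of a CI/CD pipeline: ordered, automated steps that stop on failure,
# so a broken build or failing check never reaches the deploy step.
def run_pipeline(steps):
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:            # a failed step halts the whole pipeline
            break
    return results

pipeline = [
    ("build",            lambda: True),   # compile/package the app
    ("provision test",   lambda: True),   # temporary copy of the infrastructure
    ("automated checks", lambda: False),  # a failing check...
    ("deploy",           lambda: True),   # ...means deploy never runs
]
results = run_pipeline(pipeline)
assert results == [("build", True), ("provision test", True),
                   ("automated checks", False)]
```

Real orchestrators add parallelism, retries, and approvals, but the core idea is this composition of smaller automated processes.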
  • Virtual Networking
    • Virtualization is transforming networking, so that software performs classic network functions (e.g., VLANs).
    • VLAN capable switches use a software layer to create and modify VLANs.
  • Virtual Networking Inside the VM Host
    • A hulking server hosting a few dozen VMs can have the same kind of complex networking requirements you’d have on a physical network that hosts a few dozen important servers.
    • Some of the VMs may need to share a private network to collaborate; some may need to serve requests from the open Internet; others may need to be isolated from the network. And the server needs to rapidly reconfigure the network as VMs come and go.
  • Virtual Networking Inside the VM Host
    • Virtual networking inside the host is one way to meet these needs. Hypervisors tend to come with their own basic networking capabilities such as built-in switching.
    • If you have networking needs that the built-in features don’t cover, you can run other network functions as VMs (on the same hypervisor as any VMs they support, when you can).
  • Virtual Switches
    • You have three virtual machines and you need these VMs to have access to the Internet. Therefore, you need to give them all legitimate IP addresses. The oldest and simplest way is to bridge the NIC.
    • Each bridge is a software connection that passes traffic from the real NIC to a virtual one. This bridge works at Layer 2 of the OSI model, so each virtual network interface card (vNIC) gets a legitimate, unique MAC address.
  • Virtual Switches
    • The technology at work here is the virtual switch (or vSwitch): software that does the same Layer 2 switching a hardware switch does, including features like VLANs. The big difference is what it means to “plug” into the virtual switch.
    • When the NICs are bridged, the VMs and the host’s NIC are all connected to the virtual switch. In this mode, think of the physical NIC as the uplink port on a hardware switch. But just like physical networks, we need more than just Layer 2 switching. That’s where virtual routers and firewalls come in.
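A minimal sketch of the Layer 2 behavior a vSwitch shares with a hardware switch: it learns which "port" (a vNIC or the bridged physical NIC acting as uplink) each source MAC lives on, forwards known destinations out one port, and floods unknown destinations everywhere else.

```python
# Minimal Layer 2 switch sketch: learn source MACs per port, forward known
# destinations out one port, flood unknown destinations out every other port.
class VSwitch:
    def __init__(self, ports):
        self.ports = ports          # e.g. vNICs plus the bridged physical NIC
        self.mac_table = {}         # MAC address -> port it was last seen on

    def frame_in(self, port, src_mac, dst_mac):
        self.mac_table[src_mac] = port           # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward out one known port
        return [p for p in self.ports if p != port]  # flood (not the ingress)

sw = VSwitch(["vm1", "vm2", "uplink"])
assert sw.frame_in("vm1", "aa:aa", "bb:bb") == ["vm2", "uplink"]  # flooded
assert sw.frame_in("vm2", "bb:bb", "aa:aa") == ["vm1"]            # learned
```

Notice that a frame between two VMs never touches the "uplink" once both MACs are learned—traffic between hosted VMs stays inside the host machine.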
  • Distributed Switches
    • Virtual switches normally use a Web interface for configuration, just like a regular switch.
    • The centralized installation, configuration, and handling of every switch in a network is known as distributed switching. Every hypervisor has some form of central configuration of critical issues for switches, such as VLAN assignment and trunking.
  • Virtual Routers and Firewalls
    • Virtual routers let us dynamically reconfigure networks. This lets the network keep up when VMs are moved from host to host to meet demand or improve resource use. The virtual routers are just VMs like any other; we can allocate more resources to them as traffic grows, instead of having to buy bigger, better physical routers.
    • When it comes to firewalls, the same rules apply: virtual firewalls can protect servers where inserting a physical one would be hard, costly, or impossible.
  • Network Function Virtualization
    • NFV is a network architecture (a collection of patterns that generally—not specifically—describe how to design a network that achieves a specific set of goals), not an actual feature that you can implement.
    • The first (and biggest) piece of an NFV architecture is the network function virtualization infrastructure (NFVI): the hardware (x86-64 servers, storage arrays, and switches) and software (like hypervisors and controllers) that form the foundation of a virtual network.
  • Network Function Virtualization
    • The NFVI is where network functions (firewalls, load balancers, routers, and so on), appropriately called virtualized network functions (VNFs), run.
    • A VNF can be composed of one or more interconnected VMs (or containers)—called VNF components (VNFCs)—that collectively work as a VNF such as a VPN concentrator or firewall.
  • Network Function Virtualization versus Software-Defined Networking
    • Traditionally, hardware routers and switches were designed with two closely integrated parts: a control plane that decides how to move traffic, and a data plane that executes those decisions.
    • The control plane on a router is what you log into to configure the router and is what runs the software that actually speaks routing protocols like OSPF and BGP and builds the routing table that it gives to the data plane.
    • The router’s data plane reads incoming packets and uses the routing table to send them to their destination.
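The division of labor between the two planes can be sketched in a few lines. Here a control-plane function builds the routing table from static routes (standing in for OSPF/BGP), and the data plane does nothing but longest-prefix lookups against it:

```python
import ipaddress

# Control plane: build the routing table (static routes stand in for OSPF/BGP).
# Data plane: longest-prefix match against that table, and nothing more.
def build_routing_table(static_routes):
    table = [(ipaddress.ip_network(prefix), next_hop)
             for prefix, next_hop in static_routes]
    table.sort(key=lambda r: r[0].prefixlen, reverse=True)  # longest prefix first
    return table

def forward(table, dst):
    addr = ipaddress.ip_address(dst)
    for network, next_hop in table:
        if addr in network:
            return next_hop
    return None                       # no matching route: drop the packet

table = build_routing_table([
    ("0.0.0.0/0",   "isp"),           # default route
    ("10.0.0.0/8",  "core"),
    ("10.1.2.0/24", "branch"),
])
assert forward(table, "10.1.2.7") == "branch"   # most specific route wins
assert forward(table, "10.9.9.9") == "core"
assert forward(table, "8.8.8.8")  == "isp"
```

Because the data plane only consumes the table, the part that builds the table can live somewhere else entirely—which is exactly the door SDN walks through.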
  • Software-defined networking (SDN)
    • Software-defined networking (SDN) cuts the control layer of individual devices out of the picture and lets an all-knowing program called a network controller dictate how both physical and virtual network components move traffic through the network.
    • SDN requires components (think routers, switches, firewalls) with a data layer (also called the infrastructure layer) designed to take instructions from the network controller instead of their own control plane.
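The controller/device relationship can be sketched like this (a hypothetical API, not OpenFlow or any vendor's real protocol): each device keeps only a data plane—a dumb match/action table—and the central controller programs every device's table at once.

```python
# SDN sketch: devices keep only a data plane (a dumb match/action table);
# one central network controller programs every device's table.
class DataPlaneSwitch:
    def __init__(self):
        self.flow_table = {}            # match (destination) -> action (port)

    def handle(self, dst):
        # Unknown traffic gets punted to the controller for a decision.
        return self.flow_table.get(dst, "send-to-controller")

class NetworkController:
    def __init__(self, devices):
        self.devices = devices

    def install_rule(self, dst, action):
        for device in self.devices:     # one program, network-wide behavior
            device.flow_table[dst] = action

edge, core = DataPlaneSwitch(), DataPlaneSwitch()
controller = NetworkController([edge, core])
assert edge.handle("10.0.0.5") == "send-to-controller"   # unknown traffic
controller.install_rule("10.0.0.5", "port2")
assert edge.handle("10.0.0.5") == "port2"                # both switches
assert core.handle("10.0.0.5") == "port2"                # now agree
```

The programmability the next card highlights lives in that `install_rule` loop: code running at the controller shapes the behavior of the entire network.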
  • Software-defined networking (SDN)
    • While it matters that SDN allows for a master controller (large networks may also distribute the controller’s workload over multiple servers), the revolutionary idea behind SDN is that the network controller is programmable. Programmers can write code (or use software designed by others) that controls how the entire network behaves.
    • To manage the complexity that comes with separating the infrastructure and control planes, SDN introduces a few new planes for us to keep straight.
  • Software-defined networking (SDN)
    • The most important are the management plane and the application plane. The management (or administration) layer is responsible for setting up network devices to get their marching orders from the right controller.
    • The application layer, which sits on top of this entire framework, is where the network-behavior-controlling software runs. Applications that run here often do jobs like load balancing, optimizing the flow of traffic, monitoring, enforcing security policy, threat protection, and so on.
  • SDN
    • Even though the term isn’t in the name, SDN is yet another example of virtualization.
    • The control plane is virtualized to outsource its role to the network controller. In NFV, in contrast, the entire network function is virtualized.
    • NFV and SDN are separate, complementary ways to manage networks with software. You can use the two approaches separately, but an NFV architecture usually also takes advantage of SDN.
  • Account Security
    • When you (or your organization) sign up with a cloud provider, you get at least one extremely powerful account.
    • Someone who compromises that account could use it to steal your organization’s data, delete all of its infrastructure, or rack up a huge bill mining cryptocurrency. Just like with any other system, the principle of least privilege applies.
    • Cloud providers have authorization systems that enable you to set up separate credentials with explicit, limited permissions for the humans, servers, apps, tools, and services that need to manage your cloud account’s resources.
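The least-privilege idea can be sketched as credentials that carry an explicit allow-list of actions. The action names here are hypothetical, not any provider's real policy language:

```python
# Least-privilege sketch: each credential carries an explicit allow-list,
# and anything not granted is denied by default.
def is_allowed(credential, action):
    return action in credential["allowed_actions"]

# A backup service gets storage permissions and nothing else (hypothetical names).
backup_service = {"name": "backup-svc",
                  "allowed_actions": {"storage:read", "storage:write"}}

assert is_allowed(backup_service, "storage:read")
assert not is_allowed(backup_service, "vm:delete")   # denied by default
```

Real cloud policy languages add conditions, resource scoping, and deny rules, but the default-deny allow-list is the core of the model.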
  • Privacy
    • The cloud provider (or someone working for it) may have access to your data. These providers have some rules and practices in place to limit the likelihood that someone on their end goes browsing through your data, but those assurances may not be enough if your organization has sensitive data.
    • In this case, your organization might choose to use the cloud provider but encrypt all the data it stores there, or use a private cloud model (entirely, or just for the sensitive resources).
  • Multitenancy
    • Multitenancy is the ability to support multiple customers on the same infrastructure at the same time.
    • And then there’s the question of malicious neighbors (or perhaps just nice neighbors who have been hacked). Hypervisors do a good job of isolating one VM from another, but this isn’t perfect.
    • If your data is subject to regulations, those regulations may require you to pay a little more for dedicated instances.
  • Intrusion
    • Just like an internal network, outright intrusion is a risk in the cloud. Even if everyone in your organization is trained about how to keep the LAN secure, integrating cloud resources with your LAN creates new risks.
    • This is especially true if your organization focuses on securing the network perimeter and defaults to trusting every device inside the network.
    • Someone who compromises the least-secure device on either end may be able to use it as a jumping-off point to attack devices on the other end.
  • Logging
    • All kinds of devices and servers produce many logs. These logs can be great when you’re debugging or looking into a security incident, but they may also contain sensitive information.
    • Your organization may, for example, need to set up policies and invest some effort to make sure the logs you need are collected from the devices and services that produce them and retained without violating customer privacy.
  • Desktop as a Service
    • Desktop as a service (DaaS) enables you to move user workstations into the cloud and manage them as flexibly as other cloud infrastructure.
    • Desktop virtualization replaces direct access to a system’s local desktop environment with a client that can access an OS running in a VM.
    • You could technically run that VM on the same device, but benefits like flexible management come from centralizing the desktop VMs on a smaller number of servers—a pattern called virtual desktop infrastructure (VDI).
  • Desktop as a Service
    • This server/client VDI pattern is roughly what cloud providers bundle up and sell as DaaS.
    • For example, DaaS makes it possible to onboard new employees even when there’s no space on the internal servers. The ability to host virtual desktops closer to users all around the world, even when you don’t have an office nearby, can also help them have a smooth experience because of the reduction in lag.
  • Virtual Private Network
    • The most convenient way to connect a network to the public cloud is through a VPN.
    • A VPN creates an encrypted tunnel between two networks over another, less-secure network. A site-to-site VPN can establish a permanent tunnel (often using IPsec) between a local network and a virtual network in the cloud.
    • VPN tunnels are relatively simple to set up because they use off-the-shelf technology like IPsec. This makes them easy to integrate with existing site-to-site WAN infrastructure or even an SD-WAN service.