Modeling & Design

  • Secure Design Patterns:
    • When you are designing a system, keep certain security principles and methodologies in mind. Not every principle applies to every system, but it is important to consider each one and make sure that those that do apply actually hold true.
  • Zero trust
    • A common approach to system design, and security compliance, is “trust, but verify,” which assumes the best outcome for an operation (such as a device joining a network, or a client calling an API) and then performs verification of the trust relationship secondarily. In a zero trust environment, the system ignores (or never establishes) any prior trust relationship and instead verifies everything before deciding to establish a trust relationship (which may then be temporary).
  • Zero trust
    • Also known as complete mediation, this concept looks amazingly simple on paper: ensure that the rights to access an operation are checked every time an object is accessed, and that those rights are checked before the access occurs. In other words, you must verify that an actor has the proper rights to access an object every time that access is requested.
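    • As a rough illustration, a minimal Python sketch of complete mediation follows; all names are hypothetical. The point is that the rights check runs unconditionally on every access, and the result of an earlier check is never cached or reused.

```python
# Hypothetical sketch: rights are verified on EVERY access, never cached.
class DocumentStore:
    def __init__(self, acl):
        # acl maps (actor, action) -> bool; where it comes from is assumed
        self._acl = acl
        self._docs = {}

    def _check(self, actor, action):
        # Complete mediation: called unconditionally on each operation.
        if not self._acl.get((actor, action), False):
            raise PermissionError(f"{actor} may not {action}")

    def read(self, actor, doc_id):
        self._check(actor, "read")    # verified again on every single call
        return self._docs.get(doc_id)

    def write(self, actor, doc_id, body):
        self._check(actor, "write")   # no prior trust relationship assumed
        self._docs[doc_id] = body
```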
  • Design by contract
    • Design by contract is related to zero trust, and assumes that whenever a client calls a server, the input coming from that client will be of a certain fixed format and will not deviate from that contract.
    • It is similar to a lock-and-key paradigm: your lock accepts only the correct key and trusts nothing else. Google has significantly reduced the number of cross-site scripting (XSS) flaws in its applications by using a library of API calls that are inherently safe by design. Design by contract addresses zero trust by ensuring that every interaction follows a fixed protocol.
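    • A minimal sketch of a contract enforced at an API boundary, assuming a hypothetical transfer endpoint (field names, types, and ranges are illustrative): any input that deviates from the fixed format is rejected outright.

```python
# Hypothetical sketch: the "lock" accepts only inputs matching the contract.
from dataclasses import dataclass

@dataclass(frozen=True)
class TransferRequest:
    account_id: str
    amount_cents: int

def parse_transfer(raw: dict) -> TransferRequest:
    # The contract: exactly these fields, these types, these ranges.
    if set(raw) != {"account_id", "amount_cents"}:
        raise ValueError("unexpected or missing fields")
    if not isinstance(raw["account_id"], str) or not raw["account_id"].isalnum():
        raise ValueError("account_id must be alphanumeric")
    amount = raw["amount_cents"]
    if isinstance(amount, bool) or not isinstance(amount, int):
        raise ValueError("amount_cents must be an integer")
    if not 0 < amount <= 1_000_000:
        raise ValueError("amount_cents out of allowed range")
    return TransferRequest(raw["account_id"], amount)
```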
  • Least privilege
    • This principle means that an operation should run using only the most restrictive privilege level that still enables the operation to succeed. In other words, in all layers and in all mechanisms, make sure that your design restricts the operator to the minimum level of access required to accomplish an individual operation, and nothing more.
  • Least privilege
    • If least privilege is not followed, a vulnerability in an application might offer full access to the underlying operating system, and with it all the consequences of a privileged user having unfettered access to your system and your assets. This principle applies to every system that maintains an authorization context (e.g., an operating system, an application, or a database).
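    • A minimal POSIX-flavored sketch of least privilege (the service account name is an assumption): do the privileged setup first, then drop to an unprivileged user before handling any untrusted input.

```python
# Hypothetical sketch (POSIX): privileged setup, then privilege drop.
import os
import pwd
import socket

def start_server(unprivileged_user: str = "svc-web") -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 443))   # needs root: privileged port
    sock.listen()

    pw = pwd.getpwnam(unprivileged_user)
    os.setgid(pw.pw_gid)          # drop group first, then user;
    os.setuid(pw.pw_uid)          # after this, root cannot be regained
    return sock                   # all request handling runs unprivileged
```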
  • Defense in depth
    • Defense in depth uses a multifaceted and layered approach to defend a system and its assets. When thinking about defense of your system, think about the things you want to protect—assets—and how an attacker might try to access them. Consider what controls you might put in place to limit or prevent access by an adversary (but allow access by a properly authorized actor).
    • You might consider parallel or overlapping layers of controls to slow down the attacker; alternatively you might consider implementing features that confuse or actively deter an adversary.
  • Defense In Depth:
    • Examples of defense in depth applied to computer systems include the following:
    • Defending a specific workstation with locks, guards, cameras, and air-gapping
    • Introducing a bastion host (or firewall) between the system and the public internet, then an endpoint agent in the system itself
    • Using multifactor authentication to supplement a password system, with a time delay between unsuccessful attempts that grows exponentially (see the sketch after this list)
    • Deploying a honeypot and fake database layer with intentionally priority-limited authentication validation functions
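    • A minimal sketch of the exponential-delay layer referenced above; the `verify` callback and the in-memory failure counter are assumptions for illustration.

```python
# Hypothetical sketch: an exponentially growing delay between failed
# authentication attempts - one more "bump in the road" for an attacker.
import time

_failures: dict[str, int] = {}

def check_password(user: str, supplied: str, verify) -> bool:
    delay = 2 ** _failures.get(user, 0) - 1   # 0s, 1s, 3s, 7s, ...
    time.sleep(min(delay, 300))               # cap it so the system stays usable
    if verify(user, supplied):                # `verify` is assumed to exist
        _failures.pop(user, None)
        return True
    _failures[user] = _failures.get(user, 0) + 1
    return False
```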
  • Defense In Depth:
    • Any additional factor that acts as a “bump in the road” and makes an attack costlier in terms of complexity, money, and/or time is a successful layer in your defense in depth. This way of evaluating defense-in-depth measures is related to risk management—defense in depth does not always mean defense at all costs.
    • A balancing act occurs between deciding how much to spend to secure assets versus the perceived value of those assets, which falls within the scope of risk management.
  • Keeping things simple
    • Keeping things simple is about avoiding overengineering your system. With complexity comes an increased potential for instability, maintenance challenges, operational difficulty, and ineffectual security controls.
  • Keeping things simple
    • Care must be taken to avoid oversimplification as well (as in dropping or overlooking important details). This often happens in input validation: we assume (correctly or not) that an upstream data generator will always supply valid and safe data, and then (incorrectly) skip our own input validation in an effort to simplify things. At the end of the day, a clean, simple design will often provide security advantages over an overengineered one, and should be given preference.
  • No secret sauce
    • Do not rely on obscurity as a means of security. Your system design should be resilient to attack even if every single detail of its implementation is known and published. This doesn’t mean you need to publish it, and the data on which the implementation operates must still remain protected; it means you should assume that every detail is known, and not rely on any of it being kept secret as a way to protect your assets. If you intend to protect an asset, use the correct control, such as encryption or hashing; do not hope an actor will fail to identify or discover your secrets!
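    • A minimal sketch of using a real control rather than obscurity: a standard, public key-derivation function (PBKDF2 from Python’s standard library). Every detail of this scheme can be published; only the password itself must stay secret. The iteration count is an illustrative choice.

```python
# Hypothetical sketch: a public, well-studied algorithm, no secret sauce.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare
```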
  • Separation of privilege
    • Also referred to as separation of duties, this principle means segregating access to functionality or data within your system so one actor does not hold all rights. Related concepts include maker/checker, where one user (or process) may request an operation to occur and set the parameters, but another user or process is required to authorize the transaction to proceed.
    • This means a single entity cannot perform malicious activities unimpeded or without the opportunity for oversight, which raises the bar for nefarious actions to occur.
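    • A minimal maker/checker sketch (class and method names are hypothetical): the actor who requests a transaction can never be the actor who approves it, and execution requires that independent approval.

```python
# Hypothetical sketch: separation of privilege via maker/checker.
class PendingTransaction:
    def __init__(self, maker: str, params: dict):
        self.maker = maker
        self.params = params
        self.approved_by = None

    def approve(self, checker: str) -> None:
        if checker == self.maker:
            raise PermissionError("maker cannot approve their own request")
        self.approved_by = checker

    def execute(self) -> None:
        if self.approved_by is None:
            raise PermissionError("transaction requires a second approver")
        ...  # perform the operation only after independent approval
```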
  • Consider the human factor
    • Human users have been referred to as the weakest link in any system, so the concept of psychological acceptability must be a basic design constraint. Users who are frustrated by strong security measures will inevitably try to find ways around them.
    • When developing a secure system, it is crucial to decide just how much security will be acceptable to the user. There’s a reason we have two-factor authentication and not sixteen-factor authentication.
  • Consider the human factor
    • Put too many hurdles between a user and the system, and one of these situations will occur:
    • The user stops using the system.
    • The user finds workarounds to bypass the security measures.
    • The powers that be stop supporting the security decision because it impairs productivity.
  • Effective logging
    • Security is not only about preventing bad things from happening, but also about being aware that something happened and, to the extent possible, what happened. That awareness comes from the ability to log events effectively. But what constitutes effective logging? From a security point of view, a security analyst needs to be able to answer three questions:
    • Who performed a specific action to cause an event to be recorded?
    • When was the action performed or event recorded?
    • What functionality or data was accessed by the process or user?
  • Nonrepudiation:
    • Nonrepudiation, which is closely related to integrity, means having a set of transactions indicating who did what, with the record of each transaction maintaining integrity as a property. With this concept, it is impossible for an actor to claim they did not perform a specific action.
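    • A minimal sketch of a log record that answers who, when, and what, with a chained HMAC for tamper evidence. Key handling and field names are assumptions, and full nonrepudiation would need more (e.g., digital signatures), but the integrity chain is the core idea.

```python
# Hypothetical sketch: who/when/what records, each MAC chained to the last
# so that deleting or altering a record breaks the chain.
import hashlib
import hmac
import json
import time

LOG_KEY = b"assumed-to-live-in-a-key-store"   # assumption: managed elsewhere
_prev = b"\x00" * 32

def log_event(actor: str, action: str, target: str) -> dict:
    global _prev
    record = {"who": actor, "when": time.time(), "what": f"{action} {target}"}
    payload = _prev + json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(LOG_KEY, payload, hashlib.sha256).hexdigest()
    _prev = bytes.fromhex(record["mac"])
    return record   # hand off to the real log sink here
```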
  • Effective Logging:
    As important as it is to know what to log and how to protect it, knowing what not to log is also crucial (a redaction sketch follows this list). In particular:
    • Personally identifiable information (PII) should never be logged in plain text, in order to protect the privacy of user data.
    • Sensitive content that is part of API or function calls should never be logged.
    • Clear-text versions of encrypted content likewise should not be logged.
    • Cryptographic secrets, such as a password to a system or a key used to decrypt data, should not be logged.
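    • A minimal sketch of scrubbing sensitive fields before they reach any log handler, using Python’s standard logging filters; the field-name pattern is an assumption about how secrets might appear in messages.

```python
# Hypothetical sketch: redact sensitive key=value pairs in log messages.
import logging
import re

SENSITIVE = re.compile(r"(password|api[_-]?key|ssn|token)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True   # keep the record, but scrubbed

logger = logging.getLogger("app")
logger.addFilter(RedactingFilter())
logger.warning("login failed for bob, password=hunter2")  # logged redacted
```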
  • Effective Logging:
    • Using common sense is important here, but note that keeping sensitive logging out of the code is an ongoing battle against the needs of development (mostly debugging). Make it clear to development teams that switches in code controlling whether sensitive content gets logged for debugging purposes are unacceptable: deployable, production-ready code should not contain logging capabilities for sensitive information.
  • Fail secure
    • When a system encounters an error condition, this principle means not revealing too much information to a potential adversary (such as in logs or user error messages) and not simply granting access incorrectly, such as when the failure is in the authentication subsystem.
  • Fail Secure:
    • But it is important to understand that there is a significant difference between fail secure and fail safe. Failing while maintaining safety may contradict the condition of failing securely, and will need to be reconciled in the system design. Which one is appropriate in a given situation, of course, depends on the particulars of the situation. At the end of the day, failing secure means that if a component or logic in the system falters, the result is still a secure one.
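    • A minimal fail-secure sketch (the `backend` object is an assumption): any failure in the authorization path results in denial, with detail going only to internal logs, never to the user.

```python
# Hypothetical sketch: an error in the auth path NEVER grants access.
import logging

log = logging.getLogger("auth")

def is_authorized(user: str, resource: str, backend) -> bool:
    try:
        return backend.check(user, resource)   # `backend` assumed provided
    except Exception as exc:
        log.error("auth backend failure for %s on %s: %r", user, resource, exc)
        return False   # fail secure: deny on any fault

# The caller shows the user only a generic "Access denied." message.
```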
  • Build in, not bolt on:
    • Security, privacy, and safety should be fundamental properties of the system, and any security features should be included in the system from the beginning. Security, like privacy or safety, should not be an afterthought, nor rely solely or primarily on external system components being present. A good example of this pattern is the implementation of secure communications: the system must support it natively, i.e., it should be designed to support Transport Layer Security (TLS) or a similar method for preserving the confidentiality of data in transit.
  • Build in, not bolt on
    • Relying on the user to install specialized hardware systems to enable end-to-end communications security means that if the user does not do so, the communications will be unprotected and potentially accessible to malicious actors. Do not assume that users will take action on your behalf when it comes to the security of your system.
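    • A minimal sketch of building TLS in natively with Python’s standard `ssl` module, rather than assuming an external proxy or user-installed hardware will protect the channel; certificate paths and the port number are assumptions.

```python
# Hypothetical sketch: the service terminates TLS itself.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # assumed paths
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse weaker protocols

with socket.create_server(("0.0.0.0", 8443)) as raw:
    with ctx.wrap_socket(raw, server_side=True) as tls:
        conn, addr = tls.accept()   # traffic on `conn` is encrypted in transit
```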
  • Modeling Systems:
    • System modeling (creating abstractions or representations of a system) is an important first step in the threat modeling process. The information you gather from the system model provides the input for analysis during the threat modeling activity.
    • We use model or modeling to mean an abstraction or representation of a system, its components, and its interactions.
  • Why Model Systems:
    • For security purposes, we model software and hardware systems, in particular, because it enables us to subject the systems to theoretical stress, understand how that stress will impact the systems before they are implemented, and see the systems holistically so we can focus on vulnerability details as needed.
  • System Modeling Types:
    • Systems can be complex, so those who want their system analysis to be both practical and effective need to reduce the complexity and the amount of data under consideration while maintaining the right amount of information.
    • This is where system modeling, or an abstraction of a system describing its salient parts and attributes, comes to the rescue. Having a good abstraction of the system you want to analyze will give you enough of the right information to make informed security and design decisions.
  • System Model Types:
    Data flow diagrams:
    • Data flow diagrams (DFDs) describe the flow of data among components in a system, along with the properties of each component and flow. DFDs are the most commonly used form of system model in threat modeling and are supported natively by many drawing packages; DFD shapes are also easy to draw by hand.
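    • DFDs also lend themselves to being captured as plain data, which a script can then analyze or render. A minimal sketch, with all element and flow names illustrative:

```python
# Hypothetical sketch: a tiny DFD as plain data structures.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str                      # "actor" | "process" | "datastore"
    annotations: dict = field(default_factory=dict)

@dataclass
class DataFlow:
    source: Element
    target: Element
    label: str
    protocol: str = "TLS"          # a property of the flow itself

user = Element("User", "actor")
web = Element("Web Server", "process")
db = Element("Database", "datastore")

flows = [
    DataFlow(user, web, "login request"),
    DataFlow(web, db, "credential lookup", protocol="TCP/5432"),
]
```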
  • System Model Types:
    Sequence diagrams
    • These are activity diagrams in Unified Modeling Language (UML) that describe the interactions of system components in an ordered fashion.
    • Sequence diagrams can help identify threats against a system because they allow a designer to understand the state of the system over time. This allows you to see how the system’s properties, and any assumptions or expectations about them, change over the course of its operation.
  • System Model Types:
    Process flow diagrams
    • Process flow diagrams (PFDs) highlight the operational flow through actions among components in a system.
    Attack trees
    • Attack trees depict the steps along a path that an attacker might try as part of reaching their goal to perform actions with nefarious intent.
  • System Model Types:
    Fishbone diagrams
    • Also known as cause-and-effect or Ishikawa diagrams, these show the relationships between an outcome and the root cause(s) that enabled such an effect to occur.
  • Use Modeling Types Together or Separately:
    • For example, use DFDs to describe relationships between objects, and use sequence diagrams to describe ordering of operations.
  • Data flow diagrams
    • Often result in multiple drawings, each of which indicates a layer or level of abstraction. The top layer is sometimes referred to as the context layer, layer 0, or simply L0; it contains the system from a high-level view, together with its interactions with external entities such as remote systems or users. Subsequent layers, referred to as L1, L2, and so on, drill down into more detail on individual system components and interactions, until the intended level of detail has been reached or no additional value can be gained from further decomposition of system elements.
  • Data Flow Diagrams
    • When modeling a system to perform security analysis, experts identified DFDs as a useful way to visually describe a system. DFDs were developed with a symbology that expresses the complexity of systems.
    • While there is no formal standard that defines the shapes used when modeling the data flow of a system, many drawing packages use a convention to associate shapes and their meaning and use.
  • Data Flow Diagrams
    • When constructing DFDs, we find it useful to highlight particular architectural elements alongside the data flows. This additional information helps when making accurate decisions while analyzing the model for security concerns, or when using the model to educate people who are new to the project.
  • DFD Components:
    • An element is a standard shape that represents a process or operating unit within the system under consideration. You should always label your element, so it can be referred to easily in the future. Elements are the source and/or target for data flows to and from other units in the model. To identify human actors, use the actor symbol.
  • DFD Components:
    The following is a list of potential information that you might want to capture in annotations for objects in the model:
    • Name of the unit. If it is an executable, what is it called when built or installed on a disk?
    • Who owns it within your organization (the development team, usually)?
    • If this is a process, at what privilege level is it running (e.g., always root, or setuid’d, or some nonprivileged user)?
  • DFD Components:
    The following is a list of potential information that you might want to capture in annotations for objects in the model (a structured sketch follows this list):
    • If this is a binary object, is it expected to be digitally signed, and if so, by what method and/or which certificate or key?
    • What programming language(s) are used for the element?
    • For managed or interpreted code, what runtime or bytecode processor is in use?
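    • The annotation list above, captured as structured metadata attached to a model element; all field names and the example values are assumptions.

```python
# Hypothetical sketch: element annotations as a typed record.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ElementAnnotations:
    unit_name: str                       # name on disk when built/installed
    owner_team: str                      # who owns it in the organization
    privilege_level: str                 # e.g., "root", "setuid", "nonprivileged"
    signed: bool = False                 # is the binary digitally signed?
    signing_key: Optional[str] = None    # certificate/key used, if signed
    languages: tuple = ()                # programming language(s) used
    runtime: Optional[str] = None        # runtime/bytecode processor, if managed

web_server = ElementAnnotations(
    unit_name="httpd",
    owner_team="platform",
    privilege_level="nonprivileged",
    languages=("C", "Lua"),
)
```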
  • Coding Language:
    • People often overlook the influence of their choice of programming language. For example, C and C++ are more prone to memory-based errors than an interpreted language, and a script will more easily lend itself to reverse engineering than a (possibly obfuscated) binary. These are important characteristics that you should understand during the design of the system, especially if you identify them while you are threat modeling, to avoid common and serious security concerns.
  • Coding Language:
    • If you don’t know this information early enough in the system development process to include it in the threat model (and, as you may know by now, threat modeling should be done early and often), this is a perfect example of why threat models should be kept up to date as the system evolves.