Threat modeling is the process of analyzing a system to look for weaknesses that stem from less-desirable design choices. The goal of the activity is to identify these weaknesses before they are baked into the system (as a result of implementation or deployment) so that you can take corrective action as early as possible.
The activity of threat modeling is a conceptual exercise that aims to help you understand which characteristics of a system’s design should be modified to reduce risk in the system to an acceptable level for its owners, users, and operators.
Performing Threat Modeling:
When performing threat modeling, you look at a system as a collection of its components and their interactions with the world outside the system (like other systems it interacts with) and the actors that may perform actions on these systems.
Then you try to imagine how these components and interactions may fail or be made to fail.
From this process, you’ll identify threats to the system, which will in turn lead to changes and modifications to the system. The result is a system that can resist the threats you imagined.
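To make this concrete, here is a minimal Python sketch (with entirely hypothetical component and field names) of treating a system as components plus interactions, and asking one simple failure question of each interaction: does this flow cross a trust boundary unprotected? Real threat modeling asks far richer questions than this single check.

```python
# A minimal sketch (all names hypothetical) of a system modeled as
# components plus interactions, with one crude failure question.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    trust_zone: str  # e.g., "internet", "dmz", "internal"

@dataclass
class Interaction:
    source: Component
    target: Component
    protocol: str
    encrypted: bool

web = Component("web_client", "internet")
api = Component("api_gateway", "dmz")
db = Component("database", "internal")

interactions = [
    Interaction(web, api, "https", encrypted=True),
    Interaction(api, db, "sql", encrypted=False),  # candidate weakness
]

# Flag any flow that crosses a trust boundary without encryption.
for flow in interactions:
    if flow.source.trust_zone != flow.target.trust_zone and not flow.encrypted:
        print(f"Finding: unencrypted {flow.protocol} flow "
              f"{flow.source.name} -> {flow.target.name} crosses a trust boundary")
```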
Threat Modeling:
Threat modeling is a cyclic activity. It starts with a clear objective, continues with analysis and actions, and then repeats.
Threat modeling is a logical, intellectual process that is most effective when you involve most, if not all, of your team. It will generate discussion and create clarity about your design and execution.
Threat Modeling:
The first rule of threat modeling might be the old maxim garbage in, garbage out (GIGO). If you make threat modeling part of your team’s toolbox and get everyone to participate in a positive manner, you will reap its many benefits. But if you enter into it half-heartedly, without a complete understanding of its strengths and shortcomings, or treat it as a compliance “check the box” item, you’ll see it only as a time sink.
Once you find a methodology that works for you and your team, and put in the effort needed to make it work, your overall security posture will improve substantially.
Why You Need Threat Modeling:
You need threat modeling because it will make your work easier and better in the long term. It will lead to cleaner architectures and well-defined trust boundaries.
It is also important to point out why you don’t need threat modeling: it is not going to solve all your security problems by itself, nor will it instantly transform your team into security experts. Most of all, you shouldn’t do it merely to satisfy compliance.
Threat Modeling in the System Development Life Cycle:
Threat modeling is an activity performed during the system development life cycle that is critical to the security of the system. If threat modeling is not performed in some fashion, security faults will likely be introduced through design choices; these faults may be easily exploited and will almost certainly be hard to fix later.
Threat Modeling In Design:
Threat modeling, by its nature, looks at a design and tries to identify security flaws. For example, if your analysis shows that a certain mode of access uses a hardcoded password, it gets identified as a finding to be addressed. If a finding goes unaddressed, you are probably dealing with an issue that will be exploited later in the life of the system. Such an issue is also known as a vulnerability, which has a probability of exploitation and an associated cost if exploited.
The key objective of threat modeling is to identify flaws so they become findings (issues that you can address) and not vulnerabilities (issues that can be exploited). You can then apply mitigations that reduce both the probability of exploitation and the cost of being exploited (that is, the damage, or impact).
Findings:
Once you identify a finding, you move to mitigate, or rectify, it. You do this by applying appropriate controls; for example, you might create a dynamic, user-defined password instead of a hardcoded one. Or you might run multiple tests against that password to ensure its strength. Or you might let the user decide on a password policy. Or you might change your approach altogether and remove the flaw entirely by eliminating password use and offering support for WebAuthn instead. In some cases, you might simply assume the risk.
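As a rough illustration of the first two mitigations, the hypothetical sketch below replaces a hardcoded password with a secret supplied via an environment variable (APP_ADMIN_PASSWORD is an assumed name) and checks it against one possible strength policy; this is a sketch of the idea, not a prescribed control.

```python
# Hypothetical sketch: replacing a hardcoded password with a
# deploy-time secret checked against a minimal strength policy.
import os

# Before (the flaw): PASSWORD = "admin123"  # hardcoded, a classic finding

def meets_policy(password: str) -> bool:
    """One possible policy: minimum length plus basic character diversity."""
    return (
        len(password) >= 12
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
    )

# After (one mitigation): the secret comes from the environment at
# deploy time instead of living in source code.
password = os.environ.get("APP_ADMIN_PASSWORD", "")
if not meets_policy(password):
    raise SystemExit("Refusing to start: password does not meet policy")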
Accepting Risk:
In those cases, you need to document the finding, identify and describe the rationale for not addressing it, and make that part of your threat model.
Iterative Threat Modeling:
You may not find all the flaws in your system the first time it is analyzed; for example, perhaps you didn’t have the appropriate resources or the correct stakeholders examining the system. But having an initial threat model is much better than having no threat model at all. And the next iteration, when the threat model is updated, will be better: it will identify other flaws and carry a higher level of assurance that flaws have not been missed.
Documentation:
A well-documented threat model is a great vehicle to provide new team members with this formal and proprietary knowledge. Many obscure data points, justifications, and general thought processes (e.g., “Why did you folks do it like this here?!”) are well suited for being captured as documentation in a threat model. Any decisions made to overcome constraints, and their resulting security impacts, are also good candidates for documentation.
Basic Concepts and Terminology:
A system contains assets—functionality its users depend upon, and data accepted, stored, manipulated, or transmitted by the system. The system’s functionality may contain defects, which are also known as weaknesses. If these weaknesses are exploitable, meaning they are vulnerable to external influence, they are known as vulnerabilities, and their exploitation may put the operations and data of the system at risk of exposure.
Basic Concepts and Terminology:
An actor (an individual or a process external to the system) may have malicious intent and may try to exploit a vulnerability, if the conditions exist to make that possible; some skilled attackers are capable of altering conditions to create opportunities to attempt exploitation. An actor creates a threat event in this case, and through this event threatens the system with a particular effect (such as stealing data or causing functionality to misbehave).
Basic Concepts and Terminology:
The combination of functionality and data creates value in the system, and an adversary causing a threat negates that value, which forms the basis for risk. Risk is offset by controls, which cover the functional capabilities of a system as well as the operational and organizational behaviors of the teams that design and build it. Risk is also modified by probabilities: the expectation that an attacker will wish to cause harm, and the likelihood that they will be successful should they attempt to do so.
Weakness
A weakness is an underlying defect that modifies behavior or functionality (resulting in incorrect behavior) or allows unverified or incorrect access to data.
Weaknesses in system design result from a failure to follow best practices, standards, or conventions, and they lead to some undesirable effect on the system.
Luckily for threat modelers (and development teams), a community initiative—Common Weakness Enumeration (CWE)—has created an open taxonomy of security weaknesses that can be referenced when investigating system design for concerns.
Exploitability
Exploitability is a measure of how easily an attacker can make use of a weakness to cause harm. Put another way, exploitability is the amount of exposure that the weakness has to external influence.
Vulnerability
When a weakness is exploitable (that is, its exploitability outside the local authorization context is nonzero), it is known as a vulnerability. Vulnerabilities provide a means for an adversary with malicious intent to cause some sort of damage to a system. Vulnerabilities that exist in a system but have not yet been discovered are known as zero-day vulnerabilities.
Vulnerability
Zero days are no more or less dangerous than other vulnerabilities of a similar nature, but they are special because they are likely to be unresolved, and therefore the potential for exploitation may be elevated. As with weaknesses, community efforts have created a taxonomy of vulnerabilities, encoded in the Common Vulnerabilities and Exposures (CVE) database.
Severity
Weaknesses lead to an impact on a system and its assets (functionality and/or data); the damage potential and “blast radius” from such an issue is described as the defect’s severity.
For those whose primary profession is or has been in any field of engineering, severity may be a familiar term. Vulnerabilities—exploitable weaknesses—are by definition at least as severe as the underlying defect, and more often the severity of a defect is increased because it is open to being exploited.
Impact
If a weakness or vulnerability is exploited, it will result in some sort of impact to the system, such as breaking functionality or exposing data. When rating the severity of an issue, you will want to assess the level of impact as a measure of potential loss of functionality and/or data as the result of successful exploitation.
Actor
When describing a system, an actor is any individual associated with the system, such as a user or an attacker. An actor with malicious intent, either internal or external to the organization, creating or using the system, is sometimes referred to as an adversary.
Threat
A threat is the result of a nonzero probability of an attacker taking advantage of a vulnerability to negatively impact the system in a particular way (commonly phrased in terms of “threat to...” or “threat of...”).
Threat event
When an adversary makes an attempt (successful or not) to exploit a vulnerability with an intended objective or outcome, this becomes known as a threat event.
Loss
Loss occurs when one (or more) impacts affect functionality and/or data as a result of an adversary causing a threat event:
The actor is able to subvert the confidentiality of a system’s data to reveal sensitive or private information.
The actor can modify the interface to functionality, change the behavior of functionality, or change the contents or provenance of data.
The actor can prevent authorized entities from accessing functionality or data, either temporarily or permanently. Loss is described in terms of an asset or an amount of value.
Risk:
Risk combines the value of the potentially exploited target with the likelihood an impact may be realized. Value is relative to the system or information owner, as well as to the attacker. You should use risk to inform priority of an issue, and to decide whether to address the issue. Severe vulnerabilities that are easy to exploit, and those that lead to significant damages through loss, should be given a high priority to mitigate.
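One common simplification treats risk as likelihood multiplied by impact; the sketch below, with invented findings and ratings, shows how such scores might drive mitigation priority.

```python
# A minimal sketch of risk-as-priority: score each finding by
# likelihood and impact (illustrative 1-5 ratings) and sort.
findings = [
    {"name": "hardcoded password", "likelihood": 5, "impact": 4},
    {"name": "verbose error pages", "likelihood": 3, "impact": 2},
    {"name": "unencrypted backup",  "likelihood": 2, "impact": 5},
]

for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Highest-risk findings first; these are the ones to mitigate first.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f"{f['risk']:>2}  {f['name']}")
```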
Both severity (the amount of damage that can be caused by the successful exploitation of a vulnerability) and risk (the combination of the likelihood of initiation of a threat event and the likelihood of success in generating a negative impact as a result of exploitation) can be determined formulaically. Many methods exist today for determining severity or risk, and some threat modeling methodologies use alternative risk-scoring methods (not described in this book). A sample of three popular methods in general use follows (one for measuring severity, two for risk).
CVSS (severity):
The Common Vulnerability Scoring System (CVSS) is now in version 3.1, and is a product of the Forum of Incident Response and Security Teams (FIRST).
CVSS is a method for establishing a value from 0.0 to 10.0 that allows you to identify the components of severity. The calculation is based on the likelihood of successful exploitation of a vulnerability and a measurement of the potential impact (or damage). Eight metrics, or values, are set in the calculator to derive a severity rating.
CVSS:
Likelihood of success is measured on specific metrics that are given a numeric rating. This results in a value known as the exploitability subscore. Impact is measured similarly (with different metrics) and is known as the impact subscore. Added together, the two subscores result in an overall base score.
Remember, CVSS measures severity, not risk. CVSS can tell you the likelihood that an attacker will succeed in exploiting the vulnerability of an impacted system, and the amount of damage they can do. But it cannot indicate when or whether an attacker will attempt to exploit the vulnerability. Nor can it tell you how much the impacted resource is worth or how expensive it will be to address the vulnerability. It is the likelihood of the initiation of an attack, the value of the system or functionality, and the cost to mitigate the vulnerability that drive the risk calculation.
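For the simple scope-unchanged case, the base-score arithmetic published in the FIRST CVSS v3.1 specification can be sketched as follows. (The spec’s rounding uses a float-safe formulation; math.ceil is close enough for illustration. The changed-scope branch, which adjusts the Privileges Required weights and the impact formula, is omitted.)

```python
# Sketch of CVSS v3.1 base-score arithmetic, scope-unchanged case only,
# using the metric weights from the public FIRST specification.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """Spec rounding: smallest one-decimal value >= x (approximated)."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    # Scope unchanged: subscores are added, capped at 10, rounded up.
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```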
DREAD (risk):
Damage: If an adversary conducted an attack, how much destruction could they cause?
Reproducibility: Is a potential attack easily reproduced (in method and effect)?
Exploitability: How easy is it to conduct a successful attack?
Affected users: What percentage of the user population might be impacted?
Discoverability: If the adversary does not already know of the potential for an attack, what is the likelihood they can discover it?
DREAD
DREAD is a process for documenting the characteristics of a potential attack against a system (via a vector, by an adversary) and arriving at a value that can be compared with the values for other attack scenarios and/or threat vectors. The risk score for any given attack scenario (a combination of a security vulnerability and an adversary) is calculated by considering the characteristics of exploitation of the vulnerability by the attacker and assigning a score in each dimension (i.e., D, R, E, A, D) for low-, medium-, and high-impact issues, respectively.
DREAD Scoring:
The total of the scores across the dimensions determines the overall risk value. For example, an arbitrary security issue in a particular system may have scores of [D = 3, R = 1, E = 1, A = 3, D = 2], for a total risk value of 10. To have meaning, this risk value should be compared with other risk values identified against the same system; it is far less useful to compare it with values from other systems!
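A DREAD total is simple arithmetic; this sketch reproduces the example above.

```python
# Minimal sketch of DREAD scoring: sum the five dimension scores.
def dread_total(d: int, r: int, e: int, a: int, disc: int) -> int:
    """Damage, Reproducibility, Exploitability, Affected users,
    Discoverability, each rated on the same scale."""
    return d + r + e + a + disc

# The example from the text: [D=3, R=1, E=1, A=3, D=2] -> 10.
print(dread_total(3, 1, 1, 3, 2))
```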
FAIR Method for risk quantification (risk):
The Factor Analysis of Information Risk (FAIR) method is gaining popularity because it offers the right level of granularity and specificity to enable more effective decision making.
DREAD is an example of a qualitative risk calculation. FAIR, by contrast, is an international standard for quantitative risk modeling and for understanding the impact to assets from threats, using measurements of value (hard and soft currency costs) and the probability of realization (occurrences, or threat events) of a threat by an actor.
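To give only a flavor of what quantitative modeling looks like, the toy Monte Carlo sketch below, with invented distributions and dollar figures, estimates annualized loss as threat-event frequency times per-event loss magnitude. It is nowhere near the full FAIR ontology.

```python
# Deliberately simplified Monte Carlo sketch in the spirit of
# quantitative risk modeling (not the full FAIR method).
import random
import statistics

random.seed(1)

def simulate_annual_loss(trials: int = 100_000) -> float:
    """Average simulated yearly loss under assumed distributions."""
    yearly = []
    for _ in range(trials):
        # Assumed inputs: 0-4 threat events per year (most likely 1),
        # and a per-event loss of $10k-$250k (most likely $40k).
        events = round(random.triangular(0, 4, 1))
        loss = sum(random.triangular(10_000, 250_000, 40_000)
                   for _ in range(events))
        yearly.append(loss)
    return statistics.mean(yearly)

print(f"Expected annualized loss: ${simulate_annual_loss():,.0f}")
```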
WARNING:
FAIR is thorough and accurate, but also complex, and it requires specialized knowledge to perform the calculations and simulations correctly. This is not something you want to do live in a threat modeling review session, nor something you want to foist on your security subject matter experts (SMEs), if you have them. Security experts have expertise in finding weaknesses and threats, not in modeling financial impact valuations. Consider hiring individuals with skills in computational methods and financial modeling, or finding a tool that does the hard math for you!
Core Properties: Privacy
While confidentiality refers to the controlled access to private information shared with others, privacy refers to the right of not having that information exposed to unauthorized third parties. Many times when people talk about confidentiality, they really expect privacy, but although the terms are often used interchangeably, they are not the same concept. You could argue that confidentiality is a prerequisite to privacy. For example, if a system cannot guarantee the confidentiality of the data it stores, that system can never provide privacy to its users.
Core Properties: Safety
Safety is “freedom from unacceptable risk of physical injury or of damage to the health of people, either directly, or indirectly as a result of damage to property or to the Environment.” Naturally, for something to meet safety requirements, it has to operate in a predictable manner. This means that it must at least maintain the security properties of integrity and availability.
Identification:
Actors in a system must be assigned a unique identifier meaningful to the system. Identifiers should also be meaningful to the individuals or processes that will consume the identity (e.g., the authentication subsystem). An actor is anything in a system (including human users, system accounts, and processes) that has influence over the system and its functions, or that wishes to gain access to the system’s data.
Identification:
To support many security objectives, an actor must be granted an identity before it can operate on the system. This identity must come with information that allows the system to positively identify the actor—or, in other words, that allows the actor to show proof of identity to the system. In some public systems, anonymous or unnamed actors are also identified; their specific identity is not important, but they are still represented in the system.
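A hypothetical sketch of this idea: every actor, including an anonymous visitor, receives a unique identifier meaningful to the system. The Actor class and field names below are invented for illustration.

```python
# Hypothetical sketch: every actor (human, service, or anonymous
# visitor) gets a unique identifier meaningful to the system.
import uuid
from dataclasses import dataclass, field

@dataclass
class Actor:
    kind: str  # e.g., "user", "service", or "anonymous"
    display_name: str = ""
    actor_id: str = field(default_factory=lambda: str(uuid.uuid4()))

alice = Actor(kind="user", display_name="alice")
backup_job = Actor(kind="service", display_name="nightly-backup")
visitor = Actor(kind="anonymous")  # identity unimportant, still represented

for actor in (alice, backup_job, visitor):
    print(actor.actor_id, actor.kind)
```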