Reconnaissance

Cards (14)

  • Search for Victim's Publicly Available Research Materials
    • Adversaries may search publicly available research to learn how and where machine learning is used within a victim organization. The adversary can use this information to identify targets for attack, or to tailor an existing attack to make it more effective.
    • Research materials may exist as academic papers published in Journals and Conference Proceedings, stored in Pre-Print Repositories, or written up in Technical Blogs.
  • Search for Victim's Publicly Available Research Materials
    • Organizations often use open source model architectures trained on additional proprietary data in production. Knowledge of this underlying architecture allows the adversary to craft more realistic proxy models (Create Proxy ML Model). An adversary can search these resources for publications by authors employed at the victim organization.
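    • Illustrative sketch (Python): one way an adversary might mirror a published architecture as the starting point for a proxy model. The ResNet-50 backbone and 10-class head below are assumptions for the example, with PyTorch/torchvision used only as a common stand-in.
      # Hypothetical: the victim's publications indicate a ResNet-50 backbone
      # fine-tuned for a 10-class task; the adversary mirrors that structure
      # locally as a proxy model (Create Proxy ML Model).
      import torch
      import torchvision.models as models

      proxy = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
      proxy.fc = torch.nn.Linear(proxy.fc.in_features, 10)  # assumed class count

      # Fine-tuning this proxy on substitute data the adversary controls yields
      # a stand-in model for developing and transferring attacks offline.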
  • Search for Victim's Publicly Available Research Materials: Journals and Conference Proceedings
    • Many of the publications accepted at premier machine learning conferences and journals come from commercial labs. Some journals and conferences are open access; others may require payment or a membership. These publications will often describe in detail all aspects of a particular approach for reproducibility. This information can be used by adversaries to implement the described approach.
  • Search for Victim's Publicly Available Research Materials: Pre-Print Repositories
    • Pre-Print repositories, such as arXiv, contain the latest academic research papers that haven't been peer reviewed. They may contain research notes or technical reports that aren't typically published in journals or conference proceedings. Pre-print repositories also serve as a central location to share papers that have been accepted to journals. Searching pre-print repositories provides adversaries with a relatively up-to-date view of what researchers in the victim organization are working on.
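    • Illustrative sketch (Python): querying the arXiv API for recent submissions that mention a hypothetical victim organization. The organization name and result count are placeholders; only the standard export.arxiv.org query endpoint is used.
      # Illustrative only: search arXiv for recent papers mentioning a
      # hypothetical victim organization.
      import urllib.parse, urllib.request

      query = urllib.parse.urlencode({
          "search_query": 'all:"Example Corp AI Lab"',  # placeholder affiliation
          "sortBy": "submittedDate",
          "sortOrder": "descending",
          "max_results": 20,
      })
      with urllib.request.urlopen("http://export.arxiv.org/api/query?" + query) as resp:
          feed = resp.read().decode()  # Atom XML feed of matching papers

      # Titles and abstracts in the feed hint at the models and tasks the
      # organization's researchers are working on.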
  • Search for Victim's Publicly Available Research Materials: Technical Blogs
    • Research labs at academic institutions and company R&D divisions often have blogs that highlight their use of machine learning and its application to the organization's unique problems. Individual researchers also frequently document their work in blog posts. An adversary may search for posts made by the target victim organization or its employees.
  • Search for Victim's Publicly Available Research Materials: Technical Blogs
    • In comparison to Journals and Conference Proceedings and Pre-Print Repositories, this material will often cover the more practical aspects of the machine learning system. This could include the underlying technologies and frameworks used, and possibly some information about API access and use cases. This helps the adversary better understand how the organization uses machine learning internally, and the details of their approach that could aid in tailoring an attack.
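    • Illustrative sketch (Python): mining a hypothetical engineering blog post for the frameworks and serving stack it mentions. The URL and keyword list are placeholders.
      # Illustrative only: pull a hypothetical blog post and flag mentions of
      # ML frameworks, serving stacks, and API details.
      import re, urllib.request

      url = "https://engineering.example.com/how-we-built-our-classifier"  # placeholder
      html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")

      keywords = ["TensorFlow", "PyTorch", "ONNX", "SageMaker", "Triton", "REST API", "gRPC"]
      mentions = {kw: len(re.findall(re.escape(kw), html, re.IGNORECASE)) for kw in keywords}
      print({kw: n for kw, n in mentions.items() if n})  # stack details the post reveals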
  • Search for Publicly Available Adversarial Vulnerability Analysis
    • Much like the Search for Victim's Publicly Available Research Materials, there is often ample research available on the vulnerabilities of common models. Once a target has been identified, an adversary will likely try to identify any pre-existing work that has been done for this class of models.
  • Search for Publicly Available Adversarial Vulnerability Analysis
    • This will include not only reading academic papers that may identify the particulars of a successful attack, but also identifying pre-existing implementations of those attacks. The adversary may obtain Adversarial ML Attack Implementations or develop their own Adversarial ML Attacks if necessary.
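    • Illustrative sketch (Python): reusing a pre-existing attack implementation rather than writing one. The open-source Adversarial Robustness Toolbox (ART) packages common evasion attacks such as FGSM; the wrapped model, input shape, and class count below are stand-ins.
      # Illustrative only: wrap a stand-in proxy model with ART and run a
      # published evasion attack (FGSM) instead of implementing it from scratch.
      import numpy as np
      import torch
      from art.estimators.classification import PyTorchClassifier
      from art.attacks.evasion import FastGradientMethod

      # Stand-in for the adversary's proxy model (normally a trained network).
      model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
      classifier = PyTorchClassifier(
          model=model,
          loss=torch.nn.CrossEntropyLoss(),
          input_shape=(3, 32, 32),  # assumed input shape
          nb_classes=10,            # assumed class count
          clip_values=(0.0, 1.0),
      )
      attack = FastGradientMethod(estimator=classifier, eps=0.05)
      x_adv = attack.generate(x=np.zeros((1, 3, 32, 32), dtype=np.float32))  # placeholder input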
  • Search Victim-Owned Websites
    • Adversaries may search websites owned by the victim for information that can be used during targeting. Victim-owned websites may contain technical details about their ML-enabled products or services. Victim-owned websites may contain a variety of details, including names of departments/divisions, physical locations, and data about key employees such as names, roles, and contact info. These sites may also have details highlighting business operations and relationships.
  • Search Victim-Owned Websites
    • Adversaries may search victim-owned websites to gather actionable information. This information may help adversaries tailor their attacks (e.g. Adversarial ML Attacks or Manual Modification). Information from these sources may reveal opportunities for other forms of reconnaissance (e.g. Search for Victim's Publicly Available Research Materials or Search for Publicly Available Adversarial Vulnerability Analysis).
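    • Illustrative sketch (Python): checking a few common pages on a hypothetical victim-owned site for wording that points to ML-enabled products, teams, or hiring. The domain, paths, and terms are placeholders.
      # Illustrative only: scan common pages of a hypothetical victim-owned site
      # for wording that suggests where machine learning is used.
      import re, urllib.request

      base = "https://www.example.com"             # placeholder victim domain
      paths = ["/products", "/careers", "/about"]  # assumed common pages
      terms = re.compile(
          r"machine learning|deep learning|data scientist|recommendation engine|inference",
          re.IGNORECASE,
      )

      for path in paths:
          try:
              page = urllib.request.urlopen(base + path).read().decode("utf-8", errors="ignore")
          except Exception:
              continue  # page may not exist
          found = sorted({m.lower() for m in terms.findall(page)})
          if found:
              print(path, found)  # where ML-related wording appears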
  • Search Application Repositories
    • Adversaries may search open application repositories during targeting. Examples of these include Google Play, the iOS App Store, the macOS App Store, and the Microsoft Store.
    • Adversaries may craft search queries seeking applications that contain ML-enabled components. Frequently, the next step is to Acquire Public ML Artifacts.
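    • Illustrative sketch (Python): querying Apple's public iTunes Search API (one of several app-store search surfaces) for applications whose listings suggest ML-backed features. The search term is a placeholder.
      # Illustrative only: search an app store for apps whose listings suggest
      # ML-enabled features, as a lead-in to Acquire Public ML Artifacts.
      import json, urllib.parse, urllib.request

      params = urllib.parse.urlencode({
          "term": "photo object recognition",  # placeholder search term
          "entity": "software",
          "limit": 25,
      })
      with urllib.request.urlopen("https://itunes.apple.com/search?" + params) as resp:
          results = json.load(resp)["results"]

      for app in results:
          print(app.get("trackName"), "-", app.get("sellerName"))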
  • Active Scanning
    • An adversary may probe or scan the victim system to gather information for targeting. This is distinct from other reconnaissance techniques that do not involve direct interaction with the victim system.
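    • Illustrative sketch (Python): probing a suspected model-serving endpoint. TensorFlow Serving, for example, exposes model status over REST at /v1/models/<name> on port 8501 by default; the host and model name here are placeholders.
      # Illustrative only: probe a suspected TensorFlow Serving instance for
      # model status. Host, port, and model name are placeholders.
      import json, urllib.error, urllib.request

      host = "ml.example.com"  # placeholder target
      model = "classifier"     # guessed model name
      url = f"http://{host}:8501/v1/models/{model}"  # TF Serving's default REST status path

      try:
          with urllib.request.urlopen(url, timeout=5) as resp:
              print(json.load(resp))  # version/state info confirms a live serving endpoint
      except urllib.error.URLError as err:
          print("no response or not a TF Serving endpoint:", err)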
  • Reconnaissance
    • The adversary is trying to gather information about the machine learning system they can use to plan future operations.
    • Reconnaissance consists of techniques that involve adversaries actively or passively gathering information that can be used to support targeting. Such information may include details of the victim organization's machine learning capabilities and research efforts.
  • Reconnaissance
    • This information can be leveraged by the adversary to aid in other phases of the adversary lifecycle, such as using gathered information to obtain relevant ML artifacts, targeting ML capabilities used by the victim, tailoring attacks to the particular models used by the victim, or to drive and lead further Reconnaissance efforts.