Model Access

  • ML Model Inference API Access
    • Adversaries may gain access to a model via legitimate access to its inference API. Inference API access can serve as a source of information for the adversary (Discover ML Model Ontology, Discover ML Model Family), a means of staging the attack (Verify Attack, Craft Adversarial Data), or a means of introducing data to the target system to cause an Impact (Evade ML Model, Erode ML Model Integrity). A minimal probing sketch follows this list.
  • ML-Enabled Product or Service
    • Adversaries may gain access to the underlying machine learning model indirectly, through a product or service that uses machine learning under the hood. This kind of indirect model access may reveal details of the ML model or of its inferences in logs or metadata.
  • Physical Environment Access
    • In addition to attacks that take place purely in the digital domain, adversaries may also exploit the physical environment. If the model interacts with data collected from the real world, the adversary can influence the model through access to the point of data collection. By modifying the data during collection, the adversary can carry out modified versions of attacks designed for digital access.
  • Full ML Model Access
    • Adversaries may gain full "white-box" access to a machine learning model, meaning the adversary has complete knowledge of the model architecture, its parameters, and its class ontology. They may exfiltrate the model to Craft Adversarial Data and Verify Attack in an offline setting where it is hard to detect their behavior. A minimal white-box crafting sketch follows this list.
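
A minimal sketch of the kind of probing that ML Model Inference API Access enables, written in Python. The endpoint URL, request schema, and response fields below are illustrative assumptions rather than any particular product's API; the point is that repeated queries can surface the model's label set, feeding Discover ML Model Ontology.

    # Probe a hypothetical inference API to learn the class labels it exposes.
    # INFERENCE_URL, the request schema, and the response shape are assumptions.
    import requests

    INFERENCE_URL = "https://example.com/v1/models/classifier:predict"  # hypothetical endpoint

    def query_model(inputs: list[list[float]]) -> dict:
        """Send a batch of feature vectors and return the raw JSON response."""
        resp = requests.post(INFERENCE_URL, json={"instances": inputs}, timeout=10)
        resp.raise_for_status()
        return resp.json()

    def discover_labels(probe_inputs: list[list[float]]) -> set[str]:
        """Collect every class label observed across a set of probe inputs."""
        labels: set[str] = set()
        for x in probe_inputs:
            result = query_model([x])
            # Assumed response shape: {"predictions": [{"label": "...", "score": 0.97}, ...]}
            for pred in result.get("predictions", []):
                labels.add(pred["label"])
        return labels

    if __name__ == "__main__":
        probes = [[0.0] * 16, [1.0] * 16]  # arbitrary probe vectors
        print("Observed label set:", discover_labels(probes))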
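
A minimal sketch of what Craft Adversarial Data can look like under Full ML Model Access, using the fast gradient sign method (FGSM) with PyTorch. The stand-in model, input shape, and epsilon value are assumptions for illustration; with white-box access the adversary would substitute the exfiltrated model.

    # White-box sketch: FGSM uses the model's own gradients to craft adversarial
    # inputs offline. The toy model, input shape, and epsilon are assumptions.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in for the exfiltrated model
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x: torch.Tensor, label: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
        """Perturb x in the direction that most increases the loss for its true label."""
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), label)
        loss.backward()
        # Step each input element by epsilon in the sign of its gradient, then clamp to a valid range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        x = torch.rand(1, 1, 28, 28)   # placeholder input
        y = torch.tensor([3])          # placeholder true label
        x_adv = fgsm(x, y)
        print("Prediction before:", model(x).argmax(dim=1).item())
        print("Prediction after: ", model(x_adv).argmax(dim=1).item())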