The adversary is trying to maintain their foothold via machine learning artifacts or software.
Persistence consists of techniques that adversaries use to keep access to systems across restarts, changed credentials, and other interruptions that could cut off their access. Techniques used for persistence often involve leaving behind modified ML artifacts such as poisoned training data or backdoored ML models.
Poison Training Data
Adversaries may attempt to poison datasets used by a ML model by modifying the underlying data or its labels. This allows the adversary to embed vulnerabilities in ML models trained on the data that may not be easily detectable. Data poisoning attacks may or may not require modifying the labels. The embedded vulnerability is activated at a later time by data samples with an Insert Backdoor Trigger
Poisoned data can be introduced via ML Supply Chain Compromise or the data may be poisoned after the adversary gains Initial Access to the system.
Backdoor ML Model
Adversaries may introduce a backdoor into a ML model. A backdoored model operates performs as expected under typical conditions, but will produce the adversary's desired output when a trigger is introduced to the input data. A backdoored model provides the adversary with a persistent artifact on the victim system. The embedded vulnerability is typically activated at a later time by data samples with an Insert Backdoor Trigger
Backdoor ML Model: Poison ML Model
Adversaries may introduce a backdoor by training the model poisoned data, or by interfering with its training process. The model learns to associate a adversary defined trigger with the adversary's desired output.
Backdoor ML Model: Inject Payload
Adversaries may introduce a backdoor into a model by injecting a payload into the model file. The payload detects the presence of the trigger and bypasses the model, instead producing the adversary's desired output.
Direct and Indirect LLM Prompt Injection may be used to keep persistence