Test and validate the AI system during deployment: Continuous evaluation
1. Tailor testing to the use case: different use cases may need differing amounts of detail, and some may require stronger security or privacy protections, depending on the purpose of the algorithm
2. Include rare or boundary cases the AI has not previously encountered, i.e., "edge" cases
3. Include "unseen" data (data not part of the training data set)
4. Include potentially malicious data in the test
5. Conduct repeatability assessments to ensure the AI produces the same (or a similar) outcome consistently
6. Conduct adversarial testing and threat modeling to identify security threats
7. Establish multiple layers of mitigation to stop system errors or failures at different levels or modules of the AI system
8. Understand the trade-offs among mitigation strategies, e.g., stricter input filtering blocks more malicious inputs but may also reject legitimate ones
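The repeatability assessment in item 5 can be sketched as a small harness that calls the same model on the same inputs several times and flags any drift. Here `predict` is a hypothetical, deterministic stand-in for a deployed model's inference call; a real harness would invoke the production endpoint instead.

```python
def predict(features):
    # Hypothetical stand-in for a deployed model's inference endpoint.
    weights = [0.4, 0.3, 0.3]
    return round(sum(f * w for f, w in zip(features, weights)), 6)

def repeatability_check(inputs, runs=5, tol=1e-6):
    """Run each input `runs` times; return any inputs whose outputs drift beyond tol."""
    drifting = []
    for x in inputs:
        outputs = [predict(x) for _ in range(runs)]
        if max(outputs) - min(outputs) > tol:
            drifting.append((x, outputs))
    return drifting  # an empty list means the model is repeatable on this sample

drifting = repeatability_check([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])
```

For stochastic models (e.g., sampling-based generation), the tolerance would be loosened to test for "similar" rather than identical outcomes, as item 5 allows.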
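Items 4 and 6 (malicious data and adversarial testing) can be illustrated by probing the system with malformed or hostile inputs and confirming each is rejected safely. The validation rules and `predict` wrapper below are illustrative assumptions, not a real system's API.

```python
import math

def predict(features):
    # Hypothetical inference wrapper with defensive input validation,
    # standing in for a hardened production endpoint.
    if len(features) != 3:
        raise ValueError("wrong input shape")
    if any(not isinstance(f, (int, float)) or isinstance(f, bool) for f in features):
        raise ValueError("non-numeric input")
    if any(math.isnan(f) or abs(f) > 1e6 for f in features):
        raise ValueError("out-of-range input")
    return sum(f * w for f, w in zip(features, [0.4, 0.3, 0.3]))

# Malicious and malformed probes the system should reject safely.
malicious_cases = [
    [float("nan"), 0.0, 0.0],           # NaN injection
    [1e12, 0.0, 0.0],                   # extreme magnitude
    ["1; DROP TABLE users", 0.0, 0.0],  # type-confusion / injection-style payload
    [0.1, 0.2],                         # truncated feature vector
]

accepted_bad_inputs = []
for case in malicious_cases:
    try:
        predict(case)
        accepted_bad_inputs.append(case)  # a finding: bad input got through
    except ValueError:
        pass  # rejected safely, as intended
```

Any entry left in `accepted_bad_inputs` is a test finding to feed back into threat modeling.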
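The layered mitigation in item 7 can be sketched as successive checkpoints, each able to stop an error before it propagates: input validation, contained inference, and an output sanity check with a safe fallback. The model, thresholds, and fallback value here are hypothetical illustrations.

```python
def layered_predict(features):
    # Layer 1: input validation stops malformed requests early.
    if len(features) != 3 or any(not isinstance(f, (int, float)) for f in features):
        return {"status": "rejected", "reason": "invalid input"}
    # Layer 2: contained inference, so a model fault cannot crash the service.
    try:
        score = sum(f * w for f, w in zip(features, [0.4, 0.3, 0.3]))
    except Exception:
        return {"status": "rejected", "reason": "inference error"}
    # Layer 3: output sanity check, falling back to a safe default score.
    if not 0.0 <= score <= 1.0:
        return {"status": "fallback", "score": 0.5}
    return {"status": "ok", "score": round(score, 6)}
```

Item 8's trade-offs show up directly in a design like this: a stricter layer 1 reduces the attack surface but may reject legitimate inputs, while a conservative layer 3 fallback limits harm at the cost of degraded answers.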