Module-5

Cards (47)

  • What steps should you take first?
    Map, plan and scope the AI project, and create a communication plan
  • What do regulators require evidence on?
    • Compliance and disclosure obligations
    • Explainability
    • Document risks and mitigation processes
    • Data and risk classifications
  • What do consumers need to see?
    • Transparency as to the functionality of AI
    • What data will be used and how
  • Map, plan and scope the AI project: Considerations
    1. Identify what data is needed for training the algorithm
    2. Determine where the data is originating and verify it is accurate
    3. Is the training data fully representative of the data the AI process will encounter?
    4. Is the data biased?
    5. Statistical sampling can help identify data gaps
    6. Determine applicable policies
    7. Identify sector-specific laws and laws specific to the training data used
    8. Document appropriate uses of your AI to prevent use for a different purpose than the AI was created for
    9. Evaluate what happens if AI performs poorly
    10. Assess the organization's risk tolerance
  • In many scenarios there will not be one perfect answer when developing AI with competing values
  • Why prioritize?
    Understand which areas your organization is going to prioritize, with consensus from the stakeholder group, and document that decision
  • What is a confusion matrix?
    A table comparing an AI system's predictions against actual outcomes, with four cells:
    • True positive
    • False positive
    • False negative
    • True negative
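The four cells above can be sketched as a small counting function; this is a minimal illustration for binary labels, not from the source material:

```python
def confusion_matrix(actual, predicted):
    """Return (TP, FP, FN, TN) counts for binary labels, where 1 = positive."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

# Illustrative labels: two correct positives, one false alarm, one miss, two correct negatives
actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_matrix(actual, predicted)
```

Which error type matters more (false positives vs. false negatives) depends on the AI's purpose and the harms identified below.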
  • Map, plan and scope the AI project: Identify risks and mitigations
    1. Probability and severity harms matrix
    2. HUDERIA risk index number
    3. Risk mitigation hierarchy
    4. Confusion matrix
    5. A four-square matrix that includes risks and weighted outcomes
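A probability and severity harms matrix (item 1 above) can be sketched as a simple score that ranks risks for mitigation. The 1-4 scales and the example risks are illustrative assumptions, not from the source:

```python
# Hypothetical 1-4 scales; organizations define their own levels
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"negligible": 1, "minor": 2, "major": 3, "critical": 4}

def risk_score(probability, severity):
    """Probability x severity: higher scores get mitigated first."""
    return PROBABILITY[probability] * SEVERITY[severity]

# Illustrative risks with assessed probability and severity
risks = [
    ("biased training data", "likely", "major"),
    ("model drift", "possible", "minor"),
]
# Rank highest-scoring risks first
ranked = sorted(risks, key=lambda r: -risk_score(r[1], r[2]))
```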
  • Once you have identified these risks and potential mitigations, one of the most critical things you can do is communicate them
  • It is important that your stakeholder group also works together to identify risks and mitigations in your algorithm and AI system. Use a repeatable process and multiple tools to identify these risks.
  • Consider performing a privacy impact assessment (PIA) on the underlying training data
  • Where possible, build on existing data protection impact assessments (or privacy impact assessments) to create an algorithmic impact assessment
  • An algorithmic impact assessment should cover the data issues, and also document decisions your stakeholder group makes.
  • Why? Testing and continuous validation are vital to ensure AI products are evaluated and mitigated for security, privacy, bias and safety issues while performing as intended.
  • What are the types of testing?
    • Accuracy
    • Robustness
    • Reliability
    • Privacy
    • Interpretability
    • Safety
    • Bias
  • Not every organization has the resources to evaluate every system
  • What to document? It is crucial to document the testing, its outcomes, and what you changed based on testing
  • Test and validate the AI system during deployment: Continuous evaluation
    1. Use cases may need differing amounts of detail, and some may require more security or privacy, depending on the purpose of the algorithm
    2. Include cases the AI has not previously seen; i.e., "edge" cases
    3. Include "unseen" data (data not part of the training data set)
    4. Include potentially malicious data in the test
    5. Conduct repeatability assessments to ensure the AI produces the same (or a similar) outcome consistently
    6. Conduct adversarial testing and threat modeling to identify security threats
    7. Establish multiple layers of mitigation to stop system errors or failures at different levels or modules of the AI system
    8. Understand trade-offs among mitigation strategies
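A repeatability assessment (item 5 above) can be sketched as re-running the system on identical inputs and checking that outputs stay within a tolerance. `model` here is a stand-in for any callable AI component; this is an illustrative sketch, not a prescribed method:

```python
def repeatability_check(model, inputs, runs=5, tolerance=1e-6):
    """Return True if every re-run's outputs stay within tolerance of the first run."""
    baseline = [model(x) for x in inputs]
    for _ in range(runs - 1):
        again = [model(x) for x in inputs]
        if any(abs(a - b) > tolerance for a, b in zip(baseline, again)):
            return False
    return True

# A deterministic stand-in model should pass the check
ok = repeatability_check(lambda x: 2 * x + 1, [0.0, 1.5, -3.0])
```

For non-deterministic systems, the tolerance (or a similarity measure appropriate to the output type) captures what "a similar outcome" means for the use case.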
  • Reviewing previous incidents can help you identify areas of risk
  • Test and validate the AI system during deployment: Documentation
    1. Create model cards or fact sheets
    2. Create counterfactual explanations
    3. Determine what level of impact requires remediation
    4. Appoint appropriate individuals or teams to address
    5. Determine method of deployment
  • Why is it important to document decisions?
    It is important to document all decisions that your stakeholder group makes during the development life cycle of an algorithm, whether they address regulatory requirements or not
  • Why use standard documents and templates?
    • Help your stakeholder group evaluate and document decisions along the way
  • What documents should you create?
    • Model cards
    • Fact sheets
    • Counterfactual explanations
    • Details on what new or different input may affect the output of the AI process
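A model card can be sketched as a minimal structured record. The fields below follow common model-card practice and the field values are purely illustrative assumptions, not from the source:

```python
# Hypothetical model card as a plain dictionary
model_card = {
    "model_name": "loan-approval-v2",          # illustrative name
    "intended_use": "pre-screening consumer loan applications",
    "out_of_scope": ["credit-limit decisions", "employment screening"],
    "training_data": "2019-2023 application records, PII removed",
    "metrics": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "known_limitations": ["under-represents applicants under 21"],
}
```

The "out_of_scope" field documents appropriate uses, helping prevent the AI from being used for a purpose other than the one it was created for.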
  • How do you remediate adverse AI-related impacts?
    1. Determine what level of impact requires remediation
    2. Appoint appropriate individuals or teams to address
  • How to deploy your AI?
    • Determine what platform will be used (cloud, onsite, hybrid)
    • Determine if infrastructure will support deployment
  • Thorough testing and continuous validation are essential for ensuring the reliability, security and performance of AI systems
  • Understanding the associated risks and tailoring testing approaches accordingly is crucial, taking into account factors such as algorithm type, third-party tools, regulations, industry-specific considerations, and the AI's intended purpose
  • Compliance with regulatory standards and leveraging responsible AI services further enhances development, testing and documentation to promote responsible and ethical AI practices while mitigating potential risks
  • To govern AI effectively, it is crucial to understand its purpose and associated risks
  • If the AI has a significant impact, regular monitoring and fine-tuning are necessary
  • Why maintain an AI inventory with risk scores?
    Helps organizations review and allocate resources appropriately
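An AI inventory with risk scores can be sketched as a list of system records with a function that flags high-risk systems for review first. The systems, scores, and threshold are illustrative assumptions:

```python
# Hypothetical inventory entries; scores could come from a harms matrix
inventory = [
    {"system": "resume screener", "risk_score": 8, "last_review": "2024-01"},
    {"system": "chat assistant", "risk_score": 5, "last_review": "2024-03"},
    {"system": "spam filter", "risk_score": 2, "last_review": "2023-11"},
]

def review_queue(inventory, threshold=4):
    """Return systems at or above the threshold, highest risk first."""
    flagged = [e for e in inventory if e["risk_score"] >= threshold]
    return sorted(flagged, key=lambda e: -e["risk_score"])
```

Ordering the queue by risk score lets the organization allocate limited review resources to the systems most likely to cause harm.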
  • Monitoring for security risks should align with existing protocols and standards, though emerging concerns like data poisoning require evolving best practices
  • Using AI for new purposes or incorporating new datasets introduces new risks, highlighting the need for documentation and periodic snapshots of the algorithm
  • Define the elements of an Incident response plan?
    1. Identification
    2. Reporting
    3. Mitigation
    4. Communication
  • Human control over shutting down underperforming systems is important and may be mandated by legal standards
  • Continuously improve the system by retraining with new data as needed and with human input and feedback
  • It is important to also understand what your organization's security protocols are and what industry-specific standards apply
  • One drawback of just using existing security protocols is that they often are not AI-specific
  • Some AI-specific risks that your organization might need to consider include model inversion, extraction, poisoning and evasion
  • What is required when using AI for a new purpose it was not originally modeled for?
    Documentation is needed