Module-4

Cards (41)

  • Interoperability of AI risk management
    • Review existing risk management programs to ensure the new AI risk strategies can be incorporated
    • Understand what other risk management programs the organization uses and what harms or risks those programs are meant to mitigate
    • Determine if the planned AI use increases existing risk or introduces new risks that those programs must adjust to address
    • Create new risk management processes to address unique AI risks
    • Evaluate where risks intersect and verify programs function together to ensure efficiency and protection against potential harms
  • Risks that AI algorithms and models pose
    • Security and operational risk
    • Privacy risk
    • Business risk
  • Security and operational risks from generative AI
    • Hallucinations
    • Deepfakes
    • Training data poisoning
    • Data leakage
    • Filter bubbles/echo chambers
  • General security risks from AI
    • Concentration of power in a few individuals or organizations, leading to the erosion of individual freedom
    • Overreliance on AI, leading to a false sense of security
    • Vulnerability to adversarial machine learning attacks
    • Misuse of AI
    • Transfer learning
  • Operational risks of running an AI algorithm
    • High costs (hardware, storage, skilled professionals)
    • Environmental impact (increased carbon footprint, resource utilization)
    • Data corruption and poisoning
  • Privacy risks
    • Data persistence
    • Data repurposing
    • Spillover data
    • Challenges with informed consent, opt-out, data collection limits, data deletion
  • Threats of generative AI
    • Threat to democracy
    • Misuse of pattern analysis
    • Profiling/tracking
    • Overreliance on predictive analytics
  • Business risks to the organization
    • Bias and discrimination
    • Job displacement
    • Dependence on AI vendors
    • Vagueness around liability and accountability
    • Lack of transparency
    • Intellectual property infringement
  • Regulation and legal risks
    • Compliance with laws and regulations
    • Liability for harm caused by the AI system
    • Intellectual property disputes
    • Human rights violations
    • Reputational damage
    • Socioeconomic inequality
    • Social manipulation
    • Opaque decision-making
    • Lack of human oversight
  • Businesses are racing to be the first in the marketplace, but this can result in the release of unethical, irresponsible and potentially malicious AI systems into the world
  • Our biases, morals and ethical values are mirrored in the AI systems we develop, which can affect AI decision-making and have significant consequences for the data subject
  • Aligning AI risk management strategies
    • Incorporate AI into existing risk management strategies (security/operational risk, privacy risk, business risk)
    • Or adopt a holistic AI risk management strategy
  • Harms taxonomy
    A list of negative consequences that could befall the data subject or organization if certain pieces of information are leaked or misused
  • Approaches to identifying privacy harms
    • Panopticon: Privacy harm must involve the literal unwanted sensing of visual or other information by a human being
    • Ryan Calo: Subjective privacy harms (internal to the person harmed, e.g., the perception of unwanted observation) and objective privacy harms (external, tangible adverse consequences)
    • Daniel Solove: Taxonomy of privacy harms (information collection, information processing, information dissemination, invasion)
  • Privacy harms taxonomy
    A framework for identifying and understanding the consequences of privacy rights infringements, for individuals and for society as a whole
  • Privacy harms taxonomy
    • Enhances empathy for data subjects, i.e., the customers and people from whom personal data is collected
    • Once harms are broken down, organizations can perform targeted control selection to drive down a specific type of risk (security, privacy, business); a mapping sketch follows Citron and Solove's categories below
  • Approaches to identifying privacy harms
    • Panopticon
    • Ryan Calo's definition of privacy harm
    • Citron and Solove's taxonomy
  • Panopticon definition of privacy harm
    Must involve the literal unwanted sensing of visual or other information by a human being; i.e., data is leaked and a human being has actively encountered that data
  • Ryan Calo's privacy harms
    • Subjective privacy harms: internal to the person harmed; flow from the perception of unwanted observation; originate from the right to be left alone; can be acute or ongoing; can apply to one or many individuals
    • Objective privacy harms: external to the person harmed; arise not from the feeling of being watched but from the forced or unanticipated use of information about a person against that person; can occur when personal information is used to justify an adverse action against a person (e.g., being denied a loan)
  • Citron and Solove's privacy harm categories
    • Physical
    • Reputational
    • Relationship
    • Economic
    • Discrimination
    • Psychological, including emotional distress and disturbance
    • Autonomy, including coercion, manipulation, failure to inform, thwarted expectations, not meeting social norms, lack of control, chilling effects
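  • To make the targeted control selection described above concrete, the sketch below maps Citron and Solove's harm categories to this module's three risk domains and to candidate controls. The domain assignments and control names are illustrative assumptions, not a prescribed control set.

```python
# Illustrative mapping from Citron and Solove's harm categories to the
# risk domains used in this module, so controls can be selected to drive
# down one specific type of risk. Assignments and controls are examples.

HARM_TO_RISK_DOMAIN = {
    "physical": "security/operational",
    "reputational": "business",
    "relationship": "privacy",
    "economic": "business",
    "discrimination": "business",
    "psychological": "privacy",
    "autonomy": "privacy",
}

CANDIDATE_CONTROLS = {
    "security/operational": ["access controls", "training-data poisoning checks"],
    "privacy": ["data minimization", "consent management"],
    "business": ["bias testing", "vendor due diligence"],
}

def controls_for_harm(harm: str) -> list[str]:
    """Return candidate controls for the risk domain a harm falls under."""
    return CANDIDATE_CONTROLS[HARM_TO_RISK_DOMAIN[harm]]

print(controls_for_harm("discrimination"))  # ['bias testing', 'vendor due diligence']
```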
  • As with any evolving technology, AI governance practitioners must balance the benefits of AI use against the potential harms and risks to users. Risk management strategies must evolve to include AI, and new procedures may be necessary to address the unique risks AI poses. By examining new AI technologies carefully to determine areas of vulnerability and mapping them to areas of potential harm, an organization can create strategies to mitigate risks and make the necessary changes to existing risk management strategies.
  • Principles of AI risk management
    • Adopt a pro-innovation mindset
    • Ensure planning and design is consensus-driven
    • Ensure team is outcome-focused
    • Ensure the framework is law-, industry- and technology-agnostic
    • Adopt a non-prescriptive approach to allow for intelligent self-management
    • Ensure governance is risk-centric
    • Create policies to manage third-party risk, to ensure end-to-end accountability
  • Pro-innovation mindset
    Does not refer to "innovation for the sake of innovation", be prepared for changes, new products and possibilities, will the new product fill a gap or meet a need, does it align with principles, is it fiscally responsible
  • Consensus-driven planning and design
    Does not refer to "best two out of three", have you involved all stakeholders, did you include people from various teams across the organization, does each stakeholder understand the needs vs risks
  • Outcome-focused team
    Does not refer only to the "bottom line." Does the team understand the desired outcome for the AI product? Will the product serve the purpose for which it is being created, designed, utilized or applied? Is there a better way to achieve the goal than the one being proposed?
  • Law-, industry- and technology-agnostic framework
    The framework should be interoperable across systems and not biased toward a specific law, industry or technology, nor toward a specific business process or practice. Ideally, it should solve a business problem, explain why an approach was taken and remain flexible.
  • Non-prescriptive approach
    Risks should be approached in a context-specific, use-case-driven manner to allow for adjustment and evolution as needs and uses change
  • Risk-centric governance
    Have you considered the risk factors and aligned your governance accordingly?
  • Policies to manage third-party risk
    Identify the purpose for AI and ensure the program meets the need. Determine who needs access to the AI programs and specify permitted uses. Identify the risks from the specific program and use, and work to mitigate those risks. Be clear on who owns AI process output, especially once the contract or use is complete or terminated.
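  • These policy points can be captured as a structured third-party engagement record, as in the minimal sketch below. The field names and example values are illustrative assumptions, not required fields.

```python
# A minimal sketch of a third-party AI engagement record covering the
# policy points above. All field names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class ThirdPartyAIEngagement:
    vendor: str
    purpose: str                  # why the AI is used; does the program meet the need?
    authorized_users: list[str]   # who needs access, with use specified
    identified_risks: list[str]   # risks from this specific program and use
    mitigations: list[str]        # how those risks are being mitigated
    output_owner: str             # who owns AI process output now
    owner_after_termination: str  # ...and once the contract or use ends

engagement = ThirdPartyAIEngagement(
    vendor="ExampleVendor",  # hypothetical vendor
    purpose="resume-screening assistance",
    authorized_users=["hr-recruiting team"],
    identified_risks=["bias and discrimination", "data leakage"],
    mitigations=["bias testing", "contractual data-handling terms"],
    output_owner="organization",
    owner_after_termination="organization",
)
```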
  • Risk assessment is critical to the successful governance of AI systems, but is context-specific as to the owner and operator, specific industry and use case, potential social impacts, timing and use of AI, and jurisdictional controls.
  • Criteria for determining if the outcome of developing and using AI is appropriate
    Business purpose and planned uses of the AI, potential harms including false positive and false negative predictions, descriptions of the data used to train the AI, functionality, performance metrics, and third-party risks
  • The EU model classifies AI into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk.
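  • A minimal sketch of those four levels, with example use cases commonly cited in discussions of the EU AI Act; treat the specific assignments as illustrative rather than as legal classification.

```python
# Illustrative sketch of the EU model's four AI risk levels and example
# use-case assignments. Examples follow commonly cited EU AI Act
# illustrations and are not legal classifications.

from enum import Enum

class EURiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (e.g., conformity assessment) before use"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "largely unregulated"

EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": EURiskLevel.UNACCEPTABLE,
    "CV-screening tool used in hiring": EURiskLevel.HIGH,
    "customer-service chatbot": EURiskLevel.LIMITED,
    "spam filter": EURiskLevel.MINIMAL,
}

for use_case, level in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {level.name} -> {level.value}")
```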
  • For organizations developing and using AI, allocating roles, responsibilities and authority to the relevant stakeholders and providing the resources they need is essential. The NIST AI Risk Management Framework can be used as a guide.
  • Integrating AI governance principles into an organization involves an understanding of regulatory requirements, the organization's risk tolerance and technological capabilities, and an awareness of the industry standards. Because these considerations can change, risk assessments should be performed on a regular basis. Knowing the right questions to ask and who to work with will help ensure that the organizational principles are incorporated into risk management programs.
  • Establishing AI governance and strategy
    • Advocate for AI governance support from senior leadership and tech teams
    • Establish organizational risk strategy and tolerance
    • Develop central inventory of AI and ML applications and repository of algorithms (a sketch of an inventory entry follows this list)
    • Develop responsible AI accountability policies and incentive structures
    • Understand AI regulatory requirements
    • Set common AI terms and taxonomy for the organization
    • Provide knowledge resources and training to the enterprise to foster a culture that continuously promotes ethical behavior
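  • As referenced above, a minimal sketch of one entry in a central AI/ML inventory. The fields are assumptions about what such an inventory might track; adapt them to the organization's own taxonomy and regulatory requirements.

```python
# A minimal sketch of an entry in a central AI/ML application inventory.
# Field names and the example record are hypothetical.

from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    name: str
    owner: str            # accountable person or team
    role: str             # "developer", "deployer" or "user"
    model_type: str       # e.g., "LLM", "gradient-boosted trees"
    training_data: str    # description of the data used to train the AI
    intended_use: str     # business purpose and planned uses
    risk_level: str       # per the organization's (or the EU's) risk tiers
    last_assessment: str  # date of the most recent risk assessment

inventory: list[AIInventoryEntry] = [
    AIInventoryEntry(
        name="support-chatbot",  # hypothetical application
        owner="customer-experience team",
        role="deployer",
        model_type="LLM",
        training_data="vendor-trained; fine-tuned on support transcripts",
        intended_use="first-line customer support",
        risk_level="limited",
        last_assessment="2024-01-15",
    ),
]
```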
  • Responsibilities differ between companies that develop AI systems and those that use or deploy them
    • Determine if you are a developer, deployer or user
    • Establish governance processes for all parties
  • Establish and understand the roles and responsibilities of AI governance people and groups
    • Chief Privacy Officer
    • Chief Ethics Officer
    • Office for Responsible AI
    • AI Governance Committee
    • Legal advisors and department
    • Ethics Board
    • Architecture Steering Groups
    • AI Project Managers
  • Steps to involve key stakeholders
    1. Assist personnel in understanding their specific roles, where to seek assistance, and how to empower themselves in the AI development and release process
    2. Communicate with researchers, data scientists, AI and ML engineers and non-AI engineers
    3. Decide who will maintain and update a central inventory of AI applications and a repository of algorithms
    4. Establish organizational risk strategy and tolerance
  • Types of governance models
    • Highly centralized governance
    • Hybrid model (combination of centralized and local governance)
    • Decentralized structure (local governance)
  • AI assessment processes
    1. Use external frameworks, organizational publications and academic publications
    2. Focus on key AI risks and needs based on the organization's AI principles, values and any standards developed within the organization
    3. Compare the assessment against existing assessments, such as privacy reviews