Module-2

  • The potential harms posed by AI are significant and may impact individuals, groups, society, organizations and the environment
  • Potential risks can be overlooked and inadvertently created through the development and use of AI
  • Every organization should have a baseline AI ethics code and processes in place to identify, assess and mitigate the potential harms of AI use, procurement, development and deployment
  • Before implementing AI in an organization, AI governance professionals must understand the potential reputational, cultural, economic, acceleration, legal and regulatory risks and harms
  • Machine learning and AI pose risks already well understood in existing sectors and practices, but those risks can be exacerbated by the scale, scope and speed at which ML and AI systems process data
  • Because ML and AI systems continue to learn and evolve, it can be difficult to anticipate what form risks may take, particularly when the risks are novel
  • It is essential to apply AI principles and ethics to the development and testing of ML and AI to mitigate potential harms
  • Who is affected by core risks and harms posed by AI systems?
    • Individuals
    • Groups
    • Society
    • Companies/Institutions
    • Ecosystems
  • What is the concern with bias in AI systems?
    • Can cause harm to a person's civil liberties, rights, safety and economic opportunity
    • Individuals developing the systems can have biases of their own; this should be addressed throughout the AI system development life cycle
  • Types of bias in AI systems
    • Implicit bias
    • Sampling bias
    • Temporal bias
    • Overfitting to training data
    • Edge cases and outliers
    • Noise
    • Outliers
  • Implicit bias
    Discrimination or prejudice toward a particular group or individual
  • Sampling bias
    The data is skewed toward a subset of a group, so the model may favor that subset over the larger group (see the sketch below)
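A minimal sketch of sampling bias using synthetic data (numpy only; the groups, sizes and values are illustrative assumptions, not from the source): a statistic computed from a sample skewed toward one group misrepresents the full population.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic population: a majority group and a minority group
# with different underlying values.
group_a = rng.normal(60, 5, 8_000)  # majority group
group_b = rng.normal(40, 5, 2_000)  # minority group
population = np.concatenate([group_a, group_b])

# A random sample reflects the population; a sample drawn only
# from group A (sampling bias) favors that subset.
random_sample = rng.choice(population, 500, replace=False)
biased_sample = rng.choice(group_a, 500, replace=False)

print(f"true population mean: {population.mean():.1f}")    # ~56
print(f"random sample mean:   {random_sample.mean():.1f}")  # ~56
print(f"biased sample mean:   {biased_sample.mean():.1f}")  # ~60
```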
  • Temporal bias
    A model is trained and functions properly at the time, but may not work well at a future point as the underlying data changes, requiring new ways of addressing the data (see the sketch below)
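A minimal sketch of temporal bias with synthetic data (numpy only; the drifting linear relationship is an illustrative assumption): a model fitted to older data degrades once the underlying relationship changes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a line while the underlying relationship is y ~ 2x.
x_old = rng.uniform(0, 10, 100)
y_old = 2 * x_old + rng.normal(0, 1, 100)
slope, intercept = np.polyfit(x_old, y_old, 1)

# Later the relationship drifts to y ~ 3x; the old model no longer fits.
x_new = rng.uniform(0, 10, 100)
y_new = 3 * x_new + rng.normal(0, 1, 100)

mse_old = np.mean((slope * x_old + intercept - y_old) ** 2)
mse_new = np.mean((slope * x_new + intercept - y_new) ** 2)
print(f"MSE during training period: {mse_old:.2f}")  # low
print(f"MSE after drift:            {mse_new:.2f}")  # much higher
```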
  • Overfitting to training data
    A model performs well on its training data but fails on new data because it is fitted too closely to the training data (see the sketch below)
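A minimal sketch of overfitting with synthetic data (numpy only; the sine curve and polynomial degrees are illustrative assumptions): a high-degree polynomial fits the training points almost perfectly but typically does worse on held-out data than a simpler model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying curve, split into train and test.
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = rng.uniform(-1, 1, 20)
y_test = np.sin(np.pi * x_test) + rng.normal(0, 0.2, 20)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The degree-12 fit hugs the training noise: lower train MSE,
    # typically higher test MSE than the degree-3 fit.
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```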
  • Edge cases and outliers
    Any data outside the boundaries of the training dataset (e.g., edge cases can be errors such as data that is incorrect, duplicative or unnecessary)
  • Noise
    Data that negatively affects the model's learning
  • Outliers
    Data points outside the normal distribution of the data; they can affect how the model operates and its effectiveness (see the sketch below)
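A minimal sketch of outlier detection with synthetic data (numpy only; the z-score threshold of 3 is a common but illustrative assumption): a few extreme points pull a summary statistic away from the bulk of the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mostly well-behaved measurements plus a few extreme values.
data = np.concatenate([rng.normal(50, 5, 200), [120.0, 135.0, -40.0]])

# Flag points more than 3 standard deviations from the mean (z-score).
z = (data - data.mean()) / data.std()
is_outlier = np.abs(z) > 3

print(f"mean with outliers:    {data.mean():.2f}")
print(f"mean without outliers: {data[~is_outlier].mean():.2f}")
print(f"flagged outliers:      {data[is_outlier]}")
```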
  • What are the individual harms from bias and discrimination in AI systems?
    • Employment and hiring discrimination
    • Insurance and social benefit discrimination
    • Housing discrimination
    • Education discrimination
    • Credit discrimination
    • Differential pricing of goods and services
  • Privacy concerns with AI systems
    • Personal data used as part of AI training data
    • Appropriation of personal data for model training
    • Inference: an AI model making predictions or decisions about individuals, which can reveal or create new personal data
    • Lack of transparency of use
    • Inaccurate models
  • Economic opportunity and job loss from AI
    • AI can create some job opportunities but also has the potential to cause job loss
    • AI being used to perform jobs previously handled by humans
    • AI-driven discriminatory hiring practices
  • What are the group harms from AI systems?
    • Facial recognition algorithms
    • Mass surveillance
    • Civil rights
  • Societal harms from AI systems
    • Spread of disinformation
    • Ideological bubbles or echo chambers
    • Deepfakes
    • Safety
  • Company/institutional harms from AI systems
    • Reputational
    • Cultural
    • Economic
    • Acceleration
    • Legal and regulatory
  • Ecosystem harms from AI systems include high energy consumption and carbon emissions during training
  • AI can also be used to help the environment, such as in self-driving cars, agriculture, disaster response, and weather forecasting
  • Identifying potential harms and eradicating or mitigating them are essential for AI and machine learning (ML) use
  • Failure to identify and address harms can have a catastrophic impact on an organization, whether reputational, cultural, economic, acceleration or legal and regulatory
  • Identifying and managing risks of harm is critical to building and maintaining trust in AI at the organizational level and beyond
  • What are the Characteristics of trustworthy AI systems?
    • Human-centric
    • Accountable
    • Transparent
  • Human-centric AI
    AI should amplify human agency; it should have a positive, not a negative, impact on the human condition
  • Accountable AI
    Organizations ultimately need to be responsible for the AI they deliver, irrespective of the number of contributors
  • Transparent AI
    AI must be understandable to the intended audience (e.g., technical, legal, the user)
  • AI can produce a huge number of potential opportunities, such as faster and more accurate results across a broader range of data and incredibly accurate medical assessments
  • Trustworthy AI
    • Operates in an expected, legal and fair manner
    • Human-centric
    • Accountable
    • Transparent
  • In medical assessments, AI can be incredibly accurate, more so than humans, particularly when evaluating scans and other medical results
  • AI can also help with legal predictions, and can review case law, issues and regulations far more broadly, quickly and accurately than humans