AI-GP_Q&A

Cards (116)

  • AI and machine learning are related, but not the same thing. Machine learning is a technique for achieving AI. It is the practice of using algorithms to review data, learn from it, and then make predictions or decisions rather than being explicitly programmed to perform a particular task. AI refers to machines that perform tasks ordinarily requiring human intelligence. In simple terms, AI can be thought of as the result (machines exhibiting intelligence), and machine learning as a process by which that result can be achieved (teaching the machine).
  • One of the potential negative impacts of AI use on economic opportunity is that job opportunities may fail to reach key demographic groups due to the use of AI-driven tools for job marketing and hiring.
  • The FIPs and OECD Guidelines are primarily focused on data collection, use, protection and the associated individual rights relative to personal data; many follow-on sets of principles have applied them in other contexts, such as AI governance.
  • User interviews and market research are two methods used to help identify a business problem that can be solved using AI.
  • Differential privacy
    Technique that protects information about training data from being revealed by "blurring" data points using an algorithm to generate values that remain meaningful yet nonspecific
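The "blurring" described above is commonly implemented with the Laplace mechanism. Below is a minimal sketch, assuming a simple counting query (sensitivity 1); the epsilon value, data and function names are illustrative, not a production implementation.

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) noise via inverse-CDF transform sampling
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count: the true count is 'blurred' with
    noise calibrated to the query's sensitivity (1 for a count) and epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Illustrative query: how many individuals are over 40?
ages = [34, 29, 41, 52, 38, 45, 27, 60]
noisy_answer = dp_count(ages, lambda a: a > 40, epsilon=0.5)
```

Each released answer remains meaningful in aggregate (the noise averages out over many queries) yet nonspecific about any single individual's data.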
  • Once deployed, AI systems require continuous monitoring and maintenance to ensure the model adapts to changes in the environment, especially changes in data.
  • Data wrangling
    Process of taking raw data and transforming it into a useful format
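As a concrete illustration of the definition above, here is a minimal, hypothetical sketch: raw records with inconsistent formatting are transformed into a uniform, analysis-ready structure. The field names and cleaning rules are invented for the example.

```python
from datetime import datetime

# Raw records: mixed casing, stray whitespace, missing values, two date formats
raw = [
    {"name": "  Alice ", "age": "34",  "signup": "2023-01-05"},
    {"name": "BOB",      "age": "n/a", "signup": "05/01/2023"},
]

def wrangle(record):
    """Transform one raw record into a uniform, typed structure."""
    name = record["name"].strip().title()
    age = int(record["age"]) if record["age"].isdigit() else None
    signup = None
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):       # try each known date format
        try:
            signup = datetime.strptime(record["signup"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    return {"name": name, "age": age, "signup": signup}

clean = [wrangle(r) for r in raw]
```

After wrangling, every record has a normalized name, a numeric (or missing) age and an ISO-format date, ready for downstream analysis or model training.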
  • Data collected by means of AI systems raises issues such as whether informed consent is freely given, the ability to opt out and limits on data collection.
  • Organizations must develop policies and processes to assess risk levels and then allocate their resources accordingly; i.e., focusing resources on high- and medium-risk rated AI.
  • Examples of how organizational management can demonstrate AI risk management and oversight
    • Support AI risk management roles at all levels of the organization
    • Ensure appropriate authority and resources to perform risk management are allocated throughout the organization
    • Determine and document roles, responsibilities and delegation of authorities to personnel involved in the design, development, deployment, assessment and monitoring of the AI
    • Ensure AI solutions provide sufficient information to assist in making informed decisions and document accordingly
    • Allocate roles, responsibilities and authority to relevant stakeholders
  • Role of risk assessments in AI governance
    Risk assessments help identify which AI systems (or parts of AI systems) need additional governance measures. The severity of the potential harm is multiplied by the probability of the risk occurring, which gives the level of risk (i.e., high, medium or low). Methodology: severity of harm × probability of occurrence = risk
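The methodology above can be expressed directly in code. The 1-5 scales and the bucket thresholds below are illustrative assumptions; organizations define their own.

```python
def risk_level(severity, probability):
    """Severity of harm (1-5) multiplied by probability of occurrence (1-5)
    gives a risk score, bucketed into high / medium / low.
    Thresholds here are illustrative, not prescribed."""
    score = severity * probability
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A severe harm that is quite likely is rated high risk:
assert risk_level(severity=5, probability=4) == "high"
# A minor harm that is unlikely is rated low risk:
assert risk_level(severity=2, probability=2) == "low"
```

Bucketing the score into high/medium/low is what lets an organization allocate governance resources accordingly, as the preceding cards describe.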
  • Leveraging existing compliance processes will help improve adoption and minimize duplication of effort.
  • When starting to build an AI governance program, practitioners should start slowly and build out, leveraging existing structures.
  • The most important aspect of establishing a practical and responsible AI governance program is understanding organizational structure and culture.
  • It is important to document stakeholder decisions, senior-level approvals, prioritization and appropriate uses of the algorithm to limit potential misuse and liability.
  • An AI governance team should document its conversations, including:
    • How often to monitor the AI
    • Appropriate uses of the algorithm
    • Risks and their mitigations
    • Snapshots of the algorithm, so a previous iteration can be restored
    • How the AI interacts with other systems
  • Your company could be subject to U.S. copyright claims for models purchased from a third party.
  • Some laws currently in force apply directly to AI technology.
  • An AI conformity assessment may be required even if the AI application does not process personal data.
  • Only AI systems that are designated as high-risk under the EU AI Act fall within the scope of the updated EU Product Liability Directive.
  • Ways the EU's AI Liability Directive will make it easier for victims to receive compensation for AI-induced damages
    • Requiring companies to build AI systems which are transparent and explainable by design
    • Requiring companies to disclose technical documentation about their high-risk AI systems in legal proceedings
    • Requiring companies to maintain documentation about their high-risk AI systems' safety controls, testing and monitoring
    • Requiring companies to register their high-risk AI systems in a public database
  • U.S. product liability law classifies AI systems as products.
  • The European Union has chosen a "rights-based" rather than a "risk-based" approach to AI governance.
  • Recommender systems used by social media platforms that amplify misinformation
  • Facial recognition systems used by law enforcement authorities in public in "real-time," without judicial authorization
  • Prohibited AI practice (unacceptable risk) under the EU AI Act
    Social credit scoring systems that are used to evaluate how trustworthy people are
  • AI system subject to transparency obligations (limited risk) under the EU AI Act
    Deepfake videos used during election campaigns
  • High-risk AI system under the EU AI Act
    AI systems used to establish priority in the dispatching of emergency services
  • Not a high-risk AI system under the EU AI Act
    AI used in the early stage of pharmaceutical drug discovery
  • Providers of high-risk AI systems may process special category personal data (e.g., ethnicity, religion) for the purposes of monitoring, detecting and correcting bias, but only where strictly necessary and subject to appropriate safeguards.
  • Rights-based approach to AI governance
    The European Union
  • Risk-based approach to AI governance
    Singapore, Canada, China
  • A key area of general agreement among global approaches to AI governance is a focus on human-centric protections.
  • Singapore's AI Verify toolkit and framework for AI governance allows individual AI systems to demonstrate that they perform as claimed against their stated performance metrics.
  • Canada's definition of "high-impact systems" does not reflect the degree of transparency demonstrated by the system
  • Four risk categories proposed by the European Union's AI Act
    Unacceptable risk, high risk, limited risk, minimal risk
  • Explainable AI (XAI)
    Approach to AI that focuses on making AI systems' decision-making processes more transparent and understandable to human users
  • Confusion Matrix
    Tool used to evaluate the performance of a classification model
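A confusion matrix can be computed in a few lines. The sketch below assumes binary labels; the sample predictions are invented for the example.

```python
def confusion_matrix(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if p == positive:
            if t == positive:
                tp += 1   # predicted positive, actually positive
            else:
                fp += 1   # predicted positive, actually negative
        else:
            if t == positive:
                fn += 1   # predicted negative, actually positive
            else:
                tn += 1   # predicted negative, actually negative
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

# Invented example: six predictions from a binary classifier
m = confusion_matrix([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
precision = m["tp"] / (m["tp"] + m["fp"])
recall = m["tp"] / (m["tp"] + m["fn"])
```

Metrics such as precision and recall fall straight out of the four cells, which is why the matrix is the standard starting point for evaluating a classification model.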
  • Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA)
    Framework developed by the Council of Europe to assess the impact of AI systems on human rights, democracy, and the rule of law