Module-6

Cards (47)

  • Oliver Patel: 'It's really important for organizations and professionals working in this [AI] space to be aware of how these laws are being reformed, what it means for them, and most importantly, to be aware of how to effectively mitigate the risks of the AI systems to make these potential cases [liability of harm AI systems may give rise to] less likely.'
  • AI technology and AI-based products
    • May not be currently under a specific regulatory framework, but they do not exist in a vacuum
    • Exist in the same legal and regulatory context other technologies navigate; can be subject to complex regulatory frameworks
    • Regulatory requirements should be accounted for throughout the AI development lifecycle
    • Accounting for these requirements ensures appropriate controls are developed to address risks and applicable regulatory obligations
    • Similar considerations should be made when assessing the implementation and use of AI tools in an organization
  • AI adoption generally falls within two broad categories
    • Perform an existing function in a new way
    • Accomplish a new process that has not been done or was not possible before AI
  • Perform an existing function in a new way
    • Existing regulatory requirements that would normally apply to that function continue to apply to the updated, AI-driven process
    • Using AI does not allow you to bypass or ignore applicable laws and regulations
  • Accomplish a new process that has not been done or was not possible before AI
    • Inquire if existing regulatory requirements may apply to this new process
    • Assess what laws may be in scope, what reviews may be required, what risks AI may pose and what controls can be implemented to mitigate risks and ensure compliance
    • General consumer protection and product safety rules continue to be relevant and applicable
  • Highly regulated industries: financial services, health care, transportation, employment and education
  • Copyright laws and AI
    • How do the principles and protections of copyright laws apply to AI?
    • Can the output of an AI be considered original and therefore warrant copyright protection?
    • If AIs cannot be inventors and develop patentable inventions, how much human intervention/participation is necessary to meet the threshold? Where is the line and how is it measured?
  • Other U.S. laws to be interpreted to determine how and when they apply to AI technologies
    • Employment: Title VII and EEOC regulations
    • Consumer finance: Equal Credit Opportunity Act, the Fair Credit Reporting Act
    • SR 11-7: Supervisory guidance issued by the U.S. Federal Reserve on model risk management
    • OSHA's guidelines for robotics safety and "hazard analysis"
    • The Food and Drug Administration's (FDA) systematic approval processes for software as a medical device
  • Federal Trade Commission (FTC) and other U.S. agencies: 'Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice's Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections.'
  • FTC's authority over AI
    • Broad authority over general commercial operations to prevent unfair or deceptive practices
    • Apply to privacy and security concerns related to programs and algorithms (will continue to apply to AI)
    • AI-specific interpretations of these standards will likely be developed and applied over time
  • The EU Digital Services Act; local intellectual property and competition laws; upcoming AI regulations like the EU AI Act
  • EU Digital Services Act
    • Overlaps the GDPR regarding transparency
    • DSA increases overall transparency related to online platforms
    • Recommender system (ML that recommends products to consumers): Online platforms should inform users on how recommender systems impact how information is displayed
    • Online advertising: recipients should be able to access information in the interface where an ad is presented, about parameters used to present the ad (logic used; whether based on profiling)
  • Product safety laws
  • Intellectual property laws and AI
  • Data protection laws and security standards
    • Privacy laws will apply to most consumer-facing AI systems, to some degree: GDPR, CCPA and other U.S. state privacy laws, biometrics laws (Illinois' Biometric Information Privacy Act), security laws and standards, breach laws, and other laws and regulations focused on the collection and use of personal data
    • Notice and consent models for personal control of one's data will be further tested by AI technologies
    • Accounting for traditional privacy principles and practices (e.g., accuracy, notice and the data subject rights of access or deletion)
    • Legal requirements and application of data subject rights is complex with AI systems that were trained on data sets that the system no longer holds or can access
    • Laws that account for issues such as automated decision-making (GDPR) were designed with an awareness of the existence and potential impact of AI, but not necessarily an in-depth understanding of it
  • Third-party relationships
    • The following may not hold up in the context of AI and AI-based tools:
      • Existing precedent and practice around contracting language regarding liability limits
      • Past demarcations of liability and warranty controls based on the "handing off" concept typical of software products and contracts
  • Intersection between the GDPR and AI
    • GDPR is intended to be technology agnostic to adapt to evolving technologies over time (including AI)
    • GDPR is focused on the governing and processing of personal information
    • AI programs process information that can include personal information (but does not necessarily include it)
    • The principles of GDPR are underpinned by a series of requirements that honor data subject rights: lawfulness, fairness and transparency, purpose limitations, data minimization, accuracy, storage limitations, integrity, confidentiality and accountability
    • Key articles of the GDPR that intersect with AI: Article 22 (Automated decision-making), Article 35 (Data protection impact assessments, when required in relation to high-risk/important processing), Recital 26 (Techniques for pseudonymization and anonymization of data)
  • Automated decision-making
    • GDPR imposes a general prohibition on solely automated decision-making
    • Applies to decisions that produce legal effects concerning an individual, or similarly significant effects on the individual
    • Exceptions exist, e.g., where the decision is necessary for a contract or based on explicit consent, but the prohibition is generally broad
    • Individuals have the right to human intervention in certain circumstances
    • A legal effect, or significant impact, is a broad concept analyzed on a case-by-case basis and is still being understood through court cases and how different organizations apply these principles
    • Consent: For GDPR compliance, consent must be explicit, freely given and informed; there must also be a means to opt out
    • Provide broad interpretations of fairness, lawfulness and transparency (e.g., making data subjects aware they will be talking to a chatbot so they know the implications of continuing and sharing information)
    • Data subject rights: Accuracy, correction and right to erasure; key components in ensuring GDPR compliance
    • There is currently no practical way to remove specific data from a trained AI model while preserving the rest of its original training
    • AI models do not dynamically update their inferences based on new training data without going through a formal retraining process
    • Process of redress: a way for data subjects to register a formal complaint or request a review of an automated decision
    • Individuals conducting reviews must be knowledgeable of and competent with AI technology to know what to look for and accurately assess whether a decision should be overturned
    • Have logic already documented for how the AI algorithm works so that it is understandable
  • AI conformity assessments (CA) and the EU AI Act
    • Must be performed depending on the AI system or technology's risk to health, safety and fundamental rights of individuals
    • Apply to the use of AI in recruitment, biometric identification surveillance systems, safety components (e.g., medical devices), access to essential private and public services (e.g., creditworthiness, life insurance) and safety of critical infrastructure (e.g., energy, transport). The requirement is not just for cases where personal information is being processed
    • Are particular to high-risk AI systems and must take place before the system is put on the market, as well as over the lifecycle of the system
  • Data protection impact assessments (DPIAs) and CAs
    • CAs must have technical documentation; can supplement DPIAs in areas that are more technical or associated with risk
    • CAs can help identify and mitigate risks to fundamental rights and freedoms of individuals
  • CAs can envision harms that could result from AI; that data can be used to inform DPIAs
  • Anonymized data
    • GDPR does not apply; no longer considered personal information
    • Threshold for anonymization varies by jurisdiction and is high under GDPR legislation
    • Benefit for AI: anonymized data can be processed in vast amounts, and AI relies on large datasets to deliver promised outcomes and benefits
  • Pseudonymization
    • Helpful for protecting data, but still considered personal information, so GDPR obligations apply
    • Data can be deidentified, but its utility for AI will drop
  • Anonymized and pseudonymized data can inform and train AI
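To illustrate the distinction the cards above draw: pseudonymization replaces a direct identifier with a token that the key holder can re-link to the individual, which is why GDPR still treats the result as personal information. A minimal sketch in Python (the field names, record and key are hypothetical):

```python
import hashlib
import hmac

# Hypothetical key; in practice it must be stored separately from the data
SECRET_KEY = b"rotate-and-store-separately"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed token (HMAC-SHA256).

    The key holder can re-link tokens to individuals, so the output is
    pseudonymized, not anonymized, and GDPR obligations still apply.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39", "purchases": 12}
training_row = {**record, "email": pseudonymize(record["email"])}
# The token is deterministic, so records for the same person still link
# together across a training set, preserving utility for model training.
```

Because the token is stable, joins across datasets still work, which is what keeps the data useful for training while reducing direct exposure of identifiers.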
  • Datasets gathered by scraping digital content often constitute personal information
  • Data scraping often occurs without end-user knowledge
  • Aspects of utilization of the system are voluntary
  • Because current legislation was built without AI in mind, questions arise about whether new AI systems can truly rely on pseudonymized or anonymized data
  • Ideal outcome for AI: ensure there is a way to make systems successful and achieve goals without using personal information
  • Privacy-enhancing or privacy-enabling technologies like differential privacy, homomorphic encryption, and secure multi-party computation can be used to achieve pseudonymization and anonymization, but have limitations
  • Organizations may have to trade off costs and benefits when applying privacy-enhancing technologies to AI
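Of the privacy-enhancing technologies named above, differential privacy is the simplest to sketch: calibrated noise is added to an aggregate query so that no single individual's presence materially changes the released answer. A minimal illustration using the Laplace mechanism (the dataset and epsilon value are made up), which also shows the cost/benefit trade-off the previous card refers to:

```python
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices. Smaller epsilon = stronger privacy, noisier answer.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon) sampled as the difference of two exponentials
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 31, 45, 52, 38, 29, 61, 34]  # hypothetical dataset
noisy_over_40 = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
# The released value hides whether any single person is in the data,
# at the cost of accuracy -- the trade-off noted above.
```

The choice of epsilon is exactly the cost/benefit decision the card describes: tighter privacy budgets degrade the utility of the released statistics.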
  • If your organization processes personal information for individuals in the EU, be certain your risk management and governance teams are aware of the requirements of the GDPR and how those requirements may affect the AI programs you choose to use
  • By addressing GDPR considerations early and thoroughly, organizations can avoid significant fines and penalties, as well as the loss of trust that accompanies them
  • Product liability law
    • Economic actors who make and sell products (retailers, distributors, manufacturers) are held responsible for the harm their products may cause
    • Fault liability regimes: Must be proven that some action or inaction by the product maker caused the harm
    • Strict liability regimes: Victims don't need to prove intentional wrongdoing or fault on the part of the product maker, only that the product was defective, and that defect caused the harm
  • Challenges to proving liability and compensating for AI-induced harm
    • Difficult to attribute harm due to the autonomous, constantly evolving and changing nature of AI systems
    • AI systems are highly complex and technical in nature, and can be opaque
  • The EU has proposed two directives to address liability for AI-induced harm: the reformed Product Liability Directive and the AI Liability Directive
  • Reformed EU Product Liability Directive
    • AI, software and digital products in scope
    • Strict liability regime: The victim must prove that the damage was caused by a defective AI product, not that the maker was at fault
  • EU AI Liability Directive
    • Fault liability regime: Requires showing intentional wrongdoing, fault or negligence on the part of the product maker
    • Empowers courts to order disclosures of evidence about high-risk AI systems from providers
    • Courts can presume a causal link between noncompliance with relevant laws and AI-induced harm