Module-8

  • Tort liability framework
    A coherent framework that adapts to the unique circumstances of AI and allocates responsibility among developers, deployers and users
  • We can develop systems that respect IP rights
  • Educating users about AI
    Functions and limitations of AI systems
  • Upskilling and reskilling the workforce
    To maximize the benefits of AI
  • Opt-out for non-AI alternative
    Whether this is possible
  • Certified third-party AI auditors
    Building a profession globally with consistent frameworks and standards
  • AI technologies present unconventional challenges to existing legal approaches, as current frameworks are fragmented and incomplete. At the same time, regulating AI liability is more complicated than regulating the impacts on individuals addressed by the EU AI Act and other forms of digital regulation, precisely because sophisticated and longstanding liability rules are already in place
  • AI liability regulations will most likely be based on the processes and standards of existing liability regulations and laws. Organizations should review their existing compliance and complaint-handling approaches and expand them to cover AI use
  • The EU Commission's AI Liability Directive aims to provide similar compensation for AI-related claims as for damage incurred by other products, address specific characteristics of AI such as opacity, autonomous behavior, complexity and limited predictability, and prevent companies from contracting away their liability for the products to which they contribute
  • The EU's reformed Product Liability Directive states that AI, software and digital products fall within the scope of the products to which the Directive applies, with a strict liability regime in which the burden of proof remains on the victim to prove that the damage was caused by the defective AI product
  • The U.S. has state regulations on autonomous vehicles with specific safety standards, and the FTC has provided guidance urging businesses to be transparent with consumers about AI use and how algorithms work, ensure decisions are fair, robust and empirically sound, and hold themselves accountable for compliance, ethics, fairness and non-discrimination
  • A key challenge for AI models and data licensing will be specifying who owns the data. Protecting intellectual property (IP) rights will be critical and must be addressed when creating an AI model, especially when using third-party AI programs and processes
  • Data licensing terms are used to designate certain model components as trade secrets, protect model components by limiting the right to use them and designating them as confidential, include assignment rights in model evolutions, determine license and use rights, and establish liability and indemnification
  • Typical exceptions to the IP infringement indemnity in traditional software or technology licensing agreements do not work well for AI model licensing because modifications and combinations occur with the model by its design
  • Regarding IP rights for AI systems, a legislative framework that applies existing protections to new AI contexts is necessary. The emerging solution will most likely be to implement a distinct system of protection for the creations made by AI systems, a system in which the rights holder could be either the creator of the AI system or its user, depending on certain criteria
  • AI is affecting intellectual property in areas like copyrights for outputs generated by AI systems, data scraping, and the use of AI systems to generate trademarks. The U.S. Court of Appeals for the Federal Circuit recently determined that ONLY humans can be named as inventors on a patent
  • The impact of AI on employment sparks concerns about job displacement and automation, but AI technologies also have the potential to enhance and complement human capabilities. Certain types of jobs are at a higher risk of automation than others, depending on the industry
  • With AI increasingly automating routine tasks, workers need to acquire new skills that complement and enhance these technologies. Upskilling, social safety nets and worker protections are necessary for an equitable future of work. Individuals must engage in learning and continuous skill development, and organizations must prioritize reskilling efforts
  • While organizations use AI to help them organize personal data and facilitate many business functions, there should still be human oversight for some of these functions. However, this does not mean that individuals can require a business to have an alternative option for every automated function
  • There is no established auditing framework in place detailing AI subprocesses, and auditors are challenged with how to perform audits successfully when there are no widely adopted precedents for handling AI use cases. Internal auditors are under the audited entity's direct subordination, which can influence the scope of the internal audit and may not provide public accountability
  • At the legislative level, the proposed Algorithmic Accountability Act in the U.S. calls for first-party audits that organizations will conduct on their own, and similar audit provisions are found in the GDPR
  • Certified third-party auditors
    Auditors who are independent from the organization being audited
  • Challenges for auditing AI systems
    • No established auditing framework in place detailing AI subprocesses
    • Auditors are challenged with how to perform audits successfully when there are no widely adopted precedents for handling AI use cases
    • Internal auditors are under the audited entity's direct subordination, which can influence the scope of the internal audit and may not provide public accountability or verifiable assertions that the AI has passed legal or ethical standards
  • Proposed audit requirements
    • Algorithmic Accountability Act in U.S. calls for first-party audits that organizations will conduct on their own
    • GDPR has similar audit provisions
    • The Federal Reserve and Office of the Comptroller of the Currency's SR 11-7 guidance on model risk management suggests that the internal audit team be separate from the team developing or using the tool subject to audit
  • Third-party audits
    Audits that necessarily look backwards and will typically exhibit a range of independence from the deploying entity
  • Potential AI auditing frameworks
    • COBIT Framework
    • Institute of Internal Auditors AI Auditing Framework
    • COSO ERM Framework
    • UN Guiding Principles Reporting Framework
    • European Commission's High-Level Expert Group on AI Ethics Guidelines for Trustworthy AI
  • Bias audits
    Audits of AI systems to ensure they work without bias or discrimination
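    As a minimal illustration (not part of the card material itself), the sketch below shows one metric a bias audit might compute, the disparate impact ratio sometimes checked against the "four-fifths rule"; the sample decisions, group labels and 0.8 threshold are assumptions for demonstration only.

```python
# Illustrative only: a tiny disparate-impact check of the kind a bias audit
# might run. The group data and the 0.8 ("four-fifths") threshold below are
# assumptions, not requirements stated in this module.

def selection_rate(outcomes: list[int]) -> float:
    """Share of favourable decisions (1 = favourable outcome)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

if __name__ == "__main__":
    protected_group = [1, 0, 0, 1, 0, 0, 0, 1]   # hypothetical decisions
    reference_group = [1, 1, 0, 1, 1, 0, 1, 1]
    ratio = disparate_impact_ratio(protected_group, reference_group)
    print(f"Disparate impact ratio: {ratio:.2f}")
    print("Flag for review" if ratio < 0.8 else "Within the illustrative threshold")
```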
  • Organizational audits
    Audits of the rules, processes, frameworks and tools within an organization to ensure ethical and responsible development of AI
  • Areas covered in the Netherlands' AI auditing framework
    • Governance and accountability
    • Model and data
    • Privacy
    • Information technology general controls
  • Audits of automated decision systems, also called algorithmic or AI systems, are currently required by the EU's Digital Services Act and the GDPR, and are required or under consideration in many U.S. laws
  • Audits are proposed to curb discrimination and disinformation and hold those who deploy algorithmic decision-making accountable for harms
  • The term "audit" and associated terms require more precision for interventions to work as intended
  • Markers/indicators that determine when an AI system should be subject to enhanced accountability
    • Automated decision-making
    • Sensitive data
    • Other factors
  • Automation in AI governance
    Enables an enterprise to institutionalize the processes, policies and compliance checks for its AI deployments and to continuously collect evidence, ensuring consistency, accuracy, timeliness, efficiency, cost effectiveness and scalability of AI deployment (a minimal illustrative sketch follows the tools list below)
  • Automation tools for AI governance
    • AI Verify
    • Model Card Regulatory Check
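    To make the "continuously collect evidence" idea concrete, here is a minimal hypothetical sketch of an automated documentation check that logs its results as evidence. It is not the API of AI Verify or the Model Card Regulatory Check; the required fields, file name and helper functions are assumptions for illustration.

```python
# Hypothetical sketch of automated evidence collection for AI governance.
# Field names, required keys and the output file are assumptions, not the
# behaviour of any named tool.

import json
from datetime import datetime, timezone

REQUIRED_FIELDS = ["model_name", "intended_use", "training_data", "evaluation", "risk_level"]

def check_model_card(card: dict) -> list[str]:
    """Return the required documentation fields missing from a model card."""
    return [field for field in REQUIRED_FIELDS if not card.get(field)]

def record_evidence(card: dict, path: str = "governance_evidence.jsonl") -> None:
    """Append a timestamped check result so compliance evidence accumulates continuously."""
    entry = {
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "model_name": card.get("model_name", "unknown"),
        "missing_fields": check_model_card(card),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    example_card = {"model_name": "credit-scoring-v2", "intended_use": "loan triage"}
    record_evidence(example_card)
    print("Missing documentation:", check_model_card(example_card))
```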
  • AI challenges traditional legal concepts, so the law needs to adapt to keep pace with new developments in AI
  • AI as a work resulting from creative activity
    Protected as software under intellectual property law through copyright and, under certain conditions, by software patents
  • AI systems as products
    Should meet certain safety and quality standards as well as reasonable expectations of an ordinary customer
  • Liability for defective AI products
    Manufacturers and users of AI systems must take reasonable care to avoid errors and prevent harm
  • Complications in determining liability can arise with custom-made AI systems that combine a manufacturer's knowledge with a client's specifications