Depending on your organization's risk tolerance, you may choose to adopt requirements you are not legally subject to. Even if you are not covered by the EU AI Act, for example, it offers many best practices you can draw from.
EU AI Act
The world's first comprehensive regulation of AI, expected to have a global impact
EU AI Act
Far-reaching provisions for organizations using, designing or deploying AI systems
Applies to all systems placed in the EU market or used in the EU, including those from providers who are not located in the EU
Regulates AI to address potential harms and ensure AI systems reflect EU values and fundamental rights
Aims to ensure legal certainty to promote investment and innovation
Aligns organizations' use of AI with EU core values and rights of individuals
Providers
Develop AI systems, sell AI systems for use or make them available through other means
Deployers
Organizations, individuals or other entities that use AI systems for specific purposes or goals
Exemptions to the EU AI Act
AI used in a military context, including national security and defense
AI used in research and development, including in the private sector
AI used by public authorities in third countries and international organizations under international agreements for law enforcement or judicial cooperation
AI used by people for non-professional reasons
Open-source AI (in some cases)
Risk levels under the EU AI Act
Unacceptable risk (prohibited)
High risk
Limited risk
Minimal or no risk
Unacceptable risk (prohibited)
Social credit scoring systems
Emotion-recognition systems in the workplace and educational institutions (except for medical or safety reasons)
AI that exploits a person's vulnerabilities
Behavioral manipulation; circumventing a person's free will
Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
Biometric categorization systems using sensitive characteristics
Specific predictive policing applications
Real-time biometric identification by law enforcement in public spaces, except certain limited, pre-authorized situations
High-risk AI systems
Require compliance with specific articles in the Act, including:
Implementing a risk management system
Managing data and data governance
Monitoring performance and safety
Registering in a public EU database
Developing the system to allow for human oversight
Requirements for deployers, importers and distributors of high-risk AI systems
Complete a fundamental rights impact assessment before putting the system into use
Verify compliance with the Act
Communicate with the provider and regulator as required
Ensure the conformity assessment has been completed
Monitor the system and suspend use if serious issues occur
Maintain logs
Assign human oversight
Cooperate with regulators
Limited risk AI systems
Primary compliance focuses on transparency, such as informing people they are interacting with an AI system and disclosing and labeling deepfake content
Minimal or no risk AI systems include spam filters, AI-enabled video games, and inventory management systems
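The four risk tiers above can be sketched as a simple lookup. This is a hypothetical illustration only: the system names and obligation summaries are invented for the example, and real classification requires case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal or no risk"

# Hypothetical examples drawn from the tiers described above.
EXAMPLE_SYSTEMS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraping": RiskTier.UNACCEPTABLE,
    "chatbot (must disclose AI interaction)": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

def compliance_focus(tier: RiskTier) -> str:
    """Very rough summary of the compliance focus for each tier."""
    return {
        RiskTier.UNACCEPTABLE: "use is prohibited",
        RiskTier.HIGH: "risk management, data governance, human oversight, registration",
        RiskTier.LIMITED: "transparency (disclosure, labeling)",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

The mapping is deliberately coarse; in practice a single system can trigger several regimes at once (e.g., a high-risk system that also has transparency duties).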
Governance of the EU AI Act
All relevant EU laws still apply
European AI Office and AI Board established centrally at the EU level
Sectoral regulators will enforce the Act for their sector
Providers can combine or embed Act requirements in existing oversight where possible, to prevent duplication and ease compliance
Enforcement of the EU AI Act
Highest penalties apply to prohibited AI practices: up to EUR 35 million or 7% of worldwide annual turnover for the preceding financial year, whichever is higher
Penalties for most other violations are lower, but still reach up to EUR 15 million or 3% of worldwide annual turnover for the preceding financial year, whichever is higher
More proportionate caps on fines for startups and small/medium-sized enterprises
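The "whichever is higher" rule is simple arithmetic. A minimal sketch, assuming the fine ceilings in the final Act text (EUR 35 million or 7% of turnover for prohibited practices; EUR 15 million or 3% for most other violations):

```python
def penalty_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Administrative fine cap: the higher of a fixed amount or a
    percentage of worldwide annual turnover for the preceding year."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Assumed ceilings (verify against the current official text):
PROHIBITED = (35_000_000, 0.07)
MOST_OTHER = (15_000_000, 0.03)

# For a company with EUR 1 billion in turnover, the percentage dominates:
# 7% of 1B = EUR 70M > EUR 35M; 3% of 1B = EUR 30M > EUR 15M.
penalty_cap(1_000_000_000, *PROHIBITED)
penalty_cap(1_000_000_000, *MOST_OTHER)
```

For smaller companies the fixed amount is the binding cap, which is why the Act provides more proportionate ceilings for startups and SMEs.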
The Act applies two years after it comes into force, with some exceptions for specific provisions that apply earlier (e.g., the prohibitions and GPAI obligations)
General Purpose AI (GPAI)
An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is released on the market
GPAI can be integrated into a variety of downstream systems or applications
Obligations for "all other" GPAI
Maintaining technical documentation
Making information available to downstream providers
Complying with EU copyright law
Providing summaries of training data
Obligations for GPAI "with systemic risk"
Assessing model performance
Assessing and mitigating systemic risks
Documenting and reporting serious incidents and action(s) taken
Conducting adversarial testing of the model
Ensuring security and physical protections are in place
Reporting the model's energy consumption
The EU AI Pact is an interim step in which the European Commission invites industry to voluntarily commit to complying with EU AI Act requirements before legal enforcement begins
The EU AI Pact involves industry participants taking a pledge and meeting with non-industry organizations to agree on best practices to observe in the interim
Areas to address in AI governance framework
Legal questions
Insurance
Contractual controls
Governance
Safety parameters
Japan's approach to AI governance
Published non-binding AI guidelines
Created a national AI strategy
Corporate compliance expected to align with guidelines
Includes contract guidelines for AI and data use, including model clauses
Ensures companies are operating under similar standards
Has well-developed machine-learning management guidelines in place for various sectors
Has guidelines for cloud services with specific AI applications for safety and reliability
No one-size-fits-all approach to AI regulations
Organizations must be prepared to adapt and adjust to existing and emerging AI regulation
Aspects of existing and emerging global AI regulation
Risk-based vs. rights-based
Regulatory vs. voluntary
AI, ML or both
Overarching (e.g., EU AI Act or federal laws), regional (e.g., state law), sectoral or industry regulated
Laws already in place that address AI and ML
Organizations must remain alert to regulatory requirements, both existing and emerging, where they do business
What organizations must do
Know what AI programs are in use
Identify potential risks
Have processes in place for AI governance and management
Be flexible, ready to adjust to changing requirements
Preparing for potential regulatory oversight ahead of time minimizes the need for major adjustments after programs and policies are in place. A legal advisor who stays informed about upcoming laws is critical to ensuring compliance is achievable.
Many frameworks are available to help your organization design an AI risk model appropriate for its intended AI use. Familiarity with these frameworks allows AI governance professionals to select aspects from each that appropriately address their organization's AI use.