By Kris Miller

The EU AI Act: What it Means for You and Your Next Steps

Introduction

The European Union (EU) Artificial Intelligence (AI) Act is fast approaching the finish line to become the world’s first comprehensive AI regulation. The EU AI Act represents Europe’s ambition to be the preeminent global leader in AI regulation, just as it led the way in privacy regulation with the General Data Protection Regulation (GDPR). The Act adopts a risk-based approach, bans certain AI systems, provides rules for general-purpose AI, creates a new regulatory scheme, and provides penalties for violations. This article summarizes the final draft text as of early February 2024.


The Legislative Journey of the EU AI Act

On 2 February 2024, the final draft text of the AI Act was unanimously approved by the Council of the European Union (the Council). This included last-minute changes requested by larger EU member states.


The AI Act was initially proposed in April 2021. In December 2022, the Council adopted its general approach, which opened negotiations with the European Parliament. A year later, in December 2023, after three days of marathon talks, the Council and Parliament concluded an agreement on the final draft text.

A plenary vote by the Parliament is expected by the middle of April 2024, following a review of the final draft text by the Parliament’s Internal Market and Civil Liberties committees. The goal is to pass the Act before the EU elections in June 2024. Upon passage, the Act will come into full force after 24 months, with several provisions becoming effective before that date.


Core Objectives

According to the Commission, “The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.” (Commission Digital Strategy)


The AI Act is part of the EU’s Digital Decade initiative, which seeks to pursue a “human-centric, sustainable vision for digital society” that empowers both citizens and businesses. 


Internally, this initiative is characterized as a European effort to become more competitive in the global digital economy. Externally, the EU’s new digital regulations, as evidenced by application of the GDPR, can appear protectionist and punitive to non-EU technology companies that currently dominate the digital market.


Goals

The AI Act seeks to provide AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI. Concurrently, the proposal aims to reduce administrative and financial burdens for businesses, in particular small and medium-sized enterprises (SMEs).


Key Proposals in the AI Act

The AI Act will:

  • Address risks specifically created by AI applications;

  • Propose a list of high-risk applications;

  • Set clear requirements for AI systems used in high-risk applications;

  • Define specific obligations for users and providers of high-risk applications;

  • Propose a conformity assessment before the AI system is put into service or placed on the market;

  • Provide requirements regarding general purpose AI (GPAI) systems that pose systemic risk;

  • Propose enforcement after such an AI system is placed on the market;

  • Propose a governance structure at European and national level. (Commission - AI Act)

Core Provisions

All businesses should remain apprised of the following core provisions of the forthcoming AI Act. As with any new EU law, time will tell how this regulation is practically enforced. For now, it is essential to grasp the basic concepts and approach.


AI Systems

An AI system is defined as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (EU AI Act, Art. 3(1)). This definition was recently modified to align with the OECD definition of an AI system.


Scope

The EU AI Act will establish obligations for providers, deployers, importers, distributors and product manufacturers of AI systems, with a link to the EU market. 

The EU AI Act will apply to the following entities:

  • Providers that place on the EU market or put into service AI systems, or place on the EU market general-purpose AI models ("GPAI models");

  • Deployers of AI systems who have a place of establishment or are located in the EU, and to providers and deployers of AI systems in third countries if the output produced by an AI system is being used in the EU; 

  • Importers and distributors of AI systems; manufacturers that use AI systems in their products; and authorized representatives of AI providers;

  • Exceptions: the EU AI Act will not apply to (1) military AI systems, (2) AI systems used for the sole purpose of scientific research and development, and (3) free and open-source AI systems (unless they are prohibited or classified as high-risk AI systems).

Member States reserve the power to maintain or introduce regulations that are more favorable to workers regarding the protection of their rights in respect of the use of AI systems by employers.


Risk-Based Approach

The EU AI Act pursues a comprehensive, risk-based approach. The greater the risk of the AI system or application, the more stringent the compliance obligations.

[Image: The EU risk-based approach to AI regulation.]


Unacceptable Risk - Prohibited AI Systems (Article 5)

The following AI systems are expressly prohibited under Art. 5 of the EU AI Act:

  • Subliminal techniques. “AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior” in a manner that causes physical or psychological harm;

  • Group exploitation. AI systems that exploit the vulnerabilities of a specific group of people due to their age or physical or mental disability, in order to materially distort the behavior of a person in the target group in a manner that causes or is likely to cause physical or psychological harm;

  • Social scoring. AI systems used by public authorities, or on their behalf, to evaluate or classify the trustworthiness of natural persons (human beings) over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, where the resulting social score works to the detriment of the individual;

  • Biometric identification. AI systems that use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. However, law enforcement may use such biometric systems to achieve the following objectives:

      • Targeted search for specific potential victims of crime, including missing children;

      • The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a terrorist attack;

      • The detection, localisation, identification or prosecution of a perpetrator or suspect of a serious, statutorily defined criminal offence punishable by a sentence of at least three years, as determined by the Member State.

High Risk (Article 6)

Under the EU AI Act, an AI system is “high risk” if it is covered by Annex II or Annex III of the legislation. In short, an AI system is high-risk if it is a safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in Annex II (e.g., regulations on machinery, safety of toys, cableway installations, elevators and safety components, etc.), or if it is explicitly listed as high-risk in Annex III.

AI systems listed in Annex III are high-risk if they fall within the following areas:

  • Biometric identification and categorization of natural persons;

  • Management and operation of critical infrastructure;

  • Education and vocational training;

  • Employment, workers management and access to self-employment;

  • Access to and enjoyment of essential private services and public services and benefits;

  • Law enforcement, insofar as use is permitted under EU law;

  • Migration, asylum and border control management; and

  • Administration of justice and democratic processes.

The following critical requirements apply if an AI system is “high-risk”:

  • Risk management system (Article 9). A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

  • Data and data governance (Article 10). High-risk AI systems that use techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria. Training, validation and testing data sets shall be subject to appropriate data governance and management practices.

  • Technical documentation (Article 11). Technical documentation of high-risk AI systems shall be created before the system is placed on the market or put into service and shall be kept current.

  • Record-keeping (Article 12). High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems are operating. Logging capabilities shall conform to recognized standards or common specifications (a minimal sketch of such event logging follows this list).

  • Transparency and provision of information to users (Article 13). High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.

  • Human oversight (Article 14). High-risk AI systems shall be designed and developed with appropriate human-machine interface tools so that they can be effectively overseen by natural persons while the AI system is in use. 

  • Accuracy, robustness and cybersecurity (Article 15). High-risk AI systems shall be designed and developed in a way that provides an appropriate level of accuracy, robustness, and cybersecurity, and they must perform consistently in these respects throughout their lifecycle.

  • Conformity Assessment (Article 43). The required conformity assessment depends on how a high-risk AI system is classified. If a system is classified under Annex II, the conformity assessment must comply with the EU regulations specified there. If a system is classified under Annex III, the conformity assessment must follow Annex VI (Conformity Assessment Procedure Based on Internal Control) or Annex VII (Conformity Based on Assessment of Quality Management System and Assessment of Technical Documentation). Whenever a high-risk AI system is substantially modified, it must undergo a new conformity assessment.
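
To make the record-keeping obligation concrete, here is a minimal sketch of what Article 12-style automatic event logging might look like. The Act does not prescribe any particular implementation, and the system and field names below are invented for illustration.

```python
# Illustrative only: the AI Act does not prescribe a logging implementation,
# and the model/field names here are invented for the example.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="ai_events.log", level=logging.INFO)
logger = logging.getLogger("high_risk_ai_audit")

def log_inference_event(model_id: str, model_version: str,
                        input_summary: str, output_summary: str) -> str:
    """Automatically record one inference as a timestamped audit event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_summary": input_summary,    # summaries, not raw personal data
        "output_summary": output_summary,
    }
    logger.info(json.dumps(event))         # append-only log file
    return event["event_id"]

# e.g., a hypothetical credit-scoring system (an Annex III area) logging a decision
log_inference_event("credit-scorer", "1.4.2",
                    "applicant-features-hash=ab12", "decision=refer_to_human")
```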

Exemption to Article 6

An exemption to Article 6 may apply if a provider considers that an AI system referred to in Annex III is not high-risk. If the provider believes the AI system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, then the provider must document and meet at least one of the following conditions:

  • The AI system is intended to perform a narrow procedural task;

  • The AI system is intended to improve the result of a previously completed human activity;

  • The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or

  • The AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

In any event, an AI system is always considered high-risk if it performs profiling of natural persons. 
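
Taken together, Article 6 and the exemption above amount to a short decision procedure. The sketch below is an informal reading of that flow, not legal advice; each boolean input stands in for a legal determination that would require real analysis.

```python
# An informal reading of the Article 6 classification flow, for illustration
# only; each boolean parameter stands in for a legal determination.
def is_high_risk(covered_by_annex_ii: bool,
                 listed_in_annex_iii: bool,
                 performs_profiling: bool,
                 meets_exemption_condition: bool) -> bool:
    """Rough decision procedure for 'high-risk' status under the final draft."""
    if performs_profiling:
        return True   # profiling of natural persons is always high-risk
    if covered_by_annex_ii:
        return True   # safety component or product under EU harmonization law
    if listed_in_annex_iii:
        # An Annex III system escapes high-risk status only if the provider
        # documents no significant risk and at least one exemption condition
        # (narrow procedural task, improving prior human work, etc.).
        return not meets_exemption_condition
    return False
```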

[Image: Steps to assess an AI system in the EU.]


General Purpose AI (GPAI) Models and Systemic Risk

AI systems with limited risk are subject to certain transparency obligations. The final text revises Article 52 to address general-purpose artificial intelligence (GPAI) models and systemic risk.


In short, providers must ensure that AI systems that interact directly with natural persons (e.g., GPAI systems like OpenAI’s ChatGPT) are designed and developed so that people are informed that they are interacting with an AI system, unless this is obvious to a person who is “reasonably well-informed, observant and circumspect.”


Requirements

  • Transparency. GPAI systems generating synthetic audio, image, video or text content shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (an illustrative marking sketch follows this list);

  • Emotion Recognition / Biometric systems. An emotion recognition system or a biometric categorisation system must inform users how personal data are processed in accordance with EU regulations;

  • Deep Fakes. Deep fakes are defined as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful" (see Art. 3(44bl)). AI systems that generate or manipulate images, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated. An exception authorizes the use of deep fakes by law enforcement to detect, prevent, investigate and prosecute crimes. Additionally, if the content is clearly an artistic, creative, satirical, fictional or analogous work, the transparency disclosure obligation is limited such that the user is aware of the use of AI, but the notice “need not hamper the display or enjoyment of the work.” (see Art. 52(3) EU AI Act).
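
The Act does not mandate a particular marking format. As one illustration of the machine-readable marking idea in the transparency requirement above, the sketch below embeds a provenance flag in PNG metadata using Pillow; this is an assumed approach for demonstration, not an endorsed standard.

```python
# Illustration only: the Act requires machine-readable marking but does not
# prescribe a format. Here a provenance flag rides in a PNG text chunk via
# Pillow; a real deployment would likely use a signed provenance standard
# or robust watermarking instead.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Save a copy of the image carrying a machine-readable AI-provenance tag."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the tag added above (PNG images expose text chunks via .text)."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"
```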

Classification as GPAI with Systemic Risk

A GPAI model will be classified as having “systemic risk” if either of the following conditions is met:

  1. It has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or

  2. Based on a decision of the Commission, ex officio or following a qualified alert by the scientific panel, that a general-purpose AI model has capabilities or impact equivalent to those in point 1 above.

A GPAI model is presumed to have high impact capabilities when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25.
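
The presumption is simple arithmetic, as the following sketch shows (the example compute figure is invented):

```python
# The systemic-risk presumption as arithmetic; the example figure is invented.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25   # cumulative training compute, in FLOPs

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# e.g., a hypothetical training run using 2.1e25 FLOPs
print(presumed_systemic_risk(2.1e25))   # True -> notify the Commission (Art. 52b)
```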

Providers of GPAI models with systemic risk must notify the Commission within two weeks once it is known that the requirements are met (see Art. 52b). Providers may submit arguments that despite meeting the technical threshold, their GPAI system does not pose a systemic risk.

The Commission will publish a continuously updated list of GPAI models with systemic risk, while ensuring that the list does not prejudice intellectual property rights, confidential business information, or trade secrets.


Obligations for Providers of GPAI Models (Art. 52c)

Providers of GPAI models must:

  • Technical Documentation. Draft and keep current all technical documentation of the model, including its training and testing process and the results of its evaluation;

  • Share Information. Draft, keep current, and make available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems;

  • Policy on IP. Create a policy to respect Union copyright law;

  • Disclose Training Content. Draft and make publicly available a sufficiently detailed summary about the content used for training of the GPAI model (the AI Office will provide a template);

  • Cooperate with Authorities. Cooperate as necessary with the Commission and the national competent authorities.

Obligations for Providers of GPAI Models with Systemic Risk (Art. 52d)

In addition to the obligations provided above, providers of GPAI models with systemic risk must:

  • Evaluate the Model. Perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including red teaming to mitigate risk;

  • Mitigate Systemic Risk. Assess and mitigate possible systemic risks at the EU level;

  • Incident Reporting. Track, document, and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures;

  • Cybersecurity and Incident Response. Ensure an adequate level of cybersecurity protection for the GPAI model with systemic risk and the physical infrastructure of the model.

Codes of Practice (Article 52e)

The AI Office will facilitate the creation of codes of practice at the EU level. The AI Office may invite outside contributors, including providers of GPAI models, relevant national competent authorities, civil society organizations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts.


Future Proof

The perennial challenge of any piece of legislation is how well it ages. This is especially true for technology laws. 

By establishing general principles, the AI Act is expressly intended to be future proof. The Act seeks to enable rules to adapt to technological change. The intent is to ensure AI applications remain trustworthy after they have been deployed. This requires ongoing quality and risk management by providers of AI systems.


Penalties and Enforcement (Art. 71)

EU Member States are required to consider the interests of small and medium-sized enterprises (SMEs) and their economic viability when introducing penalty levels for violations of the EU AI Act. This includes consideration of start-ups. 


Article 5 Breach - €35,000,000 or 7% Worldwide Turnover

The maximum penalty for non-compliance with the prohibitions in Art. 5 of the Act is an administrative fine of up to €35 million or 7% of worldwide annual turnover in the preceding financial year, whichever is higher.


€15,000,000 or 3% Worldwide Turnover

Breaches of certain other provisions are subject to a maximum fine of €15 million or 3% of worldwide annual turnover, whichever is higher.


€7,500,000 or 1% Worldwide Turnover

The maximum penalty for supplying incorrect, incomplete or misleading information is €7.5 million or 1% of worldwide annual turnover, whichever is higher.


For SMEs and start-ups, the fines for all of the above are subject to the same maximum amounts or percentages, but whichever is lower.


There is also a penalty regime for providers of GPAI models under Article 72a. Here, providers of GPAI models may be subject to maximum fines of 3% of their annual worldwide turnover or €15 million, whichever is higher. Fines will be imposed if the Commission discovers that the provider intentionally or negligently infringes the Act, fails to comply with a request for documentation or information, or fails to provide access to the GPAI model for the purpose of conducting an evaluation. 
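
The “whichever is higher” mechanics, and the inverted “whichever is lower” rule for SMEs, can be made concrete with a small calculation. The tier figures below mirror the text above; the example turnovers are invented.

```python
# The fine ceilings above as arithmetic. Tier figures mirror the draft text;
# the example turnovers are invented. For SMEs the cap is the LOWER value.
TIERS = {
    "art5_prohibitions": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover: float, is_sme: bool = False) -> float:
    fixed_cap, pct = TIERS[tier]
    turnover_cap = pct * worldwide_turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with €800m turnover: 7% (€56m) exceeds the €35m figure, so €56m applies.
print(max_fine("art5_prohibitions", 800_000_000))
# An SME with €20m turnover: the lower of €35m and 7% (€1.4m), so €1.4m applies.
print(max_fine("art5_prohibitions", 20_000_000, is_sme=True))
```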

Natural and legal persons have the right to report instances of noncompliance. This also includes the right to request clear and meaningful explanations from the deployer of an AI system.


Next Steps in the Process

The EU AI Act will enter into force on the 20th day after publication in the EU Official Journal. The Act will become fully applicable 24 months thereafter. However, the following specific provisions will become effective sooner (a sketch computing these dates follows the list):

  • The prohibitions in Titles I and II (Art. 5) will apply six (6) months after entry into force; 

  • Codes of practice should be drafted nine (9) months after the AI Act enters into force;

  • Penalties will apply from 12 months after the Act comes into force;

  • Obligations for GPAI models will apply after 12 months if already on the market; and

  • Obligations for high-risk AI systems will apply after 36 months.
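
Because every milestone keys off the entry-into-force date, the staggered timeline is easy to compute once the Official Journal publication date is known. The sketch below uses a placeholder date, since the Act had not yet been published at the time of writing.

```python
# Staggered application dates keyed off entry into force. The publication
# date below is a placeholder: the Act had not been published when written.
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta

publication = date(2024, 6, 1)                       # placeholder date
entry_into_force = publication + timedelta(days=20)  # 20th day after publication

milestones = {
    "Art. 5 prohibitions apply": relativedelta(months=6),
    "Codes of practice drafted": relativedelta(months=9),
    "Penalties and GPAI obligations apply": relativedelta(months=12),
    "Act fully applicable": relativedelta(months=24),
    "High-risk AI system obligations apply": relativedelta(months=36),
}

for label, offset in milestones.items():
    print(f"{label}: {entry_into_force + offset}")
```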

Member States will execute the following:

  • Designate at least one notifying authority and one market surveillance authority;

  • Notify the Commission of the identity of the competent authorities and a single point of contact;

  • Make publicly available certain information about how competent authorities and the single point of contact may be contacted no later than 12 months following entry into force; and

  • Establish at least one regulatory sandbox within 24 months of the Act coming into force.

Next Steps for You

To prepare for the implementation of the EU AI Act, businesses should adopt a sound strategy and a consistent approach to the way they integrate AI tools into their processes and offerings. If you currently do business in the EU, or plan to offer goods and services within the EU that incorporate AI, you will need to develop appropriate compliance expertise and controls.


Impact Assessment and Gap Analysis

Conduct a thorough analysis of how the EU AI Act impacts your business model, particularly if you currently use or intend to use AI models to improve your internal processes and to enhance the products and services you deliver to customers. Identify gaps in current practices and areas where new compliance requirements are likely.


AI Governance Strategy

Engage your governance, risk, and compliance (GRC) team to understand how the new AI Act will impact your current business processes. A robust strategy must be aligned with your business objectives and identify areas within the business where AI will most benefit your organization’s strategic goals. It will also require full alignment with the initiatives aimed at managing personal and non-personal data assets, in compliance with existing data protection legislation. Risks should be properly identified and mitigated, ensuring adequate monitoring and supervision throughout the AI system lifecycle.


Understand Your Data

Gain a deeper understanding of the types of data you collect, process, and store, especially in relation to how that data will be used in AI systems. Understanding your data will support multiple compliance imperatives, not only under the pending AI Act, but also under the EU Data Act and the GDPR.


AI Literacy

Article 4 of the AI Act emphasizes that “providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.” (Article 4b). If you are not preparing your company and staff for future applications of AI models now, regardless of current size, you are likely losing precious time.


Think Globally

Finally, it is crucial to consider a holistic global approach. The EU AI Act is the first of its kind, but it will not be the only global law to address AI risks and promote trust. Any truly global strategy will need to accommodate the key requirements and principles of the EU AI Act, while also looking ahead to how other regulatory requirements may develop.


Conclusion

The EU AI Act will create a significant compliance burden for any company that deploys AI models reaching the EU market. Compliance will require substantial preparation and adaptation of current business processes.

If your organization does not yet have a mature data governance, risk, and compliance (GRC) program, a data protection strategy, or a documented approach to cybersecurity, then RevTek Solutions may be able to help ease your compliance burden when doing business internationally. Contact us here.
