On March 13, the European Parliament voted to adopt the Artificial Intelligence Act (Act), nearly three years after it was initially proposed. Once the European Council adopts the Act, and after final legislative formalities, the Act will create the world’s first comprehensive set of regulations to govern the use of artificial intelligence (AI) and will have broad effects within the EU and throughout the world.


The Act defines an “AI system” as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Notably, the definition requires an AI system to have the ability to “infer,” which excludes from the Act’s scope systems that conduct basic data processing or that rely on modeling based on pre-defined functions.

The Act applies to a broad range of entities, including:

  • Providers and Deployers (each defined below) located in a third country, where the output produced by their AI system is used within the EU.
  • Providers placing AI systems on the EU market, regardless of where the Provider is located.
  • Deployers located in the EU.
  • Authorized representatives of Providers.
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.
  • Importers and distributors placing AI systems on the EU market.

“Provider” means a natural or legal person that: (1) develops an AI system or a general-purpose AI model (GPAI model), or that has an AI system or a GPAI model developed; and (2) places the AI system or GPAI model on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

“Deployer” means any natural or legal person using an AI system, except where the AI system is used in the course of a personal, non-professional activity.


Exclusions

The Act does not apply to AI systems, or their output, that are:

  • Used exclusively for military, defense, or national security purposes.
  • Put into service for the sole purpose of scientific research and development.
  • Released under free and open source licenses, unless placed on the EU market as a high-risk system, as described further below.
  • Used for research, testing, and development of the applicable AI system, but only before it is placed on the market or put into service.

Risk-Based Categorization and Obligations

Under the Act, AI systems are divided into different categories based on the inherent risks of their use. Businesses will be responsible for conducting a risk assessment to determine under which level of risk their provision or deployment of an AI system falls. Each risk category imposes different obligations on Providers and Deployers, as follows:

Unacceptable Risk/Prohibited

Description: AI systems that pose an unacceptable risk because they contradict EU values of respect for human dignity, freedom, equality, democracy, and other fundamental rights are outright prohibited, including AI systems used for:

  • Cognitive behavioral manipulation of people or specific vulnerable groups.
  • Social scoring: classifying people based on behavior, socioeconomic status, or personal characteristics.
  • Biometric categorization based on sensitive characteristics, such as race, political opinions, religious beliefs, or sexual orientation.
  • Systems that assess or predict the risk that a natural person will commit a criminal offense based solely on profiling of that person.
  • Real-time remote biometric identification systems, such as facial recognition, in publicly accessible spaces (subject to narrow law-enforcement exceptions).

High Risk

Description: AI systems that pose a significant risk to the safety or fundamental rights of EU citizens are considered to be high risk, which could include AI systems used:

  • For the management and operation of critical infrastructure.
  • To recruit for employment or otherwise make decisions affecting terms of work-related relationships, promotion, and termination.
  • As a safety component of a product that is subject to EU harmonization legislation, including medical devices and motor vehicles.
  • For evaluating eligibility for essential public assistance benefits and services, including healthcare.
  • In the administration of justice and democratic process, including use by a judicial authority in researching and interpreting facts and the law.

Provider Obligations: The Act imposes numerous obligations on the Providers of high-risk AI systems, including:

  • Maintaining appropriate technical documentation and records, including logs generated by the AI system, for at least six months.
  • Conducting a conformity assessment before placing the system on the market.
  • Designing the AI system in such a way that it can be effectively overseen by natural persons.
  • Ensuring that the operation of the AI system is sufficiently transparent to enable Deployers to interpret the output and use it appropriately.
  • Reporting to authorities any serious incident or malfunction that leads to a breach of fundamental rights.
  • Cooperating with authorities or the AI Office, which is a new governing body established under the Act and is responsible for the enforcement of the Act’s provisions.

Deployer Obligations: In addition, there are specific obligations on Deployers of high-risk AI systems, which include:

  • Implementing human oversight of the use of the AI system.
  • Ensuring that data input into the AI system is relevant to its use.
  • Informing the Provider of any serious incidents.
  • Cooperating with relevant authorities.
  • Complying with their obligation to carry out a data protection impact assessment under the General Data Protection Regulation (GDPR).

Limited Risk

Description: AI systems that pose only limited risk to users are required to comply with minimal transparency requirements that would allow the users to make informed decisions about whether or not they want to continue using the system.

Provider Obligations: Users must be made aware when they are interacting with a limited-risk AI system or with content it generates, including AI-generated text and deepfakes.

Deployer Obligations: None.

Minimal Risk

Description: AI systems that pose minimal to no risk to the fundamental rights of users are not subject to any additional requirements under the Act. This includes systems such as spam filters.

Provider and Deployer Obligations: None.

GPAI Models

The Act includes additional, specific requirements for “general-purpose AI models” (GPAI models), which the Act defines as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.” This definition covers models such as the GPT series underlying ChatGPT.

Providers of GPAI models are required to comply with certain transparency requirements, including:

  • Disclosing that content was generated by AI.
  • Designing the GPAI model to prevent it from generating illegal content.
  • Publishing summaries of the copyrighted data that was used for training the model.

In addition, Providers of GPAI models that pose a systemic risk of negative effects on public health, safety, security, fundamental rights, and society as a whole are also required to:

  • Perform model evaluations, including adversarial testing, to identify and mitigate systemic risk.
  • Assess and mitigate possible systemic risks, including their sources.
  • Provide technical documentation of the model, including training and testing processes, to the AI Office upon request.
  • Document and report serious incidents and corrective measures to the AI Office and other applicable authorities.
  • Ensure an adequate level of cybersecurity and physical protection for the GPAI model.


Penalties

The Act imposes penalties for non-compliance, including:

  • Up to the greater of €35 million or 7% of total worldwide annual turnover for non-compliance with its provisions relating to prohibited AI.
  • Up to the greater of €15 million or 3% of total worldwide annual turnover for non-compliance with most other obligations under the Act.
  • Up to the greater of €7.5 million or 1% of total worldwide annual turnover for the supply of incorrect or misleading information to authorities.
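The "greater of" structure of these caps can be illustrated with a short, purely hypothetical calculation (the function name and the sample turnover figure below are illustrative, not part of the Act):

```python
def max_fine(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Return the maximum administrative fine under the Act's cap structure:
    the greater of a flat amount or a percentage of total worldwide
    annual turnover."""
    return max(flat_cap_eur, pct * turnover_eur)

# For a hypothetical company with €1 billion in worldwide annual turnover:
prohibited_ai = max_fine(1_000_000_000, 35_000_000, 0.07)  # €70 million
other_duties  = max_fine(1_000_000_000, 15_000_000, 0.03)  # €30 million
misleading    = max_fine(1_000_000_000, 7_500_000, 0.01)   # €10 million
```

For smaller companies the flat amount governs: the same 7% tier applied to €300 million of turnover yields €21 million, so the €35 million floor is the operative cap.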

Timeline of Implementation

Although the Act will likely enter into force within the next few months, its provisions will be implemented over a multi-year period. Key implementation dates are as follows:

  • Six months after the Act goes into force: Prohibitions come into effect.
  • Twelve months after the Act goes into force: Requirements for GPAI models come into effect; however, GPAI models already on the market as of this date will have an additional 24 months to comply.
  • Twenty-four months after the Act goes into force: Requirements for most high-risk AI systems come into effect. Transparency requirements for limited-risk AI systems will come into effect as well.
  • Thirty-six months after the Act goes into force: Requirements for high-risk AI systems subject to EU harmonization legislation come into effect.
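Because each milestone is fixed relative to the (not yet known) entry-into-force date, the staggered deadlines above can be sketched as simple date arithmetic. The entry-into-force date used below is a placeholder assumption, not an announced date:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add a number of calendar months to a date (day-of-month 1 avoids
    end-of-month clamping issues)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

# Placeholder entry-into-force date; the actual date was not yet fixed
# when this alert was written.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions apply": add_months(entry_into_force, 6),
    "GPAI model requirements apply": add_months(entry_into_force, 12),
    "Most high-risk requirements apply": add_months(entry_into_force, 24),
    "High-risk systems under EU harmonization legislation": add_months(entry_into_force, 36),
}
```

Under that placeholder date, for example, the prohibitions would apply from February 1, 2025, and the final high-risk requirements from August 1, 2027.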

We are tracking the final versions of the Act as well as any additional guidance and codes of practice that may be issued by the European Commission over the coming months. If you have any questions on how the Act may impact your organization or how to prepare for its implementation, please contact the authors.