European AI Act is about to come into effect

Source: 页之码IP

On March 13, 2024, the European Parliament passed the Artificial Intelligence Act (AI Act), which establishes a comprehensive regulatory framework for artificial intelligence. It is set to become landmark EU legislation.

The AI Act will become fully applicable in Europe at the end of 2026, 24 months after its publication. However, the ban on AI systems that pose unacceptable risks will apply within six months, which means it could take effect as early as 2024. In addition, the rules on general-purpose AI will start to apply in mid-2025.


What is an AI system?

  • The AI systems referred to in the AI Act are defined as machine-based systems designed to operate with varying degrees of autonomy. These systems may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infer from the inputs they receive how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
  • This broad definition makes the scope of application of the AI Act very flexible: it covers not only software products, but also any other tools, hardware or solutions that may qualify as autonomous systems capable of learning and drawing conclusions about new situations from the input data previously provided to them. In addition, the broad definition means that the AI Act applies not only to AI products themselves, but also to any other products or solutions that use AI components.

Who is it for?

  • The AI Act applies to entities that deal with AI systems, in particular:
    • Provider – an entity that develops an AI system and places it on the market or puts it into service under its own brand.
    • Deployer – an entity that uses an AI system under its own authority.
    • In addition, the AI Act also applies to other types of entities such as importers, distributors and authorized representatives.

Prohibited AI systems

The AI Act adopts a risk-based approach and sets out different requirements for different categories of AI systems, depending on the extent of their impact on individuals (the natural persons affected by a given AI system).

Prohibited AI systems pose unacceptable risks. Subject to some exceptions, prohibited AI systems include:

  • Systems that use subliminal, manipulative, or deceptive techniques to materially distort people's behavior and impair their ability to make informed decisions;
  • Social scoring systems that lead to harmful or unfavorable treatment;
  • Systems that build or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
  • Emotion recognition systems in workplaces and educational institutions;
  • Certain biometric categorization systems;
  • Certain real-time remote biometric identification systems.

High-risk AI systems

The core category of the AI Act is high-risk AI systems, i.e. systems that may pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Under the AI Act, high-risk AI systems include those covered by EU product safety legislation as well as systems used in specific areas, including but not limited to the following:

  • AI systems used in critical infrastructure;
  • AI systems for education and vocational training;
  • AI systems for employment, worker management, and access to self-employment;
  • AI systems for law enforcement;
  • Immigration, asylum and border control management systems.

The list of high-risk AI systems in the Act is not exhaustive, which means that the categories of high-risk AI systems may be supplemented in the future in line with technological developments.

In addition to high-risk AI systems, the AI Act also covers other medium- and low-risk AI systems.

Obligations for high-risk AI systems

Providers and/or deployers of high-risk AI systems will face a range of obligations, such as:

  • Implementing a risk management system throughout the life cycle of high-risk AI systems; this includes identifying and analyzing known and reasonably foreseeable risks, and taking targeted risk management measures;
  • Implementing appropriate data governance mechanisms – ensuring that data used to train, validate and test AI models is fit for the intended purpose and of appropriate quality;
  • Drafting and maintaining detailed technical documentation for AI systems;
  • Equipping the system with record-keeping capabilities to automatically record events during the operation of the AI system (see the sketch after this list);
  • Maintaining human oversight of AI systems;
  • Maintaining appropriate levels of accuracy, robustness, and cybersecurity in AI systems;
  • Implementing a quality management system;
  • Conducting impact or conformity assessments of certain categories of AI systems;
  • Conducting post-market surveillance – collecting and analyzing AI system data;
  • Registering the AI system in the EU database;
  • Providing information/transparency to individuals affected by AI systems.
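
By way of illustration of the record-keeping obligation above, the following is a minimal sketch of automatic event logging for an AI system, written in Python. The event fields, the log file name ai_system_events.log, and the classify() stub are assumptions made for this example; the AI Act does not prescribe a specific logging format.

```python
# Minimal sketch of automatic event logging ("record-keeping") for an AI system.
# The event schema and the classify() stub are illustrative assumptions only.
import hashlib
import json
import logging
from datetime import datetime, timezone

# Write one JSON event per line to an append-only log file (assumed file name).
logging.basicConfig(filename="ai_system_events.log", level=logging.INFO, format="%(message)s")

def classify(text: str) -> dict:
    """Hypothetical stand-in for the AI model operated by the deployer."""
    return {"label": "low_risk", "confidence": 0.87}

def log_event(system_id: str, model_version: str, input_text: str, output: dict) -> None:
    """Record a structured event for each inference the system performs."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Hash the input so the log itself does not retain raw personal data.
        "input_sha256": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        "output": output,
    }
    logging.info(json.dumps(event))

if __name__ == "__main__":
    text = "Example input to the AI system"
    result = classify(text)
    log_event("demo-system", "1.4.2", text, result)
```

In practice, the retention period and exact contents of such logs would follow the provider's risk management system and the applicable data protection rules (e.g. the GDPR), not this sketch.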

General-purpose AI

In addition to standard AI systems, the AI Act defines a separate category: general-purpose AI (GPAI). These are models and systems of a general nature that can be used in a wide range of downstream systems and applications. GPAI models can perform a variety of distinct tasks, typically learn from large amounts of data, and can be integrated into many different applications.

GPAI faces a separate set of requirements under the AI Act:

  • Drawing up detailed technical documentation of the model;
  • Putting in place a policy to comply with copyright law;
  • Providing a “sufficiently detailed” summary of the content of the training dataset;
  • Labeling AI-generated or manipulated content (a minimal sketch follows below).
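
One way to read the labeling obligation is that generated content should carry machine-readable provenance information. The snippet below is a minimal Python sketch under that assumption; the label_generated_content() helper and its field names are hypothetical, and the Act leaves the concrete marking technique (for example watermarking) to technical solutions.

```python
# Illustrative sketch: attach provenance metadata marking content as AI-generated.
# The schema and helper name are assumptions; the AI Act does not mandate these fields.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Bundle generated text with a machine-readable AI-generation notice."""
    record = {
        "content": text,
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, ensure_ascii=False)

print(label_generated_content("Draft product description ...", "example-gpai-model"))
```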

Sanctions for Violations

The AI Act imposes severe sanctions for violations, depending on the type of violation and the size of the company. Breaching the AI Act's prohibitions may result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, while failure to fulfil obligations relating to high-risk AI systems may result in fines of up to €15 million or 3% of global annual turnover.
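
For illustration, the small Python sketch below applies the "whichever is higher" rule to these thresholds; the turnover figures are invented for the example.

```python
# Illustrative calculation of the maximum possible fine under the AI Act's
# "whichever is higher" rule; the turnover figures below are made up.

def max_fine(global_annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

# Violation of a prohibition: up to EUR 35 million or 7% of global annual turnover.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 7% of EUR 2 billion = EUR 140 million, so the turnover-based cap applies
# Breach of high-risk obligations: up to EUR 15 million or 3% of global annual turnover.
print(max_fine(200_000_000, 15_000_000, 0.03))    # 3% of EUR 200 million = EUR 6 million, so the EUR 15 million fixed cap applies
```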

Wider legal framework

Although the AI Act focuses on high-risk applications and GPAI, it is important not to lose sight of the broader implementation of AI. Companies should thoroughly evaluate the deployment of all AI technologies in their business and review technologies that were already in operation before the AI Act was passed, ensuring compliance not only with the AI Act but also with the existing legal framework. This includes privacy and data protection issues under the GDPR, as well as compliance with intellectual property, consumer protection, and anti-discrimination laws applicable to various AI systems.
