Overview
On July 12, 2024, the EU AI Act was published in the Official Journal of the European Union, establishing harmonized rules on artificial intelligence. The Act entered into force 20 days after publication, on August 1, 2024. Its obligations do not all apply at once: they take effect in stages for the entities it covers, including providers, deployers, importers, and distributors (collectively referred to as operators), as set out in the key dates below.
Purpose and Scope
The EU AI Act aims to create an ethical and legal framework grounded in the values of the Treaty on European Union (TEU) and the EU Charter of Fundamental Rights. It responds to the challenges posed by the growing use of AI by applying harmonized rules across sectors, without affecting existing Union laws on data protection, consumer protection, product safety, and employment. The Act regulates AI systems placed on the market or put into service in the EU, complementing those existing instruments.
Defining AI
AI, or artificial intelligence, has been described by the European Commission as referring to systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals (see the Communication from the Commission, "Artificial Intelligence for Europe", COM/2018/237 final).
AI can be software-based, like facial recognition systems, or embedded in hardware, like drones and driverless cars.
Definitions in the EU AI Act
The Act itself contains a binding legal definition. Under Article 3(1), an "AI system" is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Classification of AI According to Risk
The EU AI Act focuses on regulating high-risk AI systems. These broadly fall into two groups: AI systems that are products, or safety components of products, covered by the Union harmonisation legislation listed in Annex I, and AI systems used in the sensitive areas listed in Annex III, such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice.
Guidelines on practical implementation, with examples of high-risk and non-high-risk use cases, are to be provided within 18 months of the Act's entry into force, i.e., by February 2, 2026.
Prohibited AI Practices
The Act prohibits AI practices deemed to pose an unacceptable risk. These include, among others: manipulative or deceptive techniques that materially distort a person's behaviour; the exploitation of vulnerabilities linked to age, disability, or social or economic situation; social scoring; untargeted scraping of facial images to build facial recognition databases; emotion recognition in the workplace and in educational institutions; biometric categorisation to infer sensitive characteristics; and, subject to narrow exceptions, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
Exception from High-Risk Qualification
The AI systems listed as high-risk in Annex III may be exempt if they do not pose a significant risk of harm to health, safety, or fundamental rights, for instance because they perform only a narrow procedural or preparatory task, improve the result of a previously completed human activity, or detect decision-making patterns or deviations from them without replacing or influencing a prior human assessment. Providers relying on this exemption must document their assessment and register the AI system in the EU database before placing it on the market or putting it into service.
This exception is crucial, as many AI system providers will seek to avoid the regulatory burden of high-risk qualification. However, providers must substantiate their claims through proper documentation.
Extensive Obligations for High-Risk AI Systems
Requirements for Providers
Providers of high-risk AI systems must adhere to stringent requirements to ensure their AI systems are trustworthy, transparent, and accountable. Before these systems can be marketed, providers must, among other things: establish a risk management system; apply appropriate data governance to training, validation, and testing data; draw up technical documentation; enable automatic record-keeping (logging); provide instructions for use and other information to deployers; design the system to allow effective human oversight; and ensure an appropriate level of accuracy, robustness, and cybersecurity.
Providers must also subject these systems to a conformity assessment before placing them on the market or putting them into service and register them in a publicly accessible EU database.
Obligations for Deployers (Users) of High-Risk AI Systems
Deployers, the term that replaced "users" in earlier drafts, must also follow a set of obligations when using high-risk AI systems. Among other things, they must: use the system in accordance with its instructions for use; assign human oversight to persons with the necessary competence, training, and authority; ensure, to the extent they control it, that input data is relevant and sufficiently representative; monitor the system's operation and inform the provider of serious incidents; retain automatically generated logs; and inform affected workers and their representatives before putting a high-risk AI system into use in the workplace.
Right to Explanation of Individual Decision-Making
Affected individuals have the right to clear and meaningful explanations from deployers about the role of the AI system in decision-making and the main elements of the decision, especially if it significantly impacts their health, safety, or fundamental rights.
Though the GDPR already provides a comparable right in the context of automated decision-making, the new EU AI Act provision raises concerns. Explanations of the output of complex AI systems may be difficult to produce in a genuinely meaningful form, and both deployers and affected persons may be tempted to accept a system-generated explanation uncritically rather than scrutinising the underlying decision. Thus, there is a risk that the right to explanation could increase automation bias instead of mitigating it.
Broad Right to Complain
Any natural person or legal entity can submit complaints regarding infringements of the EU AI Act to the relevant market surveillance authority. These complaints will be considered for market surveillance activities and handled according to established procedures.
This provision grants broad standing to complain, differing from other instruments such as the GDPR, under which a complaint must relate to the processing of the complainant's own personal data.
Key Dates for Organizations
February 2, 2025 – Ban on Certain AI Systems
May 2, 2025 – Publication of Codes of Practice
August 2, 2025 – Obligations for General-Purpose AI Models
February 2, 2026 – Guidance for High-Risk AI Systems
August 2, 2026 – Compliance for Most High-Risk AI Systems (Annex III)
August 2, 2027 – Compliance for High-Risk AI Systems Embedded in Regulated Products (Annex I)
August 2, 2029 – First Review of the EU AI Act
December 31, 2030 – Compliance Deadline for AI Components of Large-Scale EU IT Systems (Annex X)
PETERKA & PARTNERS Romania remains at your disposal to provide more information and/or related legal assistance connected to this topic.
***
No information contained in this article should be considered or interpreted in any manner as legal advice and/or the provision of legal services. This article has been prepared for the purposes of general information only. PETERKA & PARTNERS does not accept any responsibility for any omission and/or action undertaken by you and/or by any third party on the basis of the information contained herein.