On 14 June 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. The vote marks a significant milestone toward the regulation of AI within the European Union, as it sets the baseline for the inter-institutional negotiations discussed further below.

The proposed AI Act follows a risk-based approach, banning AI applications that pose an unacceptable level of risk and imposing a strict regime on high-risk use cases. It also establishes obligations for providers and deployers of AI systems, tailored to the level of risk posed by the AI.

The European Parliament expanded the initial list of AI systems posing an unacceptable level of risk to include bans on intrusive and discriminatory uses of AI, such as: real-time remote biometric identification systems in publicly accessible spaces; biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion and political orientation); predictive policing systems (based on profiling, location or past criminal behavior); emotion recognition systems in law enforcement, border management, the workplace and educational institutions; and untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage to create facial recognition databases.

Providers of ‘foundation models’ – i.e., AI models trained on large amounts of unlabeled data that can be adapted to a wide range of applications – will have to assess and mitigate possible risks to health, safety or fundamental rights and register their models in an EU database before releasing them on the EU market. In addition, generative AI systems based on such models will have to comply with a series of transparency requirements, including an obligation to disclose when content is AI-generated and to ensure safeguards against the generation of deep fake content.

On the other hand, the European Parliament’s position adds exemptions for research activities and for AI components provided under open-source licenses, and requires each EU member state to establish at least one regulatory sandbox – a controlled environment set up by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before they are placed on the market or put into service, pursuant to a specific plan and under regulatory supervision.

As far as citizens’ rights are concerned, the European Parliament’s position recognizes the right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect citizens’ fundamental rights.

Following the European Parliament’s adoption of its position on the AI Act, inter-institutional negotiations – the so-called trilogue – will take place between the Parliament, the Council of the European Union, which represents member states’ governments, and the European Commission. The negotiations will kick off on 21 June 2023, with a deal expected to be reached by November 2023.

Expected to enter into force no later than the beginning of 2024, the AI Act will not only have extraterritorial reach – applying to providers and users outside the EU when the output produced by the system is used within the EU – but will also provide for fines of up to 40 million euros or up to 7% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.

If you want to be kept informed about the ever-evolving legal developments in AI, please visit Cooley’s dedicated AI page.

Authors

Patrick Van Eecke

Enrique Gallego Capdevila

Posted by Cooley