EU Sets Global Precedent with Comprehensive AI Act for Responsible AI Use and Regulation

The European Union is set to introduce the AI Act, marking it as the world’s first extensive legislation dedicated to the regulation of artificial intelligence. This groundbreaking law aims to provide a structured framework for the development and application of AI, ensuring its benefits are harnessed safely and responsibly.

Initiated as a key component of the EU’s digital strategy, the AI Act was proposed by the European Commission in April 2021. The Act seeks to categorize AI systems based on the level of risk they present to users, with varying degrees of regulatory oversight corresponding to these risk levels. This pioneering legislation is poised to become the first of its kind globally once enacted.

The European Parliament’s primary goal for the AI legislation is to ensure that AI systems used within the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The Parliament emphasizes human oversight of AI systems to prevent harmful outcomes and advocates for a uniform, technology-neutral definition of AI that could apply to future AI systems as well.

The AI Act proposes different obligations depending on the level of risk an AI system presents. All systems will be assessed and classified according to risk, and those deemed an unacceptable risk, such as manipulative AI or social scoring systems, will be prohibited outright. Narrow exceptions may be permitted for law enforcement purposes under strict conditions.

High-risk AI systems, impacting safety or fundamental rights, will be categorized into two groups: AI used in EU-regulated product safety areas (like toys and medical devices) and AI in specific sectors requiring EU database registration, including critical infrastructure and law enforcement.

Generative AI models, like ChatGPT, will need to comply with transparency requirements: disclosing that content was AI-generated, designing models to prevent the generation of illegal content, and publishing summaries of the copyrighted data used for training.

Advanced general-purpose AI models, like GPT-4, would undergo thorough evaluations, and any serious incidents would have to be reported to the European Commission.

AI systems posing limited risk will be subject to lighter transparency requirements designed to keep users informed, such as making people aware that they are interacting with an AI system, as in the case of deepfakes.

The provisional agreement on the AI Act was reached by the European Parliament and the Council on December 9, 2023. The agreement text now awaits formal adoption by both bodies to become EU law. Before the full Parliament vote, the agreement will be reviewed by the Parliament’s internal market and civil liberties committees.
