
Europe’s AI Regulation Officially Becomes Law

by admin

The European Union has officially enacted its risk-based artificial intelligence regulations, effective from Thursday, August 1, 2024.

This initiative triggers the start of a phased series of compliance deadlines tailored to various AI developers and their projects. The majority of the regulation's stipulations are expected to be fully enforceable by mid-2026. However, an initial compliance phase kicks off in just six months, banning a small set of prohibited AI applications in specific contexts, such as the use of remote biometric surveillance by law enforcement in public areas.

The European Union’s strategy categorizes the majority of AI applications as posing low or no risk, thus exempting them from the regulatory framework altogether.

Conversely, certain AI usages are deemed high risk, including technologies like biometric and facial recognition, or AI solutions deployed in sectors such as education and recruitment. These high-risk technologies will need to be listed in an EU registry, and their creators must satisfy specific risk and quality management criteria.

AI technologies posing “limited risk,” for instance, chatbots or tools capable of generating deepfakes, will be subject to transparency mandates to safeguard users against deception.

The regulation also covers the makers of general-purpose AIs (GPAIs), again taking a risk-centric approach. Most GPAI creators are subject to minimal transparency obligations, with only a fraction facing stricter scrutiny, including risk assessment and mitigation requirements.

The exact requirements for GPAI developers under the AI Act are still under discussion. The AI Office, a body responsible for strategic oversight and fostering the AI ecosystem, has recently initiated a consultation to develop these Codes of Practice, aiming for a completion date in April 2025.

OpenAI, the entity behind the GPT language models that power ChatGPT, shared its insights through an overview of the AI Act last month. The organization expressed its intention to collaborate extensively with the EU AI Office and other pertinent entities as the regulation progresses into the implementation phase, which includes preparing technical documentation and additional resources for those applying its GPAI models.

OpenAI suggests that organizations striving for AI Act compliance should start by cataloging their in-scope AI systems: understand how GPAI and other AI technologies in use are classified, grasp the obligations attached to each use case, and determine whether your organization acts as a provider or deployer of the relevant AI systems. Given the intricacies involved, seeking legal advice is recommended for anyone with questions.
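The inventory-first approach described above can be sketched as a simple classification helper. Note this is only an illustrative sketch: the risk-tier labels, role names, and obligation buckets below are simplified assumptions drawn from the article's summary, not legal classifications under the AI Act.

```python
from dataclasses import dataclass

# Risk tiers as summarized in the article; assigning a real system to a
# tier is a legal judgement -- these labels are illustrative only.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")


@dataclass
class AISystem:
    name: str
    is_gpai: bool   # is it a general-purpose AI model?
    risk_tier: str  # one of RISK_TIERS (assumed, not authoritative)
    role: str       # "provider" or "deployer"


def obligations(system: AISystem) -> list[str]:
    """Rough sketch of the obligation buckets the article describes."""
    duties = []
    if system.risk_tier == "prohibited":
        duties.append("phase out: banned use case")
    elif system.risk_tier == "high":
        duties += ["register in EU database", "risk & quality management"]
    elif system.risk_tier == "limited":
        duties.append("transparency disclosures to users")
    if system.is_gpai:
        duties.append("GPAI transparency (stricter if systemic risk)")
    return duties


# Example inventory: names and classifications are hypothetical.
inventory = [
    AISystem("support-chatbot", is_gpai=False, risk_tier="limited", role="deployer"),
    AISystem("cv-screening", is_gpai=False, risk_tier="high", role="provider"),
]
for s in inventory:
    print(f"{s.name} ({s.role}): {obligations(s)}")
```

A real compliance exercise would of course replace the hard-coded tiers with a case-by-case legal assessment; the value of the sketch is simply that every system in scope gets an explicit classification and role before obligations are reasoned about.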

Compiled by Techarena.au.
