
EU Enforces Ban on AI Systems Deemed to Present ‘Unacceptable Risk’

by admin

As of Sunday, regulators in the European Union have the authority to ban AI systems they deem to present an “unacceptable risk” of harm.

February 2 marks the first compliance deadline for the EU’s AI Act, the comprehensive regulatory framework for artificial intelligence that the European Parliament approved last March after years of development. The Act entered into force on August 1, and this is the first of its compliance deadlines to take effect.

The details are set out in Article 5, which is written broadly to cover the many scenarios in which AI may be deployed and interact with individuals, from consumer applications to physical environments.

The EU’s framework classifies AI systems into four risk categories: (1) Minimal risk (e.g., email spam filters) faces no regulatory oversight; (2) Limited risk, including customer service chatbots, is subject to light-touch regulation; (3) High risk, such as AI used for healthcare recommendations, faces stringent oversight; and (4) Unacceptable risk—the focus of this month’s compliance deadline—is banned outright.

Some activities deemed unacceptable include:

  • AI employed for social scoring (like generating risk assessments based on an individual’s actions).
  • AI designed to covertly or deceitfully influence an individual’s decisions.
  • AI that takes advantage of vulnerabilities tied to age, disability, or economic status.
  • AI that attempts to forecast criminal behavior based on physical appearance.
  • AI utilizing biometrics to deduce personal traits, such as sexual orientation.
  • AI that gathers “real-time” biometric data in public for law enforcement purposes.
  • AI that seeks to interpret individuals’ emotions in educational or occupational settings.
  • AI that generates or enhances facial recognition datasets by scraping images from the internet or security cameras.

Companies found deploying any of these prohibited AI applications in the EU will face fines regardless of where they are headquartered: up to €35 million (around $36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
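The “whichever is greater” rule means the maximum exposure scales with company size. As a minimal sketch (the function name and inputs are illustrative, not from the Act’s text), the cap works like this:

```python
FINE_CAP_EUR = 35_000_000  # fixed cap for prohibited-practice violations
REVENUE_SHARE = 0.07       # 7% of prior-year annual revenue

def max_fine(annual_revenue_eur: float) -> float:
    """Return the maximum possible fine: the higher of the fixed
    €35M cap or 7% of the prior year's annual revenue."""
    return max(FINE_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

# A company with €1 billion in revenue: 7% (€70M) exceeds the €35M cap.
print(max_fine(1_000_000_000))  # 70000000.0

# A company with €100 million in revenue: the €35M cap applies instead.
print(max_fine(100_000_000))    # 35000000.0
```

In other words, the fixed cap acts as a floor on the maximum penalty; for any company with revenue above €500 million, the 7% figure dominates.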

However, as noted by Rob Sumroy, head of technology at the British law firm Slaughter and May, these fines won’t be enforced immediately.

“Organizations need to be compliant by February 2, but the next significant deadline is in August,” Sumroy explained in an interview with TechCrunch. “By then, we’ll know which authorities are responsible, and the fines and enforcement measures will be active.”

Initial Commitments

The February 2 deadline can be viewed as somewhat procedural.

In September, over 100 organizations signed the EU AI Pact, a voluntary commitment to adopt the principles of the AI Act prior to its practical application. Signatories, including Amazon, Google, and OpenAI, agreed to identify AI systems that are likely to be classified as high-risk under the Act.

Some technology leaders, notably Meta and Apple, chose not to join the Pact. French AI startup Mistral, a vocal critic of the AI Act, also refrained from signing.

However, this does not imply that these companies—Apple, Meta, Mistral, and others who abstained from the Pact—will evade their responsibilities, including adhering to the ban on systems deemed excessively risky. As Sumroy points out, given the specific prohibited use cases outlined, it’s likely most firms will not engage in those practices regardless.

“For organizations, a primary concern surrounding the EU AI Act is the timely delivery of clear guidelines, standards, and codes of conduct, which are essential for compliance clarity,” Sumroy stated. “So far, the working groups are meeting their deadlines for developing the code of conduct for developers.”

Potential Exceptions

There are exceptions to some of the prohibitions set forth by the AI Act.

The Act allows law enforcement to utilize specific systems that gather biometrics in public settings if such systems aid in conducting a “targeted search” for a kidnapping victim or serve to mitigate a “specific, substantial, and imminent” threat to life. This exception necessitates approval from the relevant governing authority, and the Act stipulates that law enforcement cannot make decisions that have an “adverse legal effect” on an individual solely based on the outcomes produced by these systems.

Additionally, exceptions are made for systems that deduce emotions in workplaces and educational institutions when there is a “medical or safety” rationale, such as systems intended for therapeutic applications.

The European Commission, the EU’s executive branch, said it planned to release additional guidelines in “early 2025,” following a stakeholder consultation in November. However, those guidelines have yet to be published.

Sumroy mentioned that it’s uncertain how other existing laws will correlate with the AI Act’s prohibitions and provisions. Clarity may not come until later this year, as the enforcement period nears.

“Organizations should keep in mind that AI regulation doesn’t exist in a vacuum,” Sumroy warned. “Other legal frameworks, including GDPR, NIS2, and DORA, will interact with the AI Act, giving rise to potential challenges—particularly with overlapping notification requirements. Grasping how these laws coexist will be as important as understanding the AI Act itself.”

Compiled by Techarena.au.