X, the Elon Musk-owned social network, has come under fire for using European Union users’ data to train its AI models without obtaining their permission.
Late last month, an observant user discovered that X had quietly begun using its regional users’ posts to train its Grok AI chatbot. The discovery caught the Irish Data Protection Commission (DPC), the body that oversees X’s compliance with the General Data Protection Regulation (GDPR) in the EU, by surprise.
The GDPR, which allows penalties of up to 4% of a company’s global annual turnover for confirmed breaches, requires a valid legal basis for any use of personal data. X now faces nine separate complaints, filed in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland, and Spain, accusing it of processing European citizens’ social media posts to train its AI without first securing user consent.
Max Schrems, founder of the privacy advocacy group noyb, which is backing the complaints, said in a statement: “The DPC has shown a history of both inefficient and selective enforcement. Our goal is for X to be fully compliant with EU regulations, necessitating user consent for this kind of data usage at the very least.”
In response, the DPC has initiated proceedings in the Irish High Court, seeking an injunction to stop X from using the data for AI training. noyb argues that the DPC’s action falls short, however, pointing out that X offers users no mechanism to request deletion of data that has already been ingested. noyb has therefore filed GDPR complaints in Ireland and seven other jurisdictions.
The complaints assert that X lacks a valid legal basis for using the data of approximately 60 million people in the EU to train AI systems without their explicit consent. X appears to be relying on “legitimate interest” to justify the processing, but privacy specialists contend that this kind of data use requires consent from the individuals involved.
“Direct interaction with users necessitates straightforward consent mechanisms, such as a simple yes/no option before their data is utilized. Given its implementation for various other operations, adopting such a process for AI training is wholly feasible,” Schrems said.
Meta recently paused a similar plan to use user data for AI development after regulators intervened, following GDPR complaints that noyb also backed.
X, by contrast, quietly helped itself to user data for AI training without notifying anyone, and the practice went unnoticed for weeks.
The DPC has documented that X processed Europeans’ data for AI training between May 7 and August 1.
X eventually added an opt-out setting to its web interface in late July. Before then, users had no way to prevent their data from being used for AI training, an option that is hard to exercise when users are unaware the processing is happening at all.
This matters because the GDPR is designed to protect Europeans from unsanctioned uses of their personal data that could affect their rights and freedoms.
In challenging X’s chosen legal basis, noyb cites a ruling last summer by Europe’s highest court in a competition case over Meta’s use of data for ad targeting, in which the judges held that legitimate interest was not a valid basis for that use-case and stressed that user consent was required.
noyb also emphasizes that providers of generative AI systems often claim they cannot meet other fundamental GDPR obligations, such as the right to erasure or the right to access one’s data, a concern likewise raised in ongoing GDPR cases against OpenAI’s ChatGPT.
Compiled by Techarena.au.