OpenAI is confronting a significant privacy challenge in Europe linked to its popular AI chatbot, ChatGPT, which has been noted for generating inaccurate information—termed “hallucinations.” Advocacy group Noyb is backing a complaint lodged by a Norwegian man, distraught after discovering that ChatGPT falsely claimed he was convicted of murdering his children. This incident raises pressing concerns for regulators, especially as it connects to broader issues of data protection under the EU’s General Data Protection Regulation (GDPR).
Previously, similar complaints have highlighted ChatGPT’s propensity for generating incorrect personal data, ranging from inaccuracies in birth dates to erroneous biographical details. A critical issue is OpenAI’s lack of mechanisms allowing individuals to amend incorrect information that the AI may produce. While the company typically offers to block false prompts, the GDPR mandates a right to rectification, underlining the necessity for data accuracy.
Noyb’s legal expert, Joakim Söderberg, emphasised that the GDPR stipulates the need for accuracy in personal data, arguing that disclaimers about potential mistakes do not sufficiently safeguard users’ rights. Confirmed violations of the GDPR can lead to severe penalties, potentially impacting OpenAI’s bottom line.
A prior GDPR intervention by Italy’s data watchdog temporarily halted ChatGPT’s access in the country, prompting OpenAI to revise the information it provides to users. Since that incident, European privacy regulators have been cautiously approaching generative AI, advocating for thoughtful consideration rather than immediate bans.
The complaint against OpenAI concerns hallucinations about an individual, Arve Hjalmar Holmen, falsely linking him to serious criminal acts. Although some details in ChatGPT’s response were accurate, such as Holmen’s number of children and hometown, the broader implications of such harmful misinformation are alarming. Noyb made clear that such fabrications violate EU data protection law, arguing that disclaimers cannot absolve OpenAI of responsibility for producing damaging inaccuracies in the first place.
Noyb has also cited other instances where ChatGPT generated false, damaging claims about individuals, indicating that Holmen’s case is not an isolated incident. Following a recent model update, improvements have been noted in the chatbot’s performance, with it now using online searches for information, reducing the likelihood of producing incorrect outputs about people.
Despite these changes, both Noyb and Holmen remain vigilant, fearing that erroneous data may still linger within the AI model. Noyb stresses that simply adding a disclaimer does not negate legal obligations to ensure compliance with the GDPR. The ongoing complaint has been lodged with Norway’s data protection authority, with hopes that it will address OpenAI’s accountability in creating and managing these AI-generated narratives.
As regulatory scrutiny continues, the outcome of this case could have far-reaching implications for how AI tools handle personal data and misinformation, framing the responsibilities of AI developers within the evolving landscape of data privacy law.
