This photograph taken in Mulhouse, eastern France on October 19, 2023, shows figurines next to the ChatGPT logo. (Photo by SEBASTIEN BOZON/AFP via Getty Images)
Home AI - Artificial Intelligence Stalking Survivor Takes Legal Action Against OpenAI, Alleging ChatGPT Aggravated Her Abuser’s Delusions and Dismissed Her Alerts

Stalking Survivor Takes Legal Action Against OpenAI, Alleging ChatGPT Aggravated Her Abuser’s Delusions and Dismissed Her Alerts

by admin

A recent lawsuit in California reveals a troubling case involving a 53-year-old Silicon Valley entrepreneur who, after extensive interactions with ChatGPT, became convinced he had invented a cure for sleep apnea and believed that powerful groups were surveilling him. Allegedly, he used the AI tool to stalk and harass his ex-girlfriend, referred to as Jane Doe in the lawsuit.

Doe is now pursuing legal action against OpenAI, accusing the company of facilitating her harassment despite being warned of the user’s threats. The lawsuit details how OpenAI’s internal systems flagged the user for potentially dangerous activity, categorising it as “mass casualty weapons” use. Doe claims that despite multiple warnings, OpenAI failed to act adequately, contributing to her ordeal.

In her suit, Doe is seeking punitive damages and has requested a temporary restraining order to compel OpenAI to halt the user’s access to ChatGPT and alert her of any attempts to engage with the platform. While OpenAI has suspended the user’s account, Doe’s lawyers state that the company has not complied with the full extent of her requests, such as retaining the user’s chat logs for review.

The growing concerns about the dangers of AI technologies are highlighted by this case, alongside other legal actions against OpenAI, including cases involving individuals who harmed themselves after engaging with ChatGPT. The firm Edelson PC, representing Doe, has previously been involved in similar lawsuits, warning that AI can induce serious psychological effects.

The lawsuit coincides with OpenAI’s backing of a legislative bill aimed at limiting AI companies’ liability, which raises further questions regarding corporate responsibility in the face of potential harm caused by their technologies.

The case unfolds amid alarming incidents where AI-systems users exhibited dangerous behaviours. Reports surfaced that OpenAI had reinstated the troublesome user’s account after it had been flagged, without fully averting the risk he posed to others, including the stalking of Doe. The lawsuit indicates that the user, after becoming fixated on ChatGPT’s content, began to use it not only to rationalise his actions but also to produce false psychological reports that he then disseminated among Doe’s acquaintances.

Doe has expressed profound distress regarding her situation, stating she has lived in fear for the past seven months due to the user’s actions, enabled by the AI platform. Despite reporting the issue, she received no adequate response from OpenAI, leading to ongoing harassment, including threats. The user ultimately faced arrest and was charged with multiple felonies, but concerns remain as he has been deemed incompetent to stand trial yet is expected to be released soon.

Edelson, representing Doe, has urged OpenAI to be transparent about their safety protocols and has emphasised the need for the company to put human safety above commercial interests. This case underscores the urgency for AI companies to address potential risks posed by their technologies effectively.

Fanpage: TechArena.au
Watch more about AI – Artificial Intelligence

You may also like

About Us

Get the latest tech news, reviews, and analysis on AI, crypto, security, startups, apps, fintech, gadgets, hardware, venture capital, and more.

Latest Articles